HI-AI@KDD24: Human-Interpretable AI Workshop @ KDD 2024
Centre de Convencions Internacional de Barcelona, Barcelona, Spain, August 26, 2024

Conference website: https://human-interpretable-ai.github.io/
Abstract registration deadline: June 14, 2024
Submission deadline: June 14, 2024
CALL FOR PAPERS FOR HI-AI@KDD24 WORKSHOP
We are excited to announce the Workshop on Human-Interpretable AI. It will be held on August 26th, 2024, in conjunction with the prestigious KDD 2024 conference in Barcelona, Spain.
In this workshop, we invite researchers working on topics related to Human-Interpretable AI (HI-AI) to submit short papers (up to six pages) describing new contributions, position papers, and summaries of papers already published at top venues in the field. The paper submission deadline is June 14th, 2024.
Submission Guidelines
Authors are invited to submit short papers, limited to six pages excluding references and an optional appendix. Submissions include new research papers presenting novel findings and/or theoretical analyses, as well as position papers aimed at starting an active discussion on topics related to HI-AI. We also welcome summaries of papers already published in leading A-ranked conferences and Q1 journals of the field. The optional appendix has no page limit, but we encourage authors to use this space sparingly. For papers presenting novel empirical results, we kindly ask authors to provide access to the code and data underpinning their work (when possible) to ensure reproducibility. All paper submissions should follow the CEUR-WS format of the HI-AI workshop, available on the workshop website.
Review Process & Proceedings
All submissions will be peer-reviewed through a double-blind process; authors must therefore ensure that their submissions are properly anonymized. We will use OpenReview to manage the submission and review process, guaranteeing that final decisions are made without any conflicts of interest.
All papers accepted for the workshop will be published on the official workshop website, ensuring they remain available and accessible beyond the duration of the conference. For authors interested in an archival version, arrangements have been made with the external editor CEUR-WS to provide this service. This is optional, however, and authors may opt out of having their paper included in these proceedings if they plan to submit their work to archival venues in the future. Furthermore, we will consider inviting extended versions of some of the top accepted papers for a special issue on the workshop's topic.
Attendance
Each accepted paper requires at least one author to be present at the workshop for a poster presentation. Note that KDD offers the option to attend only the workshop with an affordable one-day pass. Additionally, a few selected papers will be given short contributed talks during the workshop, and a best paper award will be presented during the event. You will also have the opportunity to meet leading experts in the field, who will give invited talks throughout the workshop.
Topics of interest
The following is a non-exhaustive list of possible contributions. If you believe your paper is related to interpretability but does not fit any of the following topics, please submit it anyway, and we will evaluate whether it can still be considered.
- Explainable-by-design models. Novel approaches to designing machine learning and deep learning models that are intrinsically interpretable. Papers showing novel characteristics (e.g., higher trustworthiness, robustness, or causality) of existing models, or extending them to novel domains, are also appreciated.
- Post-hoc methods for Interpretable AI. Novel approaches to post-hoc interpretable AI. These include, but are not limited to, approaches working on higher-level features such as concepts. As for explainable-by-design models, papers showing novel characteristics of existing methods or extensions to novel domains are welcome.
- Theoretical analyses of existing methods. Papers showing, from a theoretical point of view, what existing interpretable methods can achieve in terms of both explanation quality and generalization.
- Knowledge integration & Reasoning methods. Methods injecting domain knowledge or integrating expert systems and reasoning methods into deep learning models to enhance their interpretability and performance.
- Ethical AI. Papers analyzing implications of interpretable AI methods, discussing topics such as fairness, accountability, transparency, and bias mitigation in AI systems.
- Human-machine Interaction. Studies on innovative human-machine interaction systems that successfully exploit interpretable AI models' capability to provide both standard and counterfactual explanations.
- Position papers on XAI. Papers discussing possible evolutions of the XAI field or speculating on potential interpretable systems and applications, along with their implications.
- Applications in Medicine and Healthcare. Applications of interpretable AI methods in medical diagnosis, treatment planning, and healthcare decision-making. Case studies demonstrating the clinical utility of interpretable AI models are welcome.
- AI in Industry. Practical applications of interpretable AI methods in various safety-critical industrial sectors, such as transportation, finance, and retail. We welcome case studies, as well as discussions on the challenges of integrating interpretable AI technologies into existing decision-making processes.
- Legal and Regulatory analyses. Papers analyzing the legal challenges associated with interpretable AI, including compliance with data protection laws, liability issues, and existing regulatory requirements for transparent and accountable AI systems.
Organizing committee
- Gabriele Ciravegna, Researcher at Politecnico di Torino, Italy
- Mateo Espinosa Zarlenga, PhD student and Gates scholar at the University of Cambridge, UK
- Pietro Barbiero, Post-doctoral researcher at the Università della Svizzera Italiana, Switzerland
- Zohreh Shams, Senior scientist at Builder.ai, UK
- Francesco Giannini, Research fellow at CINI (National Interuniversity Consortium for Informatics), Italy
- Damien Garreau, Associate professor at Université Côte d'Azur, France
- Mateja Jamnik, Professor of Artificial Intelligence at the University of Cambridge, UK
- Tania Cerquitelli, Professor at DAUIN, Politecnico di Torino, Italy
Contact
- Workshop website: https://human-interpretable-ai.github.io/
- Submission site: https://openreview.net/group?id=KDD.org/2024/Workshop/HI-AI
- Contact email: human.interpretable.ai@gmail.com