Adaptive XAI

Towards Intelligent Interfaces for Tailored AI Explanations


This page describes the IUI 2024 Workshop on Adaptive eXplainable AI.

As the integration of Artificial Intelligence into daily decision-making processes intensifies, the need for clear communication between humans and AI systems becomes crucial. The Adaptive XAI (AXAI) workshop focuses on the design and development of intelligent interfaces that can adaptively explain AI's decision-making processes and support users' engagement with them.

In line with the human-centric principles of the Future Artificial Intelligence Research (FAIR) project, this workshop seeks to explore, understand, and develop interfaces that dynamically adapt, thereby creating explanations of AI-based systems that relate to and resonate with a range of users with different explanation requirements. As AI's role in our lives becomes ever more embedded, the ways in which such systems explain themselves need to be malleable and responsive to each individual's evolving cognitive state, contextual needs and focus, and social setting.

For instance, easy-to-use and effective interaction modalities such as Visual Languages can provide users with intuitive mechanisms to interact with, adjust, and reshape AI narratives. This ensures a richer, more tailored understanding, allowing explanations to emerge in line with users' demands and the shifting contexts they find themselves in, both as individuals and as part of a group.

The Adaptive XAI workshop extends an invitation to scholars, designers, and technologists to collaboratively shape the future of human-XAI interplay.

Topics and Themes

The topics include but are not limited to:

  1. Understanding AI Decisions: Exploring methods and technologies for making AI decision-making processes transparent and comprehensible to users.
  2. Human-Centric AI Design: Focusing on the design of AI systems that prioritize human needs, preferences, and cognitive abilities in the context of understanding AI behavior.
  3. Adaptive Explanation Interfaces: Developing interfaces that can adapt their explanatory content to suit different user needs, learning styles, and situational contexts.
  4. Cognitive Needs and Situational Awareness: Addressing how AI explanations can cater to the cognitive needs of users and enhance their situational awareness in various domains.
  5. Seamless AI-HCI Integration: Discussing strategies for integrating AI capabilities with human-computer interaction in a way that is intuitive and beneficial for users.
  6. Diversity and Inclusivity in AI Explanations: Considering how AI explanations can be made accessible and useful for a diverse range of users with varying backgrounds and abilities.
  7. Ethical Considerations in AI Explanations: Delving into the ethical implications of AI explanations, including issues of transparency, accountability, and bias.
  8. Case Studies and Real-World Applications: Presenting case studies or real-world examples where adaptive AI explanations have been effectively implemented.
  9. Future Directions in Adaptive XAI: Exploring emerging trends, potential challenges, and future directions in the field of adaptive explainable AI.
  10. Cross-Disciplinary Perspectives: Encouraging discussions that bring together insights from different disciplines, such as psychology, design, computer science, and ethics, to enrich the understanding of adaptive AI explanations.

Contributing Your Work

Submissions should be between 5 and 10 pages long, following the CEUR-WS instructions for single column papers.

The deadline for submissions is January 16, 2024 AoE.

Submission website: Microsoft CMT

Additionally, selected papers will be invited for submission in expanded form to a Special Issue in the Springer journal "Personal and Ubiquitous Computing" (PAUC). The topic of the special issue will be Hybrid Human-AI Systems. Further instructions regarding formatting and the review/publication process will be provided when the invitations are sent.

Please send any comments or questions to Tommaso Turchi.


Tommaso Turchi is an Assistant Professor at the University of Pisa (Italy). His research focuses on Human-Centered AI and End-User Development. He has worked on various research projects related to the interaction with AI systems and is currently investigating the use of Design Fiction for AI-as-a-service applications in the medical field. His most recent work includes the development of a co-design toolkit to identify and address bias in ML-based collaborative decision-making domains.

Alessio Malizia is an Associate Professor at the University of Pisa (Italy). His research focuses on Human-Centered AI and Design Fictions. He is involved in several national and international projects developing novel approaches for improving scientific methods to study Human-Artificial Intelligence Interaction.

Fabio Paternò is Research Director at CNR-ISTI in Pisa (Italy). His research activity has mainly been carried out in the HCI field, with the goal of introducing computational support to improve usability, accessibility, and user experience for all in the various possible contexts of use, by proposing relevant languages, models, design spaces, tools, and applications.

Simone Borsci is an Associate Professor of Human Factors and Cognitive Ergonomics at the University of Twente (Netherlands). His research spans human factors and ergonomics, interaction with technology and artefacts, usability and accessibility studies, and user experience analysis in ubiquitous computing contexts.

Alan Chamberlain is a Senior Research Fellow at the University of Nottingham (United Kingdom). His research draws on Human-Computer Interaction, Ethnography, Action Research, Participatory Design, and User Engagement to develop networks of people who are able to involve themselves in the practices of innovation and design.

Program Committee


We would like to acknowledge the support of the PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - “FAIR - Future Artificial Intelligence Research” - Spoke 1 “Human-centered AI”, funded by the European Commission under the NextGenerationEU programme. This work was also supported by the Engineering and Physical Sciences Research Council [grant number EP/V00784X/1] UKRI Trustworthy Autonomous Systems Hub (The TAS RRI II project), [grant number EP/G065802/1] Horizon: Digital Economy Hub at the University of Nottingham (HoRRIzon III), and [grant number EP/Y009800/1] AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation (RAI UK), (RAKE Responsible Innovation Advantage in Knowledge Exchange).

We acknowledge the involvement of the STAHR Collective.