These papers will be presented at AXAI 2025, held on March 24, 2025, in Cagliari, Italy, co-located with IUI 2025. The proceedings will be published in the joint IUI 2025 Workshop Proceedings through CEUR-WS.
-
A User-in-the-loop Digital Twin for Energy Consumption Prediction in Smart Homes
Davide Guizzardi, Barbara Rita Barricelli, and Daniela Fogli
University of Brescia
-
"Loss in Value": What it revealed about WHO an explanation serves well and WHEN
Md Montaser Hamid, Jonathan Dodge, Andrew Anderson, and Margaret Burnett
Oregon State University, Pennsylvania State University, and IBM Research
-
Human Factors in Human-Feature-Integration
Yixin Li, Lucas Lefebvre, Sonali Parbhoo, Finale Doshi-Velez, and Isaac Lage
Colby College, Imperial College London, and Harvard University
-
Talking Back - human input and explanations to interactive AI systems
Alan Dix, Tommaso Turchi, Ben Wilson, Anna Monreale, and Matt Roach
Cardiff Metropolitan University, University of Pisa, and Swansea University
-
Explainable Biomedical Claim Verification with Large Language Models
Siting Liang and Daniel Sonntag
German Research Center for Artificial Intelligence
-
APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users
Giuseppe Desolda, Francesco Greco, and Luca Viganò
University of Bari and King's College London
-
Context-dependent Explainable Daily Automations
Simone Gallo, Sara Maenza, Andrea Mattioli, and Fabio Paternò
ISTI-CNR
-
XFERa: Xplainable Emotion Recognition for improving transparency and trust
Vito Nicola Losavio, Berardina De Carolis, Nicola Macchiarulo, Corrado Loglisci, Maria Grazia Miccoli, and Giuseppe Palestra
University of Bari
-
Explainable Artificial Intelligence Across Various Scales of Interaction and Experience, From Wearable to Ambient
Radu-Daniel Vatavu
Stefan cel Mare University of Suceava
-
Interactive Visual Exploration of Latent Spaces for Explainable AI: Bridging Concepts and Features
Carlo Metta, Eleonora Cappuccio, and Salvatore Rinzivillo
ISTI-CNR and University of Pisa
-
Mitigating Misleadingness in LLM-Generated Natural Language Explanations for Recommender Systems: Ensuring Broad Truthfulness Through Factuality and Faithfulness
Ulysse Maes, Lien Michiels, and Annelien Smets
imec-SMIT, Vrije Universiteit Brussel
-
Human-Centered Design for Accessible and Sustainable XAI in Healthcare
Giovanni Arras, Tommaso Turchi, Giuseppe Prencipe, and Giuseppina Sgandurra
University of Pisa
-
Toward a Human-Centered Metric for Evaluating Trust in Artificial Intelligence Systems
Andrea Esposito, Giuseppe Desolda, and Rosa Lanzilotti
University of Bari Aldo Moro
Selected papers will be invited to submit expanded versions to a joint special issue, together with papers invited from the SYNERGY workshop on "Designing and Building Hybrid Human–AI Systems". Further instructions on formatting and the review and publication process will be provided with the invitations.