<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
	<title>Adaptive XAI</title>
	<subtitle>Adaptive eXplainable AI - Towards Intelligent Interfaces for Tailored AI Explanations</subtitle>
	
	<link href="https://axai.trx.li/feed/" rel="self"/>
	<link href="https://axai.trx.li"/>
	<updated>2025-10-17T10:36:29Z</updated>
	<id>https://axai.trx.li</id>
	<author>
		<name>Tommaso Turchi</name>
	</author>
	
	<entry>
		<title>Accepted Papers</title>
		<link href="https://axai.trx.li/accepted-papers/"/>
		<updated>2025-10-17T10:36:29Z</updated>
		<id>https://axai.trx.li/accepted-papers/</id>
		<content type="html">&lt;p&gt;These papers will be presented at AXAI 2025, held on March 24, 2025, in Cagliari, Italy, during &lt;a href=&quot;https://iui.acm.org/2025/&quot;&gt;IUI 2025&lt;/a&gt;. The proceedings will be published in the joint IUI 2025 Workshop Proceedings through CEUR-WS.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/1.pdf&quot;&gt;&lt;strong&gt;A User-in-the-loop Digital Twin for Energy Consumption Prediction in Smart Homes&lt;/strong&gt;&lt;/a&gt;
Davide Guizzardi, Barbara Rita Barricelli, and Daniela Fogli&lt;br /&gt;
&lt;em&gt;University of Brescia&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/2.pdf&quot;&gt;&lt;strong&gt;&amp;quot;Loss in Value&amp;quot;: What it revealed about WHO an explanation serves well and WHEN&lt;/strong&gt;&lt;/a&gt;
Md Montaser Hamid, Jonathan Dodge, Andrew Anderson, and Margaret Burnett&lt;br /&gt;
&lt;em&gt;Oregon State University, Pennsylvania State University, and IBM Research&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/3.pdf&quot;&gt;&lt;strong&gt;Human Factors in Human-Feature-Integration&lt;/strong&gt;&lt;/a&gt;
Yixin Li, Lucas Lefebvre, Sonali Parbhoo, Finale Doshi-Velez, and Isaac Lage&lt;br /&gt;
&lt;em&gt;Colby College, Imperial College London, and Harvard University&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://arxiv.org/abs/2503.04343&quot;&gt;&lt;strong&gt;Talking Back - human input and explanations to interactive AI systems&lt;/strong&gt;&lt;/a&gt;
Alan Dix, Tommaso Turchi, Ben Wilson, Anna Monreale, and Matt Roach&lt;br /&gt;
&lt;em&gt;Cardiff Metropolitan University, University of Pisa, and Swansea University&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/5.pdf&quot;&gt;&lt;strong&gt;Explainable Biomedical Claim Verification with Large Language Models&lt;/strong&gt;&lt;/a&gt;
Siting Liang and Daniel Sonntag&lt;br /&gt;
&lt;em&gt;German Research Center for Artificial Intelligence&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://arxiv.org/abs/2410.07997&quot;&gt;&lt;strong&gt;APOLLO: A GPT-based tool to detect phishing emails and generate explanations that warn users&lt;/strong&gt;&lt;/a&gt;
Giuseppe Desolda, Francesco Greco, and Luca Viganò&lt;br /&gt;
&lt;em&gt;University of Bari and King&#39;s College London&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/7.pdf&quot;&gt;&lt;strong&gt;Context-dependent Explainable Daily Automations&lt;/strong&gt;&lt;/a&gt;
Simone Gallo, Sara Maenza, Andrea Mattioli, and Fabio Paternò&lt;br /&gt;
&lt;em&gt;ISTI-CNR&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/8.pdf&quot;&gt;&lt;strong&gt;XFERa: Xplainable Emotion Recognition for improving transparency and trust&lt;/strong&gt;&lt;/a&gt;
Vito Nicola Losavio, Berardina De Carolis, Nicola Macchiarulo, Corrado Loglisci, Maria Grazia Miccoli, and Giuseppe Palestra&lt;br /&gt;
&lt;em&gt;University of Bari&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/9.pdf&quot;&gt;&lt;strong&gt;Explainable Artificial Intelligence Across Various Scales of Interaction and Experience, From Wearable to Ambient&lt;/strong&gt;&lt;/a&gt;
Radu-Daniel Vatavu&lt;br /&gt;
&lt;em&gt;Stefan cel Mare University of Suceava&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/10.pdf&quot;&gt;&lt;strong&gt;Interactive Visual Exploration of Latent Spaces for Explainable AI: Bridging Concepts and Features&lt;/strong&gt;&lt;/a&gt;
Carlo Metta, Eleonora Cappuccio, and Salvatore Rinzivillo&lt;br /&gt;
&lt;em&gt;ISTI-CNR and University of Pisa&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/11.pdf&quot;&gt;&lt;strong&gt;Mitigating Misleadingness in LLM-Generated Natural Language Explanations for Recommender Systems: Ensuring Broad Truthfulness Through Factuality and Faithfulness&lt;/strong&gt;&lt;/a&gt;
Ulysse Maes, Lien Michiels, and Annelien Smets&lt;br /&gt;
&lt;em&gt;imec-SMIT, Vrije Universiteit Brussel&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/12.pdf&quot;&gt;&lt;strong&gt;Human-Centered Design for Accessible and Sustainable XAI in Healthcare&lt;/strong&gt;&lt;/a&gt;
Giovanni Arras, Tommaso Turchi, Giuseppe Prencipe, and Giuseppina Sgandurra&lt;br /&gt;
&lt;em&gt;University of Pisa&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://axai.trx.li/papers/13.pdf&quot;&gt;&lt;strong&gt;Toward a Human-Centered Metric for Evaluating Trust in Artificial Intelligence Systems&lt;/strong&gt;&lt;/a&gt;
Andrea Esposito, Giuseppe Desolda, and Rosa Lanzilotti&lt;br /&gt;
&lt;em&gt;University of Bari Aldo Moro&lt;/em&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Special Issue&lt;/h3&gt;
&lt;p&gt;Selected papers will be invited for submission, in expanded form, to a joint special issue together with invited papers from the &lt;a href=&quot;https://synergy.trx.li/&quot;&gt;SYNERGY&lt;/a&gt; workshop on &amp;quot;Designing and Building Hybrid Human–AI Systems&amp;quot;. Further instructions on formatting and the review and publication process will be provided when the invitations are sent.&lt;/p&gt;
</content>
	</entry>
</feed>