Fair & Responsible AI Workshop @ CHI 2020

Trust Evolution Over Time in Explainable AI for Fake News Detection


Workshop paper


Sina Mohseni, Fan Yang, Shiva Pentyala, Mengnan Du, Yi Liu, Nic Lupfer, Xia Hu, Shuiwang Ji, Eric Ragan

Abstract
The need for interpretable and accountable intelligent systems grows as artificial intelligence (AI) becomes more prevalent in human life. We study the effects of interpretability on users' trust in an AI assistant tool designed for fake news detection. In our study, we expose participants to different types of AI and Explainable AI (XAI) assistants, measure their perceived accuracy of the algorithm, and cluster user trust changes over time into five types of trust evolution. We present quantitative results and analysis from human-subject studies and discuss our findings on how model explanations affect the evolution of user trust over time.
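To illustrate the clustering step described above, the sketch below groups per-participant trust trajectories into five clusters. The abstract does not specify the clustering procedure or data shape, so everything here is an assumption: the data is hypothetical, k-means is one plausible choice, and the baseline normalization is an illustrative design decision, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row per participant, one column per study phase,
# each value a self-reported trust rating (e.g., perceived accuracy, 0-100).
rng = np.random.default_rng(0)
trust_trajectories = rng.uniform(0, 100, size=(60, 8))

# Normalize each trajectory to its starting point so clusters capture the
# shape of trust change over time rather than each user's baseline level.
trust_changes = trust_trajectories - trust_trajectories[:, [0]]

# Group trajectories into five clusters, mirroring the paper's five types
# of trust evolution (the paper's actual procedure is not given here).
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(trust_changes)

for k in range(5):
    print(f"cluster {k}: {np.sum(labels == k)} participants")
```

Normalizing to each participant's starting rating is one way to make clusters reflect trajectory shape (e.g., rising, falling, or fluctuating trust) rather than absolute trust levels.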
