Harnessing Explanations to Bridge AI and Humans


Workshop paper


Vivian Lai, Samuel Carton, Chenhao Tan

Cite

APA
Lai, V., Carton, S., & Tan, C. Harnessing Explanations to Bridge AI and Humans.


Chicago/Turabian
Lai, Vivian, Samuel Carton, and Chenhao Tan. Harnessing Explanations to Bridge AI and Humans, n.d.


MLA
Lai, Vivian, et al. Harnessing Explanations to Bridge AI and Humans.


BibTeX

@techreport{vivian-a,
  title = {Harnessing Explanations to Bridge AI and Humans},
  author = {Lai, Vivian and Carton, Samuel and Tan, Chenhao}
}

Abstract
Machine learning models are increasingly integrated into societally critical applications such as recidivism prediction and medical diagnosis, thanks to their superior predictive power. In these applications, however, full automation is often not desired due to ethical and legal concerns. The research community has thus ventured into developing interpretable methods that explain machine predictions. While these explanations are meant to assist humans in understanding machine predictions and thereby allow humans to make better decisions, this hypothesis is not supported in many recent studies. To improve human decision-making with AI assistance, we propose future directions for closing the gap between the efficacy of explanations and improvement in human performance.
