Fair & Responsible AI Workshop @ CHI 2020

Need for Organizational Performance Metrics to Support Fairness in AI


Workshop paper


Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, Hanna Wallach

Abstract
Following the announcement of dozens of AI ethics statements and high-level principles for responsible AI, technologists have begun to operationalize values such as fairness into metrics, toolkits, and checklists intended to shape AI product development. However, while individual AI practitioners may want to use such methods to develop fairer and more responsible AI products, organizational incentives often inhibit individuals from advocating for and addressing fairness issues. In this workshop paper, we present new findings from an AI fairness checklist co-design research project [6] that suggest directions and open questions for developing organizational performance metrics to support AI fairness efforts, focusing on the challenges of conceptualizing and designing fairness metrics that are both effective and legible to organizations. We intend this paper to spark discussion in the community around aligning organizational culture with responsible AI development.
