5 Great Human-Centered AI Papers from 2018

“For every generation, there is a destiny. For some, history decides. For this generation, the choice must be our own.” — Lyndon Johnson

Countering popular views that Artificial Intelligence will conquer humanity or make us irrelevant, an alternative narrative is building: the possibility of a deeply human-focused AI destiny. Human-centered AI (HAI) seeks to advance AI by leveraging our understanding of human intelligence; to create AI systems that make progress on critical challenges facing humanity; and to analyze and transform the AI we invent by understanding its impact on society.

As we enter 2019, I asked some of the leaders in AI and ML to share an HAI paper they thought was especially important or insightful from 2018. This research spans from fairness and accountability to the benefits of deep learning for healthcare. It highlights some of the key ideas of HAI in 2018 and helps lay the foundations for what I and many others (Stanford HAI) believe is an extremely important and exciting path forward for AI.

Their choices are:

Paper: Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding
In: Conference on Advances in Neural Information Processing Systems 2018
Authors: K Yi (Harvard), J Wu (MIT), C Gan (MIT, IBM Watson), A Torralba (MIT), P Kohli (DeepMind), J Tenenbaum (MIT)
Main contribution: By integrating scene parsing with program synthesis from natural language, the authors develop a system that achieves near-perfect performance on a difficult visual question answering task.
Nominator: Sam Gershman (Harvard).
Why? Humans are able to separately reason about the world, understand natural language queries, and parse visual images; they can acquire these abilities with sparsely labeled examples; and they can communicate their scene interpretations to other people. These facts are important for building human-like question-answering systems, because most existing systems lack the flexibility and sample-efficiency of humans. Yi and colleagues take a step towards closing the gap between artificial and human intelligence, by disentangling vision, language understanding, and reasoning.
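For readers who want a concrete picture of the neural-symbolic approach, here is a minimal, purely illustrative Python sketch: a hand-coded symbolic scene stands in for the output of the scene parser, a hand-coded "program" stands in for the output of the question parser, and a tiny executor answers the question. The scene, attributes, and primitives below are invented for illustration; the actual system learns both parsers from data.

```python
# Hypothetical sketch of the neural-symbolic VQA idea. In the real system a
# perception module parses the image into a symbolic scene and a language
# module parses the question into a program; here both are hand-coded.

# Symbolic scene: each object is a dict of attributes (as a scene parser might output).
scene = [
    {"shape": "cube",     "color": "red",  "size": "large"},
    {"shape": "sphere",   "color": "blue", "size": "small"},
    {"shape": "cylinder", "color": "red",  "size": "small"},
]

# Program primitives operating on sets of objects.
def filter_attr(objects, attr, value):
    """Keep objects whose attribute matches the given value."""
    return [o for o in objects if o[attr] == value]

def count(objects):
    """Return the number of objects in the current set."""
    return len(objects)

# A "program" the language parser might emit for: "How many red objects are there?"
program = [
    ("filter_attr", {"attr": "color", "value": "red"}),
    ("count", {}),
]

def execute(program, scene):
    """Run the program step by step, starting from the full scene."""
    state = scene
    for op, kwargs in program:
        if op == "filter_attr":
            state = filter_attr(state, **kwargs)
        elif op == "count":
            state = count(state)
    return state

print(execute(program, scene))  # -> 2
```

Part of the appeal of this design is that the reasoning happens in a transparent symbolic program, so intermediate steps can be inspected rather than hidden inside a single end-to-end network.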

Paper: Modeling Polypharmacy Side Effects with Graph Convolutional Networks
In: Bioinformatics 2018
Authors: Marinka Zitnik (Stanford), Monica Agrawal (Stanford), and Jure Leskovec (Stanford)
Main contribution: The authors leverage pharmacogenomic databases on interactions among proteins and between medications and proteins, and employ graph convolutional networks to capture a joint representation of protein-protein interactions, drug-protein target interactions, and side effects arising from interactions among multiple medications taken by patients. They predict clinically manifested side effects with significant improvements over baselines.
Nominator: Eric Horvitz (Director, Microsoft Research Labs)
Why? The authors demonstrate the value of learning new representations for a key healthcare challenge: automatically identifying the potential for adverse effects of interactions among multiple medications taken by patients. Their inferred representations can flag and prioritize potential side effects for follow-up analyses, and their technical developments are also promising for many other disciplines.
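To make the setup concrete, here is a toy, hypothetical sketch of the kind of multi-relational graph involved and of how a drug-drug side-effect edge could be scored from node embeddings with a per-side-effect bilinear decoder. The entity names, dimensions, and random embeddings are invented; the paper's model learns the embeddings with graph convolutions over the full biomedical graph rather than initializing them at random.

```python
# Toy illustration of the data setup: one heterogeneous graph containing
# protein-protein edges, drug-protein edges, and drug-drug edges labeled by
# side effect. A graph convolutional encoder would propagate information over
# these edges; here we only show the graph layout and a bilinear scoring step.
import numpy as np

rng = np.random.default_rng(0)

proteins = ["P1", "P2", "P3"]
drugs = ["aspirin", "warfarin"]

# Three edge types in one heterogeneous graph (illustrative, not real data).
protein_protein = [("P1", "P2"), ("P2", "P3")]
drug_protein = [("aspirin", "P1"), ("warfarin", "P3")]
drug_drug_side_effects = [("aspirin", "warfarin", "bleeding")]

dim = 8
# Stand-in node embeddings; a GCN would compute these from the graph above.
embed = {name: rng.normal(size=dim) for name in proteins + drugs}

# One matrix per side-effect type (a bilinear decoder over drug pairs).
side_effect_types = {se for _, _, se in drug_drug_side_effects}
relation = {se: rng.normal(size=(dim, dim)) for se in side_effect_types}

def score(drug_a, drug_b, side_effect):
    """Probability-like score that taking drug_a and drug_b causes the side effect."""
    logit = embed[drug_a] @ relation[side_effect] @ embed[drug_b]
    return 1.0 / (1.0 + np.exp(-logit))

print(score("aspirin", "warfarin", "bleeding"))
```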

Paper: Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making
In: CHI Conference on Human Factors in Computing Systems 2018
Authors: Michael Veale (UCL), Max Van Kleek (Oxford), Reuben Binns (Oxford)
Main contribution: Veale, Van Kleek, and Binns interviewed 27 public sector machine learning practitioners working in high-stakes domains like predictive policing and child mistreatment detection about the challenges that they face in attempting to create fair machine learning systems. Their work uncovers disconnects between the real-world challenges that arise in the public sector and those commonly presumed in the fairness, accountability, and transparency in machine learning (FAT/ML) literature.
Nominator: Jenn Wortman Vaughan (Senior Researcher, MSR)
Why? Fairness, accountability, and transparency in machine learning are hot topics these days, and hundreds of new academic papers on these topics were published this year alone. However, this research is rarely guided by an understanding of the daily challenges faced by machine learning practitioners. This paper should be required reading for anyone who wants to work on fair machine learning, and it has already had a huge impact on my own research.

Paper: The Moral Machine Experiment
In: Nature 2018
Authors: E Awad (MIT), S Dsouza (MIT), R Kim (MIT), J Schulz (Harvard), J Henrich (Harvard), A Shariff (U of British Columbia), JF Bonnefon (Université Toulouse Capitole), I Rahwan (MIT)
Main contribution: The goal of the paper is to understand people's perceptions of ethics in a modern incarnation of the classic trolley problem, one that autonomous vehicles could potentially encounter in an accident. To this end, the authors built a website, Moral Machine, and used it to collect 40 million pairwise comparisons (would you save group A or group B?) from visitors. The paper presents an analysis of these data.
Nominator: Ariel Procaccia (CMU).
Why? First, the paper positions the trolley problem as a central challenge for AI ethics. Second, the Moral Machine experiment made AI researchers realize the vast public interest and willingness to contribute preference data for compelling problems, and this has motivated several similar, ongoing projects. Third, the experiment has inspired multiple efforts to automate ethical decisions using pairwise comparisons collected from people. (Note: text revised slightly on 1/13/2019 to more precisely reflect submitted nomination.)
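As a rough illustration of the third point, the sketch below fits a simple Bradley-Terry-style preference model to a handful of invented pairwise choices, assigning each outcome a utility so that higher-utility outcomes are predicted to be chosen more often. This is one standard way to learn from pairwise comparisons, not the Moral Machine paper's own analysis, and the outcomes and responses are made up.

```python
# Minimal Bradley-Terry fit to invented pairwise choices:
#   P(winner chosen over loser) = sigmoid(u_winner - u_loser)
import numpy as np

outcomes = ["spare_child", "spare_adult", "spare_pet"]
index = {o: i for i, o in enumerate(outcomes)}

# Each record: (chosen outcome, rejected outcome) from a simulated respondent.
choices = [
    ("spare_child", "spare_adult"),
    ("spare_child", "spare_pet"),
    ("spare_adult", "spare_pet"),
    ("spare_child", "spare_pet"),
]

utilities = np.zeros(len(outcomes))
learning_rate = 0.5

# Gradient ascent on the Bradley-Terry log-likelihood.
for _ in range(200):
    grad = np.zeros_like(utilities)
    for winner, loser in choices:
        w, l = index[winner], index[loser]
        p_win = 1.0 / (1.0 + np.exp(utilities[l] - utilities[w]))
        grad[w] += 1.0 - p_win
        grad[l] -= 1.0 - p_win
    utilities += learning_rate * grad

for outcome, u in zip(outcomes, utilities):
    print(outcome, round(u, 2))
```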

Paper: Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
In: Conference on Advances in Neural Information Processing Systems 2018
Authors: H. Heidari, C. Ferrari, K. P. Gummadi, A. Krause (ETH Zurich)
Main contribution: This paper investigates the Rawlsian notion of social welfare from economics and shows how it can be cast into a computational framework. Rather than focusing solely on (equality of) benefit, the paper proposes an approach to automated decision making that takes risk and welfare considerations into account. It is based on a well-studied notion of social justice, referred to as the "Veil of Ignorance."
Nominator: Lise Getoor (UC Santa Cruz).
Why? This paper is a fantastic example of the best work in Fairness, Accountability and Transparency (FAT*) machine learning: it builds on ideas from ethics, policy, and computer science, and makes new contributions to each field. The approach highlights tradeoffs among existing fairness definitions for machine learning and leads to a computationally feasible mechanism for bounding individual-level inequality.
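A small numerical sketch of the veil-of-ignorance idea: a decision maker who does not know which individual they will end up being evaluates a policy by the expected utility of a randomly chosen individual, using a concave (risk-averse) utility. The benefit numbers and the specific utility function below are invented for illustration and are not the paper's exact welfare measures.

```python
# Hypothetical veil-of-ignorance comparison of two policies with equal average
# benefit but different inequality; a risk-averse evaluator prefers equality.
import numpy as np

# Per-individual benefit under two candidate decision policies (made-up numbers).
policy_a = np.array([5.0, 5.0, 5.0, 5.0])   # equal benefits
policy_b = np.array([9.0, 9.0, 1.0, 1.0])   # same average, unequal benefits

def veil_of_ignorance_welfare(benefits, risk_aversion=1.0):
    """Average utility of a randomly chosen individual under a concave utility."""
    # Constant-relative-risk-aversion utility; risk_aversion=0 recovers the plain mean.
    if risk_aversion == 1.0:
        u = np.log(benefits)
    else:
        u = benefits ** (1 - risk_aversion) / (1 - risk_aversion)
    return u.mean()

for name, benefits in [("A (equal)", policy_a), ("B (unequal)", policy_b)]:
    print(name,
          "mean benefit:", benefits.mean(),
          "welfare behind the veil:", round(veil_of_ignorance_welfare(benefits), 3))
```

With a risk-averse utility, the equal-benefit policy scores higher even though both policies have the same average benefit, which is the intuition behind bringing risk and inequality into the evaluation of automated decisions.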

Emma Brunskill

Emma Brunskill is an Assistant CS Professor at Stanford where she directs the AI for Human Impact Lab. @aiforhi https://cs.stanford.edu/people/ebrun/