Artificial Intelligence (AI) systems, driven by machine learning, are playing an increasing role in how we make important decisions, both personal and societal. Algorithms are already involved in recommending who we should date, who can be released on bail, how much we should pay for insurance, and who requires medical care. As with any broadly applicable technology, AI offers huge potential benefits to society, but it can also cause harm. In this webinar, Dr Finn Lattimore outlines some of the ways in which machine learning systems can fail to satisfy ethical norms, and demonstrates how the many design choices made when developing such systems encode value judgments. Finn discusses these issues in the context of real-world examples, with a particular focus on health.

Watch the webinar recording.

Recommended reading

  1. Confounding Variables Can Degrade Generalization Performance of Radiological Deep Learning Models, Zech et al., 2018
  2. Friends Don’t Let Friends Deploy Black-Box Models: Detecting and Preventing Bias via Transparent Modeling, Caruana et al., 2017
  3. Can AI Help Reduce Disparities in General Medical and Mental Health Care? Chen et al., 2019
  4. Why Is My Classifier Discriminatory? Chen et al., 2018
  5. Dissecting Racial Bias in an Algorithm that Guides Health Decisions for 70 Million People, Obermeyer and Mullainathan, 2019