Jennifer Jordan

The Risky Business of AI Implementations - Unintended Consequences

Updated: Sep 28, 2019

Headlines are full of the ways in which our use of data and automated decision-making algorithms has run amok: blindly optimizing into unintended consequences, extrapolating to populations not represented in the data, and replicating or even amplifying existing biases:

· YouTube’s algorithm makes it easy for pedophiles to find more videos of children – June 2019

· Human Genomics Research Has A Diversity Problem – March 2019

· Women’s Pain Is Different from Men’s – The Drugs Could be Too – March 2019

· Amazon's controversial facial recognition software can't tell the difference between men and women or recognize dark-skinned females, MIT study finds – January 2019

· Police use of Amazon’s face-recognition service draws privacy warnings – May 2018

· Will Using Artificial Intelligence to Make Loans Trade One Kind of Bias for Another? – March 2017

· Amazon Doesn’t Consider the Race of Its Customers. Should it? – April 2016

· Google’s algorithm shows prestigious jobs to men, not women – July 2015

· Racist Camera! No, I did not blink. I’m just Asian! – May 2009

At the same time, the push to regulate this technology is heating up. The California Consumer Privacy Act is set to go into effect, multiple cities and a few states have adopted legislation limiting the use of facial recognition technology, and the "Algorithmic Accountability Act" proposed in the U.S. Congress would require companies to assess their automated decision systems for bias and other risks.


