Jennifer Jordan

The Risky Business of AI Implementations - Unintended Consequences

Updated: Sep 28, 2019




Headlines are full of the ways in which our use of data and automated decision-making algorithms has run amok: blindly optimizing into unintended consequences, extrapolating to populations not represented in the data, and replicating or even amplifying existing biases:


· YouTube’s algorithm makes it easy for pedophiles to find more videos of children – June 2019

· Human Genomics Research Has A Diversity Problem – March 2019

· Women’s Pain Is Different from Men’s – The Drugs Could be Too – March 2019

· Amazon's controversial facial recognition software can't tell the difference between men and women or recognize dark-skinned females, MIT study finds – January 2019

· Police use of Amazon’s face-recognition service draws privacy warnings – May 2018

· Will Using Artificial Intelligence to Make Loans Trade One Kind of Bias for Another? – March 2017

· Amazon Doesn’t Consider the Race of Its Customers. Should it? – April 2016

· Google’s algorithm shows prestigious jobs to men, not women – July 2015

· Racist Camera! No, I did not blink. I’m just Asian! – May 2009


At the same time, the push to regulate this technology is heating up. The California Consumer Privacy Act is set to go into effect, multiple cities and a few states have adopted legislation limiting the use of facial recognition technology, and the Algorithmic Accountability Act proposed in the U.S. Congress would require that companies be able to explain their technologies' decisions to consumers.


https://medium.com/@jajordan13/ai-trust-transparency-a-new-crop-of-startups-emerging-to-form-an-ai-trust-transparency-stack-94e0af00acf7?source=friends_link&sk=95528b5c8f48e7959e6f620acd402606




© 2018 by PickAxes&Shovels.
