Microsoft Shuts Down AI Chatbot After Offensive Tweets

Thursday, March 24, 2016, by Developer.com Staff

The artificial intelligence experiment lasted less than 24 hours.

This week, Microsoft launched a chatbot powered by artificial intelligence (AI). Called Tay, the chatbot sent out automated tweets in response to other Twitter users and was supposed to learn to converse more like an actual person. But within 24 hours, Tay was sending out offensive messages that included racist and misogynistic slurs, advocated genocide, and questioned whether the Holocaust had happened.

In response to the incident, Microsoft issued a statement that said, "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
