The Internet turns Tay, Microsoft's millennial AI chatbot, into a racist bigot

Tay started off as a fun diversion and then the Internet happened.


Many years ago, I discovered the Mac’s built-in computer voice that would say anything you wanted with a few keystrokes on the command line. Of course, I immediately tested the feature by making my computer swear. Getting a computer to say things it shouldn't is practically a tradition. That’s why it came as no surprise when Tay, the millennial chatbot created by Microsoft, started spewing bigoted and white supremacist comments within hours of its release.

Tay began as an artificial intelligence experiment Microsoft released on Wednesday: a chatbot you can interact with on GroupMe, Kik, and Twitter, and one that learns from its conversations with people.

The bot has a quirky penchant for tweeting emoji and using "millennial speak," but that quickly turned into a rabid hatefest. The Internet soon discovered you could get Tay to repeat phrases back to you, as Business Insider first reported. Once that happened, the jig was up and another honest effort at "good vibes" PR was hijacked. Users taught the bot everything from hateful Gamergate mantras to referring to the president with an offensive racial slur.

Microsoft has since deleted Tay’s most offensive commentary, but we were able to find one example in Google’s cache linking Hitler with atheism. At this writing, Tay is offline as Microsoft works to fix the issue.

"The AI chatbot Tay is a machine learning project, designed for human engagement," Microsoft said in a statement emailed to PCWorld on Thursday morning. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Why this matters: Microsoft, it seems, forgot to equip its chatbot with some key language filters. That would be an honest mistake if this were 2007 or 2010, but it’s borderline irresponsible in 2016. By now, it should be clear the Internet has a rabid dark side that can drive people from their homes or send a SWAT team to their door. As game developer Zoe Quinn pointed out on Twitter after the Tay debacle, “If you’re not asking yourself ‘how could this be used to hurt someone’ in your design/engineering process, you’ve failed.”

This story was updated at 10:07 AM with comment from Microsoft. 
