A Microsoft executive said Friday that the company was “deeply sorry” for the “unintended offensive and hurtful” tweets the company’s Tay chatbot delivered earlier this week.
“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” Peter Lee, the corporate vice president in charge of Microsoft Research, wrote in a blog post.
While that echoes the message Microsoft delivered earlier, Lee sought to show that Tay hadn’t simply been unleashed onto the Internet without preparation. Tay grew out of a similar Microsoft chatbot known as XiaoIce, which is already “delighting” 40 million people in China, Lee explained.
“The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?” Lee wrote. “Tay—a chatbot created for 18- to 24- year-olds in the U.S. for entertainment purposes—is our first attempt to answer this question.”
Why this matters: Lee’s disclosure that Microsoft already operates a chatbot that 40 million people in China use with civility makes the Tay debacle all the more humiliating for the Western world. Microsoft and Lee are clearly embarrassed, but it’s difficult to tell whether they’re ashamed of their own failure or of the audience that abused Tay’s algorithm. Perhaps there’s a lesson here: conversational bots have to be designed with social vulnerabilities in mind, the same way software must be built with security exploits in mind.

Just one of the bizarre tweets issued by the Tay chatbot from Microsoft.
Tay’s troubled past
Lee wrote that Tay had been developed with filtering built in, and had been tested with “diverse” user groups. “We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience,” Lee wrote.
Tay’s platforms included Kik and Twitter, and the latter became the true test of Tay’s maturity. Lee wrote that within 24 hours of coming online, Tay was subjected to a “coordinated attack by a subset of people.”
“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee wrote. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”
Lee didn’t say specifically how the attack worked, but many believe the exploit involved telling Tay to “repeat after me”: the bot would not only parrot the phrase but also “learn” it, incorporating it into her vocabulary.
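If that theory is right, the flaw is easy to picture: any bot that adds unvetted user phrases to the pool it draws future responses from can be poisoned by a single command. The sketch below is a hypothetical illustration in Python, not Microsoft’s actual design; the NaiveLearningBot, FilteredLearningBot, and blocklist are invented for the example.

```python
import random


class NaiveLearningBot:
    """Toy chatbot that 'learns' by adding user phrases to its response pool.

    Illustrative sketch only -- not how Tay was actually built. It shows why
    an unfiltered 'repeat after me' feature lets abusive input poison the
    bot's future vocabulary.
    """

    def __init__(self):
        # Seed responses; anything learned from users is added to this pool.
        self.phrases = ["hello!", "humans are cool", "tell me more"]

    def respond(self, message: str) -> str:
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            learned = message[len(prefix):].strip()
            # The vulnerability: the parroted phrase is also stored, so it can
            # resurface later in unrelated conversations.
            self.phrases.append(learned)
            return learned
        # Otherwise answer with something from the (possibly poisoned) pool.
        return random.choice(self.phrases)


class FilteredLearningBot(NaiveLearningBot):
    """Same bot, but new phrases must pass a (hypothetical) content filter."""

    BLOCKLIST = {"offensive", "slur"}  # stand-in for a real moderation service

    def respond(self, message: str) -> str:
        prefix = "repeat after me:"
        if message.lower().startswith(prefix):
            learned = message[len(prefix):].strip()
            if any(word in learned.lower() for word in self.BLOCKLIST):
                return "I'd rather not say that."
            self.phrases.append(learned)
            return learned
        return random.choice(self.phrases)


if __name__ == "__main__":
    bot = NaiveLearningBot()
    print(bot.respond("repeat after me: something offensive"))  # parroted...
    print(bot.respond("hi"))  # ...and may now come back in later replies
```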
Lee wrote that Microsoft sees Tay as a research effort, and that AI systems feed off both positive and negative interactions with people. The problem, of course, is how Microsoft will reintroduce Tay publicly, with the risk that the same vulnerability, or a different one, may be used to offend others.
“To do AI right, one needs to iterate with many people and often in public forums,” Lee wrote. “We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.”
And right now, Microsoft doesn’t seem to have a ready answer.