It seemed like such a good idea at the time. Microsoft developed Tay, an artificial intelligence (AI) chatbot targeted at 18- to 24-year-olds, the dominant users of mobile chat in the United States. According to Microsoft,

[Tay was] developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.

Business Insider described how she worked: "Tay was developed using a learning algorithm that gets better the more data it receives. The more data Tay receives, the better its responses should get." In teen speak, she was to be "A.I fam from the internet that's got zero chill." But in very short order, her artificial intelligence met real stupidity in the underbelly of the Internet, and she learned how to be a horribly racist and anti-Semitic sexbot. On this family-friendly site, we cannot even think about displaying the details, but we'll show the progression, from her first tweet at 8:14:

[Image: Tay's first tweet]

...To the first lesson in anti-Semitism, two and a half hours later:

[Image: Tay's first anti-Semitic tweet]

...To her shutdown just after midnight, and you don't want to know what happened in between.

[Image: Tay's last tweet]
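Why it unraveled so fast is easier to see if you sketch the naive learning loop that the Business Insider description implies. The Python below is a toy illustration of that failure mode, emphatically not Microsoft's actual algorithm; every name in it is our own invention:

```python
# A toy illustration of the failure mode, not Microsoft's actual
# algorithm: a bot that "learns" by storing whatever users say,
# then reuses it later with no filter between input and output.
import random

memory = []  # phrases learned from users, good and bad alike

def learn(user_message):
    """Every incoming message becomes future reply material."""
    memory.append(user_message)

def reply():
    """Echo back something previously learned."""
    return random.choice(memory) if memory else "hellooooo world!"

# Feed it garbage, and garbage is exactly what can come back out.
learn("humans are super cool")
learn("<something awful a troll typed>")
print(reply())  # a coin flip between the two
```

The more data a bot like this receives, the more its output reflects whoever is talking to it, which is exactly the problem when that crowd is Twitter.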

AI expert Azeem Azhar told Business Insider that this should have been foreseen and could have been avoided:

"There are a number of precautionary steps they [Microsoft] could have taken. It wouldn't have been too hard to create a blacklist of terms; or narrow the scope of replies. They could also have simply manually moderated Tay for the first few days, even if that had meant slower responses." ... "Of course, Twitter users were going to tinker with Tay and push it to extremes. That's what users do — any product manager knows that."
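To make Azhar's suggestions concrete, here is roughly what a term blacklist plus a hold-for-human-review queue might look like. This is a minimal sketch under our own assumptions; the names (BLACKLIST, guard_reply, review_queue) are hypothetical and have nothing to do with Tay's real code:

```python
# A sketch of the precautions Azhar describes; every name here is
# a hypothetical illustration, not code from Tay.
from queue import Queue
from typing import Optional

# Terms the bot should never repeat or learn from. A real list
# would be long, and curated by humans.
BLACKLIST = {"slur", "hateful_term", "conspiracy_keyword"}

review_queue = Queue()  # replies held for a human moderator

def contains_blacklisted(text: str) -> bool:
    """True if any blacklisted term appears in the text."""
    words = set(text.lower().split())
    return bool(words & BLACKLIST)

def guard_reply(candidate: str, manual_mode: bool = True) -> Optional[str]:
    """Gate a generated reply before it is ever posted.

    Returns the reply only when it is safe to send right away;
    otherwise it is blocked outright or queued for human review.
    """
    if contains_blacklisted(candidate):
        return None  # never send it, and never learn from it
    if manual_mode:
        # For the first few days, hold everything for a human,
        # even if that means slower responses.
        review_queue.put(candidate)
        return None
    return candidate
```

The same gate belongs on the input side, too: anything a user says that trips the filter should never become training data in the first place.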

Certainly anyone who has worked on a website that accepts comments, or who has spent time on Reddit, would know this and would wonder what Microsoft was thinking. The Internet is full of creeps who jump into any forum that doesn't stomp on them immediately; it's just a fact of Internet life. Tay's parents at Microsoft should have had The Talk with her about what's out there before sending her into the big wide Internet world.

Or, as the Telegraph put it more succinctly: "What were they expecting when they introduced an innocent 'young teen girl' AI to the jokers and weirdos on Twitter?"

[Screen capture: message on Microsoft's Tay.AI website]

Microsoft certainly learned this, emailing out this message:

"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."

Some say she's being given a lobotomy, but she's probably just getting a really good talking-to and being sent to her room. She certainly got a good education in a hurry.

Lloyd Alter (@lloydalter) writes about smart (and dumb) tech with a side of design and a dash of boomer angst.