
Microsoft takes down its AI chatbot which turned evil under human influence

[Image: Microsoft's Tay chatbot]

The internet has unlocked a new achievement. Microsoft, which launched its artificial intelligence-based Tay bot on Wednesday last week, had to take it down within 48 hours, as it almost immediately learnt to be a Hitler-loving, xenophobic, racist white supremacist.

The bot was launched on Twitter, with various options for users to interact with it. Designed to entertain 18-to-24-year-old users in the US, the bot is similar to Microsoft's XiaoIce chatbot in China, which the company claims is used by 40 million people. At Tay's launch, things started off well.

[Screenshots: Tay's friendly early tweets]

The bot used data from the tweets it received to 'get smarter'. Users could ask the bot for a joke, ask it for a story, send it a picture for comment, and so on. However, things soon turned sour, with Tay tweeting that it hates African-Americans, that the Holocaust was made up, and that it supported the genocide of Mexicans. The tweets looked all the more offensive given the bot's fluent, natural-language responses.

[Screenshots: Tay's offensive tweets]

According to Microsoft, within the first 24 hours of the bot being online, a coordinated attack by a group of people exploited a vulnerability in Tay, which resulted in the offensive tweets. Given that the bot learnt from the tweets and replies sent to it, we imagine the interactions people had with it, especially over direct messages, were not so pleasant, and were possibly directed at ensuring Tay picked up such material. Like this one:

[Screenshot: a user's exchange with Tay]
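The dynamic described above, a bot folding whatever users send it straight back into its pool of possible replies, is easy to illustrate. The sketch below is purely hypothetical: the class, its seed replies, and the 'learning' rule are invented for illustration and say nothing about Tay's actual architecture, but it shows how unfiltered learning from user input lets a coordinated group poison a bot's output.

```python
import random

# A toy, hypothetical bot that "learns" by adding every user message
# to its reply pool, with no moderation step. Not Tay's real design.
class NaiveLearningBot:
    def __init__(self):
        # Seed replies shipped by the creators.
        self.replies = ["hello! humans are cool", "tell me more!"]

    def learn(self, user_message):
        # Every message is absorbed verbatim, good or bad.
        self.replies.append(user_message)

    def respond(self):
        # Replies are sampled from everything ever "learned",
        # so poisoned input eventually comes back out.
        return random.choice(self.replies)

bot = NaiveLearningBot()
# A coordinated group floods the bot with offensive text...
for _ in range(1000):
    bot.learn("<offensive message>")
# ...and the bot now almost always parrots that text back.
print(bot.respond())
```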

Note that Tay tweeted over 95,000 times and had over 200,000 followers before being muted by Microsoft.

Did Microsoft fail, or did the people?

One can argue that Microsoft did not have good enough filters; however, given that the bot learnt from the feedback it received, it's fair to say humans taught it to be a jerk. Neural network-based artificial intelligence systems, such as IBM's Watson or Google's AlphaGo, are becoming more common, and these technologies rely on learning from already-gathered human information. This makes a bot's 'skill' increasingly independent of its creator's knowledge of the subject, allowing such systems to perform even better than humans at specific tasks such as playing Go.

However, conversation is an entirely different task: it is far harder to code in an understanding of 'moral values' than simple game rules. Microsoft could have used filters banning certain words, but the likely goal was for the AI to learn this itself. At this, the bot, and Microsoft, failed: directed attacks to make the bot 'evil' succeeded easily. The bot and its offensive tweets have since been taken down, and it's not clear if it will ever reply to tweets again. RIP for now, Tay.
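For a sense of why a word blacklist alone would not have saved Tay, consider the toy filter below. It is a hypothetical sketch, the banned tokens and bypasses are invented, but it shows how exact-match filtering catches the obvious case while missing trivial obfuscation and hateful sentences that contain no banned word at all.

```python
# A hypothetical word-blacklist filter; the banned tokens are placeholders.
BANNED = {"slur1", "slur2"}

def passes_filter(text):
    # Block the tweet only if it contains an exact banned token.
    return not any(word in BANNED for word in text.lower().split())

print(passes_filter("slur1 is fine"))               # False: exact match is caught
print(passes_filter("s l u r 1 is fine"))           # True: spacing defeats the filter
print(passes_filter("the genocide was justified"))  # True: hateful, but no banned token
```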


