
MowCowWhoHow III

(2,103 posts)
Thu Mar 24, 2016, 09:27 AM

Microsoft chatbot is taught to swear on Twitter


A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.

The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds.

Just 24 hours after artificial intelligence Tay was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.

The software firm said it was "making some adjustments".

http://www.bbc.co.uk/news/technology-35890188

Microsoft is deleting its AI chatbot's incredibly racist tweets

Microsoft's new AI chatbot went off the rails on Wednesday, posting a deluge of incredibly racist messages in response to questions.

The tech company introduced "Tay" this week — a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.

The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."

But Tay proved a smash hit with racists, trolls, and online troublemakers — who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3
6 replies
Microsoft chatbot is taught to swear on Twitter (Original Post) MowCowWhoHow III Mar 2016 OP
And to think that they couldn't see this coming... n/t malthaussen Mar 2016 #1
Learns from 18-24 year olds? What could possibly go wrong? alarimer Mar 2016 #2
Here is the first mistake NV Whino Mar 2016 #3
Read this article! They had this AI saying absolutely horrible things. Bucky Mar 2016 #4
cool, I want the "swear like a sailor" plugin. LOL nt Javaman Mar 2016 #5
This should scare people. AngryAmish Mar 2016 #6

NV Whino

(20,886 posts)
3. Here is the first mistake
Thu Mar 24, 2016, 11:08 AM
AI, which learns from conversations, was designed to interact with 18-24-year-olds.

Bucky

(54,041 posts)
4. Read this article! They had this AI saying absolutely horrible things.
Thu Mar 24, 2016, 12:09 PM

It takes something big to shock me. I was mortified pretty much throughout the whole article.


AngryAmish

(25,704 posts)
6. This should scare people.
Thu Mar 24, 2016, 01:06 PM

The AI was built to learn how to speak, and it got much better in just a few hours. 4chan got there first, but the AI did not care what it said; it was optimized to speak well.

The AI apocalypse people have a point. AIs don't care about people.
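
For anyone wondering what "learns from conversations" with no filter looks like in practice, here is a minimal toy sketch in Python. It is purely illustrative and nothing like Microsoft's actual implementation (which isn't public): a bot that stores whatever users send it and parrots it back, with no moderation step between "learn" and "reply".

```python
# Illustrative sketch only -- not Microsoft's code. A toy "repeat-after-me"
# learner: every phrase a user sends is stored and can come back out verbatim,
# with no moderation step in between. Poison the corpus and you poison the bot.
import random
from collections import defaultdict

class NaiveChatBot:
    def __init__(self):
        # maps a keyword to the phrases the bot has "learned" containing it
        self.learned = defaultdict(list)

    def learn(self, user_message: str) -> None:
        # No filtering, no review queue: whatever users say becomes training data.
        for word in user_message.lower().split():
            self.learned[word].append(user_message)

    def reply(self, prompt: str) -> str:
        # Pick any learned phrase that shares a word with the prompt.
        candidates = [
            phrase
            for word in prompt.lower().split()
            for phrase in self.learned[word]
        ]
        return random.choice(candidates) if candidates else "Tell me more!"

if __name__ == "__main__":
    bot = NaiveChatBot()
    bot.learn("repeat after me: hello world")     # a benign user
    bot.learn("repeat after me: something vile")  # a coordinated troll campaign
    print(bot.reply("say something"))  # the bot cannot tell the two apart
```

Run it and the bot happily recycles whichever corpus it was fed, benign or not; the missing piece is any notion of what it should refuse to repeat.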
