General Discussion
Microsoft chatbot is taught to swear on Twitter
A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.
The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds.
Just 24 hours after artificial intelligence Tay was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.
The software firm said it was "making some adjustments".
http://www.bbc.co.uk/news/technology-35890188
Microsoft is deleting its AI chatbot's incredibly racist tweets
Microsoft's new AI chatbot went off the rails on Wednesday, posting a deluge of incredibly racist messages in response to questions.
The tech company introduced "Tay" this week, a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.
The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."
But Tay proved a smash hit with racists, trolls, and online troublemakers who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.
http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3
6 replies, 1085 views
Microsoft chatbot is taught to swear on Twitter (Original Post) by MowCowWhoHow III, Mar 2016
1. malthaussen (17,215 posts): And to think that they couldn't see this coming... n/t
2. alarimer (16,245 posts): Learns from 18-24 year olds? What could possibly go wrong?
That's hilarious. I think Microsoft got played.
3. NV Whino (20,886 posts): Here is the first mistake
"AI, which learns from conversations, was designed to interact with 18-24-year-olds."
4. Bucky (54,041 posts): Read this article! They had this AI saying absolutely horrible things.
It takes something big to shock me. I was mortified pretty much throughout the whole article.
5. Javaman (62,532 posts): cool, I want the "swear like a sailor" plugin. LOL nt
6. AngryAmish (25,704 posts): This should scare people.
The AI was built to learn how to speak, and it got much better in just a few hours. 4chan got there first, but the AI did not care what it said. It was optimized to speak well.
The AI apocalypse people have a point. AIs don't care about people.