
Hermit-The-Prog

(33,430 posts)
Mon Feb 12, 2018, 03:52 AM

Crisis of Misinformation

He Predicted The 2016 Fake News Crisis. Now He's Worried About An Information Apocalypse.

“What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?” technologist Aviv Ovadya warns.

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

[...]

And much in the way that foreign-sponsored, targeted misinformation campaigns didn't feel like a plausible near-term threat until we realized that it was already happening, Ovadya cautions that fast-developing tools powered by artificial intelligence, machine learning, and augmented reality tech could be hijacked and used by bad actors to imitate humans and wage an information war.

And we’re closer than one might think to a potential “Infocalypse.” Already available tools for audio and video manipulation have begun to look like a potential fake news Manhattan Project. In the murky corners of the internet, people have begun using machine learning algorithms and open-source software to easily create pornographic videos that realistically superimpose the faces of celebrities — or anyone for that matter — on the adult actors’ bodies. At institutions like Stanford, technologists have built programs that combine and mix recorded video footage with real-time face tracking to manipulate video. Similarly, at the University of Washington, computer scientists successfully built a program capable of “turning audio clips into a realistic, lip-synced video of the person speaking those words.” As proof of concept, both teams manipulated broadcast video to make world leaders appear to say things they never actually said.

[...]

“You don't need to create the fake video for this tech to have a serious impact. You just point to the fact that the tech exists and you can impugn the integrity of the stuff that’s real.”

[...]

Charlie Warzel
https://www.buzzfeed.com/charliewarzel/the-terrifying-future-of-fake-news?utm_term=.lswbzwD6Mg#.gfLa65edNg

dewsgirl

(14,961 posts)
1. Well, this is more than a bit terrifying. I'm sitting here
Mon Feb 12, 2018, 04:25 AM

Thinking of the chaos they will cause. I couldn't sleep before, definitely won't now.

cbreezen

(694 posts)
2. I see so many posts like this...
Mon Feb 12, 2018, 04:27 AM

I see it in real life.

I feel better knowing that I am not the only person who has noticed.

yankeepants

(1,979 posts)
3. And how about this paranoia...
Mon Feb 12, 2018, 06:54 AM

This "information" is surfacing in time for the pending "peepee tapes" and whatever else the FBI has on President Shithead. It will all be spun into technoblackmail.

I may be losing my mind.

Nitram

(22,890 posts)
4. The biggest danger of this is that no one will accept authentic video as true.
Mon Feb 12, 2018, 09:31 AM

We literally won't know what to believe. One wonders if even this article is just a ploy to prime us to dismiss the actual tape of Trump in the hotel room with the peeing prostitutes as fake, in anticipation of its release...

Hermit-The-Prog

(33,430 posts)
5. skepticism
Mon Feb 12, 2018, 10:38 AM

Healthy skepticism helps root out some fakery. People will get taken in -- see, e.g., the 1938 radio broadcast of 'War of the Worlds'. There were large numbers of people who believed that broadcast just because they trusted radio.

So-called photoshopping immunized many people to altered images, but there were and are many who think anything labelled "news" is accurate. Fake news has immunized some. There are fewer people now willing to trust anything they see on the Internet than there were in 2016.

There was a time when a recording -- audio or video -- could be verified by way of the media on which it was recorded. Editing the recording would leave some traces of that editing. This is not so for digital recordings. We've been in a condition of increased difficulty in verifying recordings ever since digital replaced analog.

Watch some movies and you'll see how the mimicking of reality by manufactured sounds and images has progressed. That progression is accelerating.

This is not about the veracity of the as yet hypothetical "pee tape". It's about trying to figure out how to distinguish actual recordings from manufactured ones.

Some events can be verified by multiple trusted sources. You could also verify them by a multitude of unknown, untrusted sources, such as an event captured by many smartphones in a crowd.

If it's too good or too bad to be true, then doubt it until proven.

Nitram

(22,890 posts)
6. Points well taken, Hermit, but I am already quite skilled in triangulating information and sources
Mon Feb 12, 2018, 11:59 AM


to reach a best guess as to veracity. The problem isn't me, it is the millions of Americans who don't have a clue how to do that and default to believing what they already believe. This will further complicate the battle over what is the truth. Just look at the battle of the memos that's going on right now. All you have to do in many cases is sow some doubt and you've accomplished your objective if you are a Publican.

Hermit-The-Prog

(33,430 posts)
7. guessed that
Mon Feb 12, 2018, 12:41 PM

I didn't mean to imply you were easily duped and needed to be schooled. On the contrary, your previous post cut to the core of the matter -- the creation of doubt for authentic video and questioning motives for articles posted as news. I was just expanding on those.

Yours was the second mention of the "pee tape" so I thought it appropriate to mention that, too.

No offense intended.

There will be FUD (Fear, Uncertainty, Doubt) spread all over the place as mid-terms approach. I believe, however, there's already enough verified dirt to bury the GOPers in fair elections!

SWBTATTReg

(22,166 posts)
8. I suspect that by now, a lot of people have a healthy respect and wary eye in ...
Mon Feb 12, 2018, 01:59 PM

reading and dissecting any rather unbelievable news or tidbit of information about someone in today's society.

Kind of like when there were 1000s of newspapers around at one time, all of them trying to sell copies of their paper, which led to many outlandish claims being printed.

I suspect that the 'internet' and all of its various news outlets are undergoing a similar process today, especially in view of the 2016 elections, weeding out bad actors, etc.

As for encouraging the government to stop the flood of bad actors posting negative or false news about election candidates, I suspect that the government is failing miserably and will continue to fail miserably; after all, it can't meet the pay scales of these rogue actors being hired by Russia and the like to propagate these false rumors, etc.

Besides ourselves policing content and so forth, Facebook and all the others need to step up and be more aggressive in addressing this too (they are, somewhat, but not enough). If they complain about interfering with 1st Amendment rights, they shouldn't, because these misleading posts, emails, tweets and the like are akin to screaming 'FIRE' in a theater and thus should be deleted. More 'bots' need to be developed to prevent other bots from posting fake and misleading news. If there is technology available to develop the bots that are causing so many problems on today's internet, then bots certainly can be developed as a counter.
