What do we do about the “shallowfake” Nancy Pelosi video and others like it? » Nieman Journalism Lab

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

A week ago, The Washington Post reported that altered videos (“shallowfakes”) of House Speaker Nancy Pelosi — slowed down to make it look as if she were drunk and slurring her words — were spreading on social media. Rudy Giuliani, Donald Trump’s personal attorney, tweeted one of them (though he later deleted the tweet). From the Post:

One version, posted by the conservative Facebook page Politics WatchDog, had been viewed more than 2 million times by Thursday night, been shared more than 45,000 times, and garnered 23,000 comments with users calling her “drunk” and “a babbling mess.”

YouTube took the videos down. Facebook said it would downrank them, but wouldn’t remove them altogether.

“We don’t have a policy that stipulates that the information you post on Facebook must be true,” Facebook said in a statement to The Washington Post.

The company said it would instead “heavily reduce” the video’s appearances in people’s news feeds, append a small informational box alongside the video linking to fact-checking sites, and open a pop-up box linking to “additional reporting” whenever someone clicks to share the video.

Monika Bickert, Facebook’s head of product policy and counterterrorism, told CNN’s Anderson Cooper that Facebook’s policy is that “people make their own informed choice about what to believe. Our job is to make sure we’re getting them accurate information.” She claimed that “anybody who is seeing this video in their news feed, anybody who is going to share it to somebody else, anybody who has shared it in the past, they are being alerted that this video is false.”

“You’re making money by being in the news business,” Cooper said. “If you can’t do it well, shouldn’t you just get out of the news business?”

In the days that have followed, more people have criticized Facebook for not removing the video entirely. “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election,” Pelosi herself told KQED News this week. (That’s kind of a stretch.) Hillary Clinton described the video as “sexist trash,” adding, “Let’s send a message to Facebook that those who are in Facebook’s communities would really like Facebook to pay attention to false and doctored videos before we are flooded with them over the next months.”

“This latest doctored video proves that Facebook as we knew it is over,” Kara Swisher (who pressed Mark Zuckerberg last year on Facebook’s decision to downrank rather than remove Holocaust-denier posts) wrote in a New York Times op-ed:

The only thing the incident shows is how expert Facebook has become at blurring the lines between simple mistakes and deliberate deception, thereby abrogating its responsibility as the key distributor of news on the planet.

Would a broadcast network air this? Never. Would a newspaper publish it? Not without serious repercussions. Would a marketing campaign like this ever pass muster? False advertising.

No other media could get away with spreading anything like this because they lack the immunity protection that Facebook and other tech companies enjoy under Section 230 of the Communications Decency Act. Section 230 was intended to spur innovation and encourage start-ups. Now it’s a shield to protect behemoths from any sensible rules.

Ian Bogost in The Atlantic:

Cooper, the journalist, can’t understand why Bickert won’t see the video as propaganda posing as journalism. Bickert, the social-media executive, can’t grasp why Cooper is able to see the video only as journalism or propaganda, and not just as content, that great gray slurry that subsumes all other meaning…

When Facebook says it’s not a news company, it doesn’t just mean that it doesn’t want to fall under the legal and moral responsibilities of a news publisher. It also means that it doesn’t care about journalism in the way that newsmakers (and hopefully citizens) do, and that it doesn’t carry out its activities with the same goals in mind.

But there’s also an argument to be made that the matter isn’t that clear-cut. Facebook’s statement, “We don’t have a policy that stipulates that the information you post on Facebook must be true,” may sound nuts until you remember that a blanket policy against untrue information would also sweep up things like Onion articles. Angela Chen in MIT Technology Review:

The video makes Pelosi look strange, but it’s not a “deepfake” — it doesn’t falsely show her saying anything she didn’t actually say. It’s an example of what MIT Media Lab researcher Hossein Derakhshan calls “malinformation,” or basically true information that has been subtly manipulated to harm someone.

And while Derakhshan thinks it might have been okay to take down a faked video that showed Pelosi making a racial slur, for example, he argues that it’s too much to expect platforms to police malinformation. “Usually when people do these manipulations, I don’t think it’s illegal,” he says. “So there’s no ground for removing them or requesting them to be taken off these platforms.”

homework assignment: draft the rule that prohibits doctored pelosi video but protects satire, political speech, dissent, humor etc. not so easy is it? https://t.co/zaA7kQf83i

— David Kaye (@davidakaye) May 25, 2019

Casey Newton suggests a new set of standards:

A policy that enables the maximum amount of political speech, save for a small number of exemptions outlined in a publicly posted document, has a logical coherence that “take down stuff I don’t like” does not.

That’s one reason why the take-it-down brigade might consider developing an alternate set of Facebook community standards for public consideration. I have no doubt that there are better ways to draw the boundaries here — to swiftly purge malicious propaganda, while promoting what is plainly art. But someone has to draw those boundaries, and defend them.

Alternatively, you could break Facebook up into its constituent parts, and let the resulting Baby Books experiment with standards of their own. Perhaps WhatsApp, stripped of all viral forwarding mechanics, would find a slowed-down Pelosi video acceptable when shared from one friend to another. Meanwhile Instagram would rapidly detect the video’s surging popularity and ensure that nothing like it appeared on the app’s Explore page, where the company could unwittingly aid in its distribution the way Facebook’s News Feed algorithm did this time around. Making communities smaller can make it easier to craft rules that fit them.

4 thoughts on what Facebook could have done on the Pelosi video that fall short of taking it down but might still help in the future.

(My takedown-hot take is available at room temperature here: https://t.co/iZaPHiuhn6)

— Alexios (@Mantzarlis) May 26, 2019

1/ FB could have acted faster: I’ve made the case before for an internal rapid response team that tackles content that hits a threshold of velocity, reach and harm if fact-checking partners aren’t on it yet. (That requires recognizing News Feed is an editorial product)

— Alexios (@Mantzarlis) May 26, 2019

2/ Its warning signs for passive viewers of the video could be more prominent. This is still not an obviously flagged video. (Though the warnings to folks trying to share it are prominent and clear) pic.twitter.com/ctvVCJpe4t

— Alexios (@Mantzarlis) May 26, 2019

3/ FB could be more muscular with its “ex post” notifications. Folks who liked, shared, commented and viewed the video are in theory getting a notification saying “there’s more reporting”.

What if FB also gave them an easy cue to unlike, unshare or uncomment?

— Alexios (@Mantzarlis) May 26, 2019

4/ Facebook could share data with researchers and the public. How many people gave up sharing the video because of the post? That single piece of information alone could tell us more about real life attitudes to fact-checking than 10 lab studies.

— Alexios (@Mantzarlis) May 26, 2019

Besides the satire/commentary issue, there’s also the fact that slowing down video can be used to explain what happened as well as distort it (e.g. when replaying tackles in soccer). It doesn’t mean you can’t distinguish between the two, but …

— Alexios (@Mantzarlis) May 30, 2019

… any takedown policy would have to hinge on either intent (why the publisher shared it) or reception (how the audience reacted). The first is volatile (the Pelosi video was posted with neutral language) the second is gameable (comment sections can be swarmed).

— Alexios (@Mantzarlis) May 30, 2019

And the Times’ Farhad Manjoo says that the Pelosi video isn’t what we really need to be worrying about.

Whatever Facebook decides to do with this weird little video is a big meh, because if you were to rank the monsters of misinformation that American society now faces, amateurishly doctored viral videos would clock in as mere houseflies in our midst. Worry about them, sure, but not at the risk of overlooking a more clear and present danger, the million-pound, forked-tongue colossus that dominates our misinformation menagerie: Fox News and the far-flung, cross-platform lie machine that it commands.

The International Communication Association’s 69th annual conference took place in Washington, DC this week. Here are some tweets with papers and research to follow up on (let’s call this my to-do list):

Back home from a great #ICA19 where @joelleswart and I presented our paper on why people read political news they do NOT trust and do not consume news they consider trustworthy. The Trust Gap: Young People’s Tactics for Defining the Reliability of Political News @univgroningen pic.twitter.com/JwDZ96QUn1

— Marcel Broersma (@MJBroersma) May 30, 2019

Timely & OpenAccess! For those who couldn’t get enough of the @AnnaBrosius show at #ica19 and want to read more on the EU in light of the elections 🇪🇺: New paper on the role of emotions in social media campaigns – with @claesdevreese and @ERC_Research. https://t.co/rJSf3AqMEW

— Franziska Marquart (@FranziMarquart) May 29, 2019

Deeply disturbing finding. The ‘alt-right’ directly copies older neo-Nazi propaganda. Critical work by Patrick Davison, Rebecca Lewis, and @Alicetiara, building on @JessieNYC. #ICA19 pic.twitter.com/Y93EPcBBV4

— Johan Farkas (@farkasjohan) May 28, 2019

Really compelling paper by @Hameleers_M et al on visual disinformation #ica19. There’s been a real shift to studying visual communication in the field. pic.twitter.com/F1WbXhTHeU

— Daniel Kreiss (@kreissdaniel) May 28, 2019

[email protected] presents her cross-national study (US, UK, DE, AT) on debunking disinformation & source transparency in fact-checking sites, and discusses professionalism, the role of contextual factors and effects on credibility perception #ica19 #ica_jsd pic.twitter.com/JbXaUOuV8b

— Anna Staender (@astaender) May 27, 2019

“Black Americans have been disproportionally targeted by disinformation campaigns. We need to figure out, why that is!” – Very important call from @dfreelon. #ica19 pic.twitter.com/UChgIiMqUi

— Johan Farkas (@farkasjohan) May 27, 2019

Study from @kesearles suggests that those who access news on Facebook pay closer attention and are more able to identify disinformation when accessing the platform via desktop rather than mobile, and when user interface is visually optimized. Lots of great work at #ica19 day 2! pic.twitter.com/JpaPqDkCLB

— Media and Democracy (@SSRCmdn) May 26, 2019

Why is the U.S. so susceptible to #disinformation & #fakenews? Crucial insights from Steve Livingston @davekarpf @VWPickard @paufder & more. Wishing every American could be in this audience. #ICA19

— Lauren Potts (@__laurenbrooke) May 25, 2019

Enjoyed hearing about this study by @toledobastos and @farkasjohan on the black, grey and white propaganda accounts run by #Russia’s Internet Research Agency on social media #ica19
https://t.co/mXg38m0lDL https://t.co/ps7CXUrxrn

— Joanna Szostek (@Joanna_Szostek) May 28, 2019

@lpolgreen in Closing Plenary: “the people who seek out unbiased news the least are the ones who need it the most.” #ica19

— International Communication Association (@icahdq) May 28, 2019

“All research suggests old white men are those sharing the most misinformation online, meanwhile everybody is ‘we are really worried about the kids’”, says @cward1e #ICA19 https://t.co/BzjpfqxRVL

— Rasmus Kleis Nielsen (@rasmus_kleis) May 28, 2019

Wired’s Maryn McKenna takes a look at what measles outbreaks cost the public health departments that have to contain them:

Researchers at the CDC estimated that handling 107 cases of measles that occurred in 2011 cost state and local health departments between $2.7 million and $5.3 million. In 2014, 42 people came down with the disease after passing through Disneyland at the same time as a never-identified person with measles — and subsequently infected 90 additional people in California, 14 more in other states, and a further 159 people in Canada. The cost of controlling the outbreak, just in California, totaled almost $4 million. And in 2017, a five-month outbreak of measles in Minnesota infected 79 people and cost the state $2.3 million.

The funding to support that work isn’t being conjured out of the air. It’s coming from the budgets of public agencies, which have already been facing years of cuts and have no secret stashes of discretionary money to spend.

“There are substantial public health responses that go into mitigating an outbreak, and we should pursue those, because they prevent larger outbreaks or broader social disruption,” says Saad Omer, a physician and epidemiologist at Emory University and the senior author of a recent paper on the “true cost” of measles outbreaks. “But it does result in a lot of costs that can be pretty substantial. And we don’t measure the further indirect costs to the community.”