Video of Pelosi brings renewed attention to 'cheapfakes'

by AP Technology Writer | Hagadone News Network | February 11, 2020 12:05 AM

SAN FRANCISCO (AP) — The issue of misleading political messages on social media has arisen again, as President Donald Trump tweeted an edited video showing House Speaker Nancy Pelosi repeatedly tearing up his State of the Union speech while he honored audience members and showed a military family reuniting.

Pelosi did tear the pages of her copy of the speech — but only after it was finished, and not throughout the address, as the video depicts.

Pelosi's office asked Twitter and Facebook to take down the video. Both services refused, saying the video does not violate their policies against intentionally deceptive videos.

Researchers worry the video's “selective editing” could mislead people if social media companies don't step in and properly label or regulate similar videos. And with the proliferation of smartphones equipped with easy-to-use editing tools, the altered videos are simple to make and could multiply as the election approaches.

HOW LONG HAS DOCTORED CONTENT BEEN AN ISSUE?

Political campaign ads and candidate messages showing opponents in a negative light have long been a staple of American politics. Thomas Jefferson and John Adams attacked each other in newspaper ads. John F. Kennedy’s campaign debuted an ad that spliced together footage of Richard Nixon sweating and looking weak.

So, to some extent, the video of Pelosi, which appears to have been created by a group affiliated with the conservative organization Turning Point USA, is not novel. What's different now, said Clifford Lampe, a professor of information at the University of Michigan, is how widely such content can spread in a matter of minutes.

“The difference now is that the campaigns themselves, the president of the U.S. himself, is able to disseminate these pieces of media to the public,” he said. “They no longer have to collaborate with media outlets.”

The Pelosi team has pushed back against doctored online content in the past. A video that circulated last year had been slowed down to make it appear the speaker was slurring her words.
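As an illustration of how little technology such an edit requires (this sketch is not from the AP report; the file names and the 75% speed factor are hypothetical, and it assumes the freely available ffmpeg tool is installed), a clip can be slowed down with a few lines of Python:

    # Hypothetical sketch: slow a clip to 75% of its original speed, the kind of
    # trivial edit researchers call a "cheapfake." Requires ffmpeg on the system.
    import subprocess

    def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
        """Re-encode src at `factor` of its original speed and write the result to dst."""
        subprocess.run([
            "ffmpeg", "-i", src,
            "-filter:v", f"setpts={1 / factor:.4f}*PTS",  # stretch video timestamps
            "-filter:a", f"atempo={factor}",              # slow audio to match, keeping pitch
            dst,
        ], check=True)

    slow_down("speech_original.mp4", "speech_slowed.mp4")

No artificial intelligence or special expertise is involved, which is the distinction researchers draw between cheapfakes and deepfakes.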

WHAT POLICIES FROM SOCIAL MEDIA COMPANIES GOVERN THESE VIDEOS?

Facebook, Google and Twitter have all been emphasizing their efforts to cut down on disinformation on their services, hoping to avoid some of the backlash generated by rampant misinformation on social media during the 2016 election.

But the video of Pelosi does not violate existing policies, both Twitter and Facebook said. Facebook's rules prohibit so-called “deepfake” videos, which the company defines as misleading videos that use artificial intelligence technology to make it seem like someone authentically “said words that they did not actually say.”

Researchers say the Pelosi video is an example of a “cheapfake” video, one that has been altered, but without the sophisticated AI used in a deepfake. Cheapfakes are much easier to create and are more prevalent than deepfakes, which have yet to really take off, said Samuel Woolley, director of propaganda research at the Center for Media Engagement at the University of Texas.

That editing is “deliberately designed to mislead and lie to the American people,” Pelosi deputy chief of staff Drew Hammill tweeted on Friday. He condemned Facebook and Twitter for allowing the video to stay up on the social media services.

Facebook spokesman Andy Stone replied to Hammill on Twitter, saying, “Sorry, are you suggesting the President didn’t make those remarks and the Speaker didn’t rip the speech?” In an interview Sunday, Stone confirmed that the video didn't violate the company's policy. To be taken down, the video would have had to use more advanced technology and possibly show Pelosi saying words she didn't say.

Twitter did not remove the video either and pointed toward a blog post from early February that says the company plans to start labeling tweets that contain “synthetic and manipulated media.” Labeling will begin on March 5.

WHAT DOES THE LAW SAY?

Not much. Social media companies are broadly able to police the content on their own services as they choose. One law, Section 230 of the Communications Decency Act, shields tech platforms from most lawsuits over content posted on their services, leaving responsibility largely in the companies' own hands.

Most platforms now ban overtly violent videos and videos that could cause real-world harm, though of course much of that is up to internal company interpretation. Facebook, Twitter and Google's YouTube have received a significant amount of criticism in recent years about live-streamed and offensive videos that have appeared on the services. The companies sometimes bend to public pressure and remove videos, but often point to people’s rights to freedom of expression in leaving videos up.

WHAT HAPPENS NEXT?

The conversation around misinformation on social media, especially surrounding elections, is varied and ever-changing. Jennifer Grygiel, a professor at Syracuse University, called for legislation to better regulate social media in cases of political propaganda. It gets tricky, though, Grygiel acknowledged, because the “very people who will be regulating them are the same ones using them to get elected.”
