Fake news still trolling on social media
Ryan Nakashima, AP Technology Writer | Hagadone News Network
Nearly a year after Facebook and Google launched offensives against fake news, they’re still inadvertently promoting it — often at the worst possible times.
Online services designed to engross users aren’t so easily retooled to promote greater accuracy, it turns out, especially with online trolls, pranksters and more malicious types scheming to evade new controls as they’re rolled out.
FEAR AND FALSITY IN LAS VEGAS
In the immediate aftermath of the Las Vegas shooting, Facebook’s “Crisis Response” page for the attack featured a false article misidentifying the gunman and claiming he was a “far left loon.” Google promoted a similarly erroneous item from the anonymous prankster site 4chan in its “Top Stories” results.
A day after the attack, a YouTube search on “Las Vegas shooting” yielded, as its fifth result, a conspiracy-theory video claiming multiple shooters were involved in the attack. YouTube is owned by Google.
None of these stories were true. Police identified the sole shooter as Stephen Paddock, a Nevada man whose motive remains a mystery. The Oct. 1 attack on a music festival left 58 dead and hundreds wounded.
The companies quickly purged offending links and tweaked their algorithms to favor more authoritative sources. But their work is clearly incomplete — a different Las Vegas conspiracy video was the eighth result displayed by YouTube in a search Monday.
ENGAGEMENT FIRST
Why do these highly automated services keep failing to separate truth from fiction? One big factor: most online services tend to emphasize posts that engage an audience — exactly what a lot of fake news is designed to do.
Facebook and Google get caught off guard “because their algorithms just look for signs of popularity and recency at first,” without checking for relevance, says David Carroll, a professor of media design at the Parsons School of Design in New York.
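Carroll’s point can be shown in miniature. The following sketch is a toy formula, not any platform’s actual ranking, and every field name in it is invented; it scores posts purely on engagement and freshness, which is all a fabricated story needs to surface during a breaking event.

```python
import math
import time

def trending_score(post, now=None):
    """Toy engagement-first ranking score: popularity and recency only.
    The field names (shares, comments, reactions, posted_at) are invented."""
    now = now or time.time()
    engagement = post["shares"] + post["comments"] + post["reactions"]
    hours_old = max((now - post["posted_at"]) / 3600.0, 0.1)
    # Fresh, heavily shared posts dominate; nothing here asks whether
    # the underlying claim is true.
    return math.log1p(engagement) / hours_old

# A half-hour-old viral hoax outranks older, accurate reporting.
hoax = {"shares": 9000, "comments": 4000, "reactions": 20000,
        "posted_at": time.time() - 1800}
print(trending_score(hoax))
```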
That problem is magnified in the wake of a disaster, when facts are still unclear and demand for information runs high.
Malicious actors have learned to take advantage of this, says Mandy Jenkins, head of news at social media and news research agency Storyful. “They know how the sites work, they know how algorithms work, they know how the media works,” she says.
Participants on 4chan’s “Politically Incorrect” board regularly chat about “how to deploy fake news strategies” around major stories, says Dan Leibson, vice president of search at the digital marketing consultancy Local SEO Guide.
One such chat, just hours after the Las Vegas shooting, urged readers to “push the fact this terrorist was a commie” on social media. “There were people discussing how to create engagement all night,” Leibson says.
EYE OF THE BEHOLDER
Thanks to political polarization, the very notion of what constitutes a “credible” source of news is now a point of contention.
Mainstream journalists routinely make judgments about the credibility of various publications based on their history of accuracy. That’s a much more complicated issue for mass-market services like Facebook and Google, given the popularity of many inaccurate sources among political partisans.
The pro-Trump Gateway Pundit site, for example, published the false Las Vegas story promoted by Facebook. But it has also been invited to White House press briefings and counts more than 620,000 fans on its Facebook page.
Facebook said last week it is “working to fix the issue” that led it to promote false reports about the Las Vegas shooting, although it didn’t say what it had in mind.
The company has already taken a number of steps since December; it now features fact-checks by outside organizations, puts warning labels on disputed stories and has de-emphasized false stories in people’s news feeds.
GETTING ALGORITHMS RIGHT
Breaking news is also inherently challenging for automated filter systems. Google says the 4chan post that misidentified the Las Vegas shooter should not have appeared in its “Top Stories” feature, and was replaced by its algorithm after a few hours.
Outside experts say Google was flummoxed by two different issues. First, its “Top Stories” is designed to return results from the broader web alongside items from news outlets. Second, signals that help Google’s system evaluate the credibility of a web page — for instance, links from known authoritative sources — aren’t available in breaking news situations, says independent search optimization consultant Matthew Brown.
“If you have enough citations or references to something, algorithmically that’s going to look very important to Google,” Brown said. “The problem is an easy one to define but a tough one to resolve.”
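Brown’s description maps onto a simple model. The sketch below is hypothetical, not Google’s actual signal, and every site name in it is invented: it scores a page by inbound links, with a bonus for links from a seed list of known authoritative sources. In the first hours after an attack those trusted links do not exist yet, so raw link volume, which a coordinated campaign can manufacture, carries the score.

```python
def citation_authority(page, link_graph, known_authorities):
    """Naive credibility estimate of the kind Brown alludes to (a sketch,
    not Google's actual signal). Counts inbound links, weighting links
    from a seed list of known authoritative sites more heavily."""
    inbound = link_graph.get(page, [])  # pages linking to `page`
    base = len(inbound)                 # raw citation count
    trusted = sum(1 for src in inbound if src in known_authorities)
    return base + 10 * trusted

# Breaking-news window: no trusted sources have linked yet, but a
# coordinated burst of forum links makes the hoax page look important.
graph = {"hoax.example/shooter":
         ["forum.example/thread%d" % i for i in range(200)]}
print(citation_authority("hoax.example/shooter", graph, {"apnews.com"}))  # 200
```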
MORE PEOPLE, FEWER ROBOTS
Federal law currently exempts Facebook, Google and similar companies from liability for material published by their users. But circumstances are forcing the tech companies to accept more responsibility for the information they spread.
Facebook said last week that it would hire an extra 1,000 people to help vet ads after it found a Russian agency bought ads meant to influence last year’s election. It’s also subjecting potentially sensitive ads, including political messages, to “human review.”
In July, Google revamped guidelines for human workers who help rate search results in order to limit misleading and offensive material. Earlier this year, Google also allowed users to flag so-called “featured snippets” and “autocomplete” suggestions if they found the content harmful.
The Google-sponsored Trust Project at Santa Clara University is also working to create tags that could serve as markers of credibility for individual authors. These would include items such as their location and journalism awards, information that could be fed into future algorithms, according to project director Sally Lehrman.
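As a rough illustration of how such markers might be consumed, the sketch below invents a representation; the Trust Project’s actual indicators may use different names and structure, and the scoring is purely illustrative.

```python
# Hypothetical representation of the author markers Lehrman describes;
# field names, weights, and the example author are all invented.
author_tags = {
    "name": "Jane Reporter",
    "location": "Las Vegas, NV",
    "awards": ["Example Press Award 2016"],
    "bylines_at_known_outlets": 42,
}

def author_credibility(tags):
    """Fold the markers into a single number that a ranking algorithm
    could weigh alongside popularity and recency."""
    score = min(tags.get("bylines_at_known_outlets", 0), 50)
    score += 5 * len(tags.get("awards", []))
    score += 3 if tags.get("location") else 0
    return score

print(author_credibility(author_tags))  # 50
```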