The Problem of Free Speech in an Age of Disinformation

Kate Starbird, a professor of human-computer interaction at the University of Washington who tracks social media disinformation, called Facebook’s label “worse than nothing.” Adding a weak label to a Trump post mostly has the effect of “giving it an attention bump by creating a second news cycle about Republican charges of bias in content moderation,” says Nathaniel Persily, a Stanford law professor and co-director of the university’s Program on Democracy and the Internet.

Facebook has since updated its labels, based on tests and feedback, including from civil rights leaders. “The labels we have now, we have far more than we used to,” says Monika Bickert, Facebook’s vice president for content policy. “They’ve gotten stronger. But I would expect we’ll continue to refine them as we keep seeing what’s working.” Facebook updated the label on Trump’s Sept. 28 post to “Both voting in person and voting by mail have a long history of trustworthiness in the US and the same is predicted this year. Source: Bipartisan Policy Center.” On an Oct. 6 Trump post with more falsehoods about voting, Facebook added an additional sentence to that label: “Voter fraud is extremely rare across voting methods.” (Other labels, though, remain mild, and plenty of misleading content related to voting remains unlabeled.)

Angelo Carusone, the president of Media Matters for America, a nonprofit media watchdog group, finds the changes useful but frustratingly late. “We went from them refusing to touch any of the content, an entire ocean of disinformation on voting and election integrity, and dismissal of any efforts to address that — to this. They let it metastasize, and now they start doing the thing they could have done all along.” Carusone also points out that independent researchers don’t have access to data that would allow them to study key questions about the companies’ claims of addressing disinformation. How prevalent are disinformation and hate speech on the platforms? Are people who see Facebook, Twitter and YouTube’s information labels less likely to share false and misleading content? Which type of warning has the greatest impact?

Twitter and Facebook reduce the spread of some false posts, but during this election season, Starbird has watched false content be shared or retweeted tens of thousands of times or more before the companies make any visible effort to address it. “Currently, we are watching disinformation go viral & trying desperately to refute it,” she tweeted in September. “By the time we do — even in cases where platforms end up taking action — the false info/narrative has already done its damage.”

Facebook came under intense criticism for the role it played in the last presidential race. During the 2016 campaign, Facebook later reported, Russian operatives spent about $100,000 to buy some 3,000 ads meant to benefit Trump largely by sowing racial division. Because the operatives chose Facebook, a small investment had an outsize payoff as the site’s users circulated the planted ads to their followers. “Facebook’s scale means we’ve concentrated our risk,” says Brendan Nyhan, a political scientist at Dartmouth College. “When they’re wrong, they’re wrong on a national or global scale.”

Facebook and YouTube have treated political ads as protected speech, allowing them to include false and misleading information. Online ads — like direct mail and robocalls — can make setting the record straight very difficult. Online advertisers can use microtargeting to pinpoint the segments of users they want to reach. “Misleading TV ads can be countered and fact-checked,” while a misleading message in a microtargeted ad “remains hidden from challenge by the other campaign or the media,” Zeynep Tufekci, a sociologist at the University of North Carolina at Chapel Hill and the author of the 2017 book “Twitter and Tear Gas,” wrote in a prescient 2012 Op-Ed in The New York Times.

In this election season, domestic groups are adopting similar tactics. This summer, the Trump-aligned group FreedomWorks, which was seeded by the billionaire Koch brothers, promoted 150 Facebook ads directing people to a page with a picture of LeBron James. The image was paired with a quote in which James denounced poll closures as racist, repurposed to deceive people into thinking he was discouraging voting by mail. After The Washington Post reported on it, Facebook removed the page for violating its voter-interference policy, but only after the ads had been seen hundreds of thousands of times.
