“I don’t think it’s right for a private company to censor politicians or the news in a democracy.”—Mark Zuckerberg, October 17, 2019
“Facebook Removes Trump’s Post About Covid-19, Citing Misinformation Rules”—The Wall Street Journal, October 6, 2020
For more than a decade, the attitude of the biggest social media companies toward policing misinformation on their platforms was best summed up by Mark Zuckerberg’s oft-repeated warning: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” Even after the 2016 election, as Facebook, Twitter, and YouTube faced growing backlash for their role in the spread of conspiracy theories and lies, the companies remained reluctant to take action against it.
Then came 2020.
Under pressure from politicians, activists, and the media, Facebook, Twitter, and YouTube all made policy changes and enforcement decisions this year that they had long resisted—from labeling false information from prominent accounts to attempting to thwart viral spread to taking down posts by the president of the United States. It’s hard to say how successful these changes have been, or even how to define success. But the fact that they took these steps at all marks a dramatic shift.
“I think we’ll look back on 2020 as the year when they finally accepted that they have some responsibility for the content on their platforms,” said Evelyn Douek, an affiliate at Harvard’s Berkman Klein Center for Internet and Society. “They could have gone farther, there’s a lot more that they could do, but we should celebrate that they’re at least in the ballgame now.”
Social media was never a total free-for-all; platforms have long policed the illegal and obscene. What emerged this year was a new willingness to take action against certain types of content simply because it’s false—expanding the categories of prohibited material and more aggressively enforcing the policies already on the books. The proximate cause was the coronavirus pandemic, which layered an information crisis atop a public health emergency. Social media executives quickly perceived their platforms’ potential to be used as vectors of lies about the coronavirus that, if believed, could be deadly. They vowed early on both to try to keep dangerously false claims off their platforms and to direct users to accurate information.
One wonders whether these companies foresaw the extent to which the pandemic would become political, with Donald Trump the leading purveyor of dangerous nonsense—forcing a confrontation between the letter of their policies and their reluctance to enforce the rules against powerful public officials. By August, even Facebook would have the temerity to take down a Trump post in which the president suggested that children were “virtually immune” to the coronavirus.
“Taking things down for being false was the line that they previously wouldn’t cross,” said Douek. “Before that, they said, ‘falsity alone is not enough.’ That changed in the pandemic, and we started to see them being more willing to actually take down things, purely because they were false.”
Nowhere did public health and politics interact more combustibly than in the debate over mail-in voting, which arose as a safer alternative to in-person polling places—and was immediately demonized by Trump as a Democratic scheme to steal the election. The platforms, perhaps eager to wash away the bad taste of 2016, tried to get ahead of the vote-by-mail propaganda onslaught. It was mail-in voting that led Twitter to break the seal on applying a fact-checking label to a tweet by Trump, in May, that made false claims about California’s mail-in voting process.
This trend reached its apotheosis in the run-up to the November election, as Trump broadcast his intention to challenge the validity of any votes that went against him. In response, Facebook and Twitter announced elaborate plans to counter that push, including adding disclaimers to premature claims of victory and specifying which credible organizations they’d rely on to validate the election results. (YouTube, notably, did much less to prepare.) Other moves included limiting political ad-buying on Facebook, increasing the use of human moderation, inserting trustworthy information into users’ feeds, and even manually intervening to block the spread of potentially misleading viral disinformation. As the New York Times writer Kevin Roose observed, these steps “involved slowing down, shutting off or otherwise hampering core parts of their products — in effect, defending democracy by making their apps worse.”