Facebook Cracked Down On Extremism. It Only Took A Major Boycott And Multiple Killings.

Social media platforms have issued bans and takedowns as they face mounting pressure. But how much has really changed?

Facebook announced on Tuesday that it had banned hundreds of accounts and dozens of groups dedicated to the far-right “boogaloo” movement, which has been linked to multiple extremist plots and killings in recent months. Facebook touted its action as “the latest step in our commitment to ban people who proclaim a violent mission from using our platform.”

Other social media companies took similar actions against hate speech and extremism this week. Reddit announced that it banned about 2,000 of its online communities for violating its content policies, including the subreddit The_Donald, which had about 800,000 members who supported President Donald Trump.

YouTube de-platformed several far-right extremists from its service, and the streaming service Twitch temporarily suspended Trump’s account for “hateful conduct.”

But this isn’t a sea change in how these platforms address hateful content. Many experts instead see a slow, piecemeal approach by companies that have let extremism and political disinformation become pervasive ― and that are now facing strong backlash from advertisers and public criticism ahead of the 2020 election.

At least 300 advertisers pulled their ads from Facebook’s properties for July as part of a boycott demanding the company adjust its practices.

While headlines heralded a wave of tech platforms cracking down on hate speech, the recent actions were less drastic than they may seem.

YouTube’s ban targeted outright white supremacists, including former Ku Klux Klan leader David Duke.

Trump is a minor figure on Twitch, and the number of his followers pales in comparison to those of the platform’s professional gamer stars.

Meanwhile, although Reddit claimed its bans were part of a larger policy on hate, most of the subreddits it removed were inactive. The_Donald had been effectively abandoned for months after its users migrated to another platform.

“I just don’t see a whole lot different here,” said Megan Squire, a professor of computer science at Elon University who tracks online extremist groups. “Right now, I don’t think that there’s been much big structural change.”

Facebook, for example, has repeatedly made subtle attempts to address individual problems while hand-wringing over large-scale policy decisions. When faced with a growing extremism problem from the anti-government boogaloo movement, the platform initially tried to make the groups harder to find and restricted certain terms ― changes that many groups quickly found ways around.

Only after activity on Facebook was directly linked to real-world violence did the platform announce on Tuesday that it would ban certain boogaloo accounts and label the network a dangerous organization.

Federal prosecutors in California last month charged two members of the movement in the shooting of two officers working for a contractor of the Federal Protective Service, one of whom died. The criminal complaint said that the two members met on Facebook and posted references to carrying out an attack.

One of the two boogaloo supporters was also charged with the killing of a sheriff’s deputy in Santa Cruz, California, and with engaging in a shootout with police days later.

“We know that hate speech leads to violence, but [platforms] don’t take action until it gets to that point,” Squire said.

A closer look at Facebook’s ban on boogaloo groups also reveals that it targets only a specific network that the platform alleges is engaged in violent activity, leaving the wider movement largely untouched. The policy has already led to inconsistent enforcement, such as the removal of one major public boogaloo group while its private offshoot was left intact.

For its part, Facebook said that it has a lengthy and rigorous process to designate groups as dangerous organizations and that it will continue to take action against those who violate its policies.

“We are committed to reviewing accounts, groups and pages, including ones currently on Facebook, against our Dangerous Individuals and Organizations policy,” a Facebook spokesperson told HuffPost.

Platforms have become increasingly sensitive to criticism and bad press over allowing hate speech on their sites in recent years, and most have updated their policies while making efforts to prevent the spread of extremist content.

“We have strict policies prohibiting hate speech on YouTube, and terminate any channel that repeatedly or egregiously violates those policies,” a YouTube spokesperson told HuffPost in a statement. YouTube said it has removed more than 25,000 channels for violating its hate speech policy since implementing new rules in June 2019.

But there have been ongoing enforcement issues when it comes to how platforms police themselves. YouTube, for instance, has continually struggled with preventing child sexual exploitation on its platform.

And despite YouTube’s latest ban, several other prominent white nationalists continue to have channels on the platform. Assessing and removing extremism and hate speech can ultimately end up looking like a game of whack-a-mole.

“There is always going to be somebody that they’re banning because there’s terrible people out there using these platforms and they don’t get them all at once,” Squire said. “It’s just this constant battle.”

Facebook Chairman and CEO Mark Zuckerberg testifies at a House Financial Services Committee hearing in Washington on Oct. 23, 2019. (Erin Scott / Reuters)

The Trump Problem

Facebook and its chief executive, Mark Zuckerberg, have also dragged their feet for years over how to address misinformation and racist rhetoric put out by Trump and his campaign, generally carving out broad exceptions for him and other politicians.

But Facebook was forced to reevaluate its arrangement last month after Trump posted a threat of violence against anti-racism protesters: “When the looting starts, the shooting starts.” Trump’s post resulted in an internal uproar and a private phone call between Zuckerberg and the president, The Washington Post reported. The platform ultimately changed its policy to label hate speech from political leaders, although what that policy looks like in practice remains to be seen. Facebook said that Trump’s post would not have qualified for action under the new policy.

“They’re kind of on this long, slow slog of defining what they want on the platform,” Squire said. “They’re taking these tiny steps, and we keep running story after story about each of the steps.”

The most notable change for the platform in the last few weeks is not the content of Trump’s messaging or the levels of extremism on the platform but the reaction from both employees and advertisers.

More than 5,000 employees demanded that Facebook change its policies around political speech after Trump’s threat to protesters, and some staged a virtual walkout.

The advertiser boycott, “Stop Hate for Profit,” enlisted major corporations such as Unilever and Verizon. (Verizon owns Verizon Media Group, HuffPost’s parent company.) The boycott caused Facebook to lose $56 billion in market value last week as its stock plunged on Friday, although it has recovered somewhat since then.

Even companies that have been more proactive about content moderation than Facebook have ultimately taken uneven actions when it comes to addressing Trump and other far-right misinformation.

Twitter has begun putting warning labels on some tweets containing misinformation, including several of Trump’s, although the tweets are still viewable, and it’s unclear what effect such labels have on Trump’s overall messaging.

This relatively lax attempt at moderation — essentially flagging outright falsehoods and removing easily definable forms of threats or hate speech — has also generated intense conservative backlash.

Many Republicans and right-wing pundits continue to allege that there is anti-conservative bias on platforms, despite ample evidence to the contrary.

Meanwhile, Trump’s campaign has reportedly been seeking alternatives to the major platforms as a way to get around moderation altogether. It urges followers to download its campaign app and has created an unofficial network of sympathetic media outlets to pump out its message.

A number of high-profile Republicans and Trump allies also announced in recent weeks that they were joining Parler, an alternative to Twitter that describes itself as “unbiased” and pro-free speech ― despite banning several forms of speech, making users potentially liable for legal fees and retaining the power to ban users whenever it chooses. On Monday, Parler’s CEO, John Matze, responded to people complaining about being banned from the self-described free speech platform by posting a list of rules.

“You cannot threaten to kill anyone in the comment section,” he wrote.
