
Facebook Letting Vile Anti-Islamic Hate Speech Slip Through Reporting Tools

'Kill as many off these goatf**kers as you can' initially didn't violate Facebook's Community Standards.

"Set the f**kers on fire, if they don't want to take their burka as we have requested then we just burn it of them!"

"Only way is to burn em down and run em out of town."

"Go out, and kill as many off these goatf**kers as you can!!"

"Recipe: 1 ltr petrol, 3 matches taped together, instructions: chuck petrol on goat f**ker,, ignite matches and set goat f**ker alight and say MERRY CHRISTMAS to goatf**ker Done !!!"

"Give me a gun and I'll start shooting the putrid things. I'm f**king over the Australian government and the Islamic filth. Just kill them all f**k it."

"Throw another Mozzie on the fire."

These shocking anti-Islamic comments appeared over a few days in December in an Australian-based closed Facebook group, a "supporter's group" for a well-known conservative politician. The Huffington Post Australia reported each of them, and others, through Facebook's reporting tool, under categories including "it's harassment or hate speech" and "it's threatening, violent or suicidal".

Each and every one of those reports was returned with the same answer from Facebook: "We've reviewed the comment you reported... and found that it doesn't violate our Community Standards."

One of the comments we reported, and the response we received.

It raises the question -- if outlining explicit 'instructions' and plans for setting fire to or shooting members of a particular demographic, based solely on their religion, "doesn't violate" Facebook's standards (which explicitly ban hate speech, harassment and bullying), what does?

"Facebook is still really behind the game. It has not adequately found a way to deal with really sexist and racist hate speech on its platform, but it is doing a better job than, for instance, Twitter," said Dr Emma Jane, an expert in cyberbullying with the University of NSW.

"Facebook is trying at least, but it's still not getting there. These cashed-up platforms have a duty of care to their customers in terms of having better policies. And as key components in the monetisation strategies of these companies we should demand better service."


You may have used Facebook's reporting tool in the past. If you see content that offends you -- a rude picture, a vicious written attack, a particularly violent or sexual video -- you can click the little arrow next to the post and select 'report post'.

The system tries to establish why a person is making the complaint (whether the complainant simply doesn't like the content, or whether it is genuinely offensive and inappropriate) and offers categories for the report, including nudity or pornography, attacking people over their background or sexual orientation, and advocating violence or other illegal acts.

The system has been criticised for allowing offensive language to flourish while censoring pictures of classic artworks depicting nudity. Still, with a Facebook community of 1.8 billion people, policing content on the platform is admittedly incredibly difficult.

"When you think about the staggering amount of content posted each day, it's gobsmacking," Jane said. "The logistics of any kind of meaningful comment monitoring policy are really challenging."

For research purposes, we joined the closed Facebook group named after a prominent Australian conservative politician. As an experiment, over a few days in December, we reported a number of violent anti-Muslim comments. Some -- such as "Blow them all up !! Good riddance once and for all !! The filthy black scum that they are !!" -- were removed by Facebook within hours of our reports being lodged because they "violated our Community Standards". Others, like the ones listed at the top of this article, were not initially removed because Facebook said they were within the Community Standards.

All of these comments have since disappeared from Facebook. Some were removed many hours or days after our initial reports, when Facebook took a second look and decided they did violate community standards. Others vanished after admins of the group deleted the comment thread. More were deleted once HuffPost Australia made media inquiries to Facebook.


HuffPost Australia understands teams of people, working 24 hours a day in shifts across the world and speaking 40 languages, assess the reports that come in. Depending on the time a report is received, it may be assessed by an employee in Australia, the U.S., Europe or elsewhere. Some reports are reviewed again later as part of an internal Facebook audit process.

This secondary check exists in part because of local slang: what is offensive to an Australian may not be offensive, or may simply be confusing, to an American, and vice versa. There is also a degree of automation involved in assessing the reports.

"Automation helps us enforce these standards. Although we primarily use these systems to assist manual enforcement by content reviewers, we are also exploring additional ways that automation could report or remove content that clearly violates our policies and has no place on Facebook," a Facebook spokesman said in a statement.

"The Community Standards aim to strike the right balance between giving people a place to connect and share openly, and promoting a welcoming and safe environment for everyone. Anyone can report content to us if they think it violates our standards. Our teams review these reports and will remove the content if there is a violation."

Prominent Muslim lawyer and writer Mariam Veiszadeh has personal experience with Facebook's reporting systems. Loudly and proudly calling out anti-Islam sentiment from a large social media platform, she attracts a lot of abuse from online trolls.

While she, too, regularly reports anti-Islam abuse, only to be told it "doesn't violate our Community Standards", she was recently temporarily banned from Facebook herself after posting a screenshot of abuse she had received.

"This incident [being banned] indicates that there are flaws with their reporting system that needs to be addressed and I'm certainly not the first to make such an observation," she told HuffPost Australia.

"I completely understand that no system is perfect and always prone to human error but I do feel there are opportunities for further refinement. I have observed inconsistencies how judgements are made about what is and isn't deemed in breach of their standards."

Veiszadeh called on Facebook to tighten up its reporting mechanisms.

"There is a fine line between freedom of speech and incitement of hatred and potential violence. I am not suggesting it's always easy to strike the right balance but there's clearly a need for Facebook to take heed of user concerns and feedback about its platform," she said.

"This incident proves that the platform can be easily exploited by trolls and bigots to try and silence advocates."


Despite repeated specific questions on exactly how reporting works, and why comments like "Give me a gun and I'll start shooting the putrid things" slipped through the net, Facebook declined to answer. The spokesman said it was company policy not to outline in detail how reporting and removal of content is administered, for fear that users could 'game' or outmanoeuvre the system.

"Facebook is aware of people's dissatisfaction and has taken some steps to address the problem of cyber abuse as well as acknowledging that people feel strongly about this issue. Yet it is still notoriously closed when it comes to its models and algorithms," Jane said.

"Facebook has a poor history when it comes to responding to and addressing abuse. Facebook has suspended the accounts of women who've reposted the abuse they've received, claiming these women have violated community standards. For Facebook to think this sort of response is OK shows it is out of step with our community standards."

Jane said Facebook should be more open and transparent in how it deals with such offensive content in order to provide some reassurance for users.

"At the end of the day it's impossible to say how well or not well Facebook is doing, because the company is so secretive. Until Facebook is more transparent about the way it does business, including how it responds to complaints about objectionable material, we're just going on (anecdotal data)," she said.

"Because Facebook is so secretive about nearly everything it does, it's impossible to reach a well-reasoned conclusion about how the company is doing vis-à-vis abuse, harassment, and hate speech."

For more information, see Facebook's safety centre and community guidelines.


