There is no scientific formula for determining what constitutes hatred, but a Facebook picture of a smiling Anne Frank surrounded by the caption, “What’s that burning? Oh it’s my family” is an easy call. So is a Facebook picture of a baby on a scale emblazoned with a Jewish Star, where the bottom of the scale is a meat grinder with raw ground meat (presumably, the baby’s) oozing out.
Is there any doubt in your mind that those images constitute hate speech (one of the official categories for removal under Facebook’s Terms of Service) and should be removed from Facebook? That was the basis for the complaints filed by the Online Hate Prevention Institute last month.
Facebook disagreed. The pictures remain up.
The Australia-based Online Hate Prevention Institute was launched in January of this year. Its mission is to help prevent, or at least control, abusive social media behavior that constitutes racism or other forms of hate speech.
Dr. Andre Oboler is the chief executive officer of OHPI. Oboler has been involved in analyzing and monitoring online hate for five years. In the time that he’s been monitoring Facebook, the response time has improved, but the results have not.
“OHPI submitted documented complaints following the Facebook complaint protocol, and, true to their word, we received a response within 48 hours,” Oboler told The Jewish Press. “It’s quite amazing; the Facebook reviewers took down the images, reviewed them, and put them back up with a ‘no action’ decision within 48 hours.”
Oboler waited until the Facebook reviews were completed before posting OHPI’s findings. The methodical process and the constructive suggestions OHPI made could be held up as models of what to do when confronted with hate speech on social media, except that at this point the diligence does not appear to have paid off.
The suggestions included:
1. Remove the offensive images
2. Close the offensive pages that are posting them
3. Permanently close the accounts of the users abusing Facebook to spread such hate
4. Review which staff assessed these examples and audit their decision making
5. Take active measures to improve staff training to avoid similar poor decisions in the future
6. Institute an appeal process as part of the online reporting system
7. Institute systematic random checks of rejected complaints
At this point, Oboler is hopeful that if sufficient attention is generated, Facebook will feel compelled to re-examine its procedures. What he wants is a “systematic change to prevent online-generated harm in the future.”
One way to generate that attention, Oboler suggested, is for Facebook users who find the images described above offensive to go to OHPI’s Facebook page and “Like” it. Another is to sign the OHPI petition urging Facebook to stop allowing hate speech on its site.
OHPI is also critical of the way in which Facebook has chosen to respond to complaints about offensive Facebook Pages. Its standard response to pages devoted entirely to offensive material is to insert the bracketed phrase [Controversial Humor] before the rest of the page title. That phrase acts much like the warning label on cigarette packages. The page remains vile, just as the cigarettes remain carcinogenic, but by slapping on the Controversial Humor disclaimer, Facebook appears to be seeking immunity from liability. Or at least from responsibility.
OHPI discovered this Facebook practice while working to eradicate hate-filled Facebook Pages dedicated to brutalizing Aborigines. Remember, OHPI is based in Australia. After some initially promising responses to OHPI’s complaints, Facebook ultimately replied that “While we do not remove this type of content from the site entirely unless it violates our Statement of Rights and Responsibilities, out of respect for local laws, we have restricted access to this content in Australia via Facebook.”
But that just doesn’t make any sense, according to Oboler. As he pointed out, “Facebook’s ‘Statement of Rights and Responsibilities’ says at 3.7 ‘You will not post content that: is hate speech’. We find it very hard to understand how Facebook can look at this material and decide it is not hate speech. Ultimately, this is where Facebook is going wrong.”
Is there anything Facebook has determined to be sufficiently offensive that it will be removed? Yes, but not much.
Oboler explained that thus far the only hateful content Facebook has permanently removed is content directed against an individual, rather than at an entire race or religion. In other words, the same problem that hate speech codes on campuses have encountered plagues complainants hoping for a non-offensive online community. Unless the nastiness is directed at a specific person, Facebook’s default position is to leave it up.
But really, is it possible for anyone to consider the words accompanying the Anne Frank picture anything but impermissible hate speech? Facebook apparently does, and it will continue to do so unless enough people say it is wrong.