Facebook and Google need humans, not just algorithms, to filter out hate speech

The problem needs a human touch. (Image: Reuters/Navesh Chitrakar)

Facebook and Google give advertisers the ability to target users by their specific interests. That’s what has made those companies the giants that they are. Advertisers on Facebook can target people who work for a certain company or had a particular major in college, for example, and advertisers on Google can target anyone who searches a given phrase.

But what happens when users list their field of study as “Jew hater,” or list their employer as the “Nazi Party,” or search for “black people ruin neighborhoods”?

All of those were options Facebook and Google suggested to advertisers as interests they could target in their ad campaigns, according to recent reports by ProPublica and BuzzFeed. Both companies have now removed the offensive phrases that the news outlets uncovered, and said they’ll work to ensure their ad platforms no longer offer such suggestions.

That, however, is a tall technical order. How will either company develop a system that can filter out offensive phrases? It would be impossible for humans to manually sift through and flag all of the hateful content people enter into the sites every day, and no algorithm can yet detect offensive language with anything close to 100% accuracy. Machine learning and natural language processing have advanced by leaps and bounds in recent years, but it remains incredibly difficult for a computer to recognize whether a given phrase contains hate speech.

“It’s a pretty big technical challenge to actually have machine learning and natural language processing be able to do that kind of filtering automatically,” said William Hamilton, a PhD candidate at Stanford University, who specializes in using machine learning to analyze social systems. “The difficulty in trying to know, ‘is this hate speech?’ is that we actually need to imbue our algorithms with a lot of knowledge about history, knowledge about social context, knowledge about culture.”

A programmer can tell a computer that certain words or word combinations are offensive, but there are far too many ways to assemble words into an offensive phrase to enumerate them all in advance. Machine learning lets programmers feed hundreds or thousands of offensive phrases into a computer to give it a sense of what to look for, but the computer still lacks the context to know for sure whether a given phrase is hateful.
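
Here is a rough sketch of that training approach, using scikit-learn. The handful of example phrases and labels are invented for illustration, and this is not a description of either company's actual system, which would train on vastly more data.

```python
# Minimal sketch of "feed labeled phrases to a model": the training examples
# and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

phrases = [
    "jew hater",                        # offensive (label 1)
    "black people ruin neighborhoods",  # offensive (label 1)
    "history major",                    # benign (label 0)
    "software engineer",                # benign (label 0)
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier: the model learns which words
# tend to appear in offensive phrases, not what a phrase as a whole means.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(phrases, labels)

# A new phrase is scored by the words it shares with the training examples --
# "people" and "neighborhoods" push this benign phrase toward the offensive
# class, because the model has no sense of what the sentence actually means.
print(model.predict_proba(["people who love their neighborhoods"]))
```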

“You don’t want to have people targeting ads to something like ‘Jew hater,’” Hamilton said. “But at the same time, if somebody had something in their profile like, ‘Proud Jew, haters gonna hate,’ that may be OK. Probably not hate speech, certainly. But that has the word ‘hate,’ and ‘haters,’ and the word ‘Jew.’ And, really, in order to understand one of those is hate speech and one of those isn’t, we need to be able to deal with understanding the compositionality of those sentences.”

And the technology, Hamilton said, is simply “not quite there yet.”
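
To make Hamilton’s point concrete, consider a toy word-level filter. It is not anything either company actually runs; it simply shows that a system which only looks at individual words flags both of his example phrases, because it never considers what the sentence as a whole says.

```python
# A toy word-level filter: it matches individual words against a blocklist,
# so it cannot tell hate speech from a phrase that merely contains the words.
FLAGGED_WORDS = {"jew", "hate", "hater", "haters"}

def looks_offensive(phrase: str) -> bool:
    """Flag a phrase if any of its words appears on the blocklist."""
    words = {w.strip(",.'\"").lower() for w in phrase.split()}
    return bool(words & FLAGGED_WORDS)

print(looks_offensive("Jew hater"))                     # True -- hate speech
print(looks_offensive("Proud Jew, haters gonna hate"))  # True -- but it isn't
```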

The solution will likely require a combination of machines and humans: the machines flag phrases that appear to be offensive, and humans decide whether those phrases amount to hate speech and whether the interests they represent are appropriate targets for advertisers. Humans can then feed those decisions back to the machines, making them better at identifying offensive language.
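
A rough sketch of what that loop might look like, with a stand-in scoring rule and canned reviewer decisions in place of a real model and real moderators:

```python
# Human-in-the-loop sketch: a crude machine pass flags suspicious phrases,
# a (simulated) human reviewer labels them, and the labeled pairs become
# training data for the next model. Every piece here is a stand-in.
FLAGGED_WORDS = {"hater", "hate", "nazi"}

def score(phrase: str) -> float:
    """Stand-in model: fraction of a phrase's words that are blocklisted."""
    words = phrase.lower().split()
    return sum(w in FLAGGED_WORDS for w in words) / max(len(words), 1)

def review_queue(phrases, threshold=0.3):
    """Machine pass: keep only phrases suspicious enough to show a human."""
    return [p for p in phrases if score(p) >= threshold]

# Hypothetical reviewer decisions, standing in for human moderators.
REVIEWER_LABELS = {"jew hater": True, "haters gonna hate": False}

training_data = []
for phrase in review_queue(["jew hater", "haters gonna hate", "party planner"]):
    label = REVIEWER_LABELS.get(phrase, False)  # the human makes the final call
    training_data.append((phrase, label))

print(training_data)  # labeled examples ready for the next round of training
```

The point of the loop is that the machine narrows the firehose down to a reviewable queue, and the humans supply the judgment the machine lacks.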

Google already uses that kind of approach to monitor the content its customers’ ads run next to. According to a recent article in Wired, it employs temp workers to evaluate websites that display ads served by its network and to rate the nature of their content. Most of those workers were asked to focus primarily on YouTube videos starting last March, when advertisers including Verizon and Walmart pulled their ads from the platform after learning some had been shown in videos that promoted racism and terrorism.

The workers now spend most of their time looking for and flagging those kinds of videos to make sure ads don’t end up on them, according to Wired. Once they’ve identified offensive materials in videos and their associated content, they feed the details to a machine-learning system, and the system can in turn learn to identify such content on its own. It’s not an easy job, however, and some of the temp workers Wired interviewed said they can barely keep up with the amount of content they’re typically tasked with checking.

Google’s chief business officer, Philipp Schindler, echoed that sentiment in an interview with Bloomberg News in April, and cited it as a reason he believed the company should cut humans out of the equation altogether.

“The problem cannot be solved by humans and it shouldn’t be solved by humans,” he said.

Until machines can learn the difference between “Jew hater” and “Proud Jew, haters gonna hate,” though, the problem of identifying and flagging hate speech can only be solved by humans, with smart machines assisting them. And there have to be enough of those humans to make a meaningful dent in the amount of content Facebook and Google users type into the services every day. Throwing algorithms and overworked temps at the problem may be far cheaper than hiring vast armies of full-time workers, but it is likely far less effective as well.

Facebook and Google have not yet determined exactly what approach they’ll take to keep offensive targeting options off of their ad platforms. Facebook is still assessing the situation, but is considering limiting which user profile fields advertisers can target, according to Facebook spokesperson Joe Osborne.

“Our teams are considering things like limiting the total number of fields available or adding more reviews of fields before they show up in ads creation,” Osborne said in an email to Quartz. (Ads creation is the area of Facebook where advertisers can customize their ads.)

Google said in a statement that its ad-targeting system already identifies some hate speech, and rejects certain ads altogether, but that the company will continue to work on the problem.

“Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing. We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions. That’s not good enough and we’re not making excuses. We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again,” the company said.
