Researchers find machine learning models still struggle to detect hate speech

Detecting hate speech is a task even state-of-the-art machine learning models struggle with. That’s because harmful speech comes in many different forms, and models must learn to differentiate each one from innocuous turns of phrase. Historically, hate speech detection models have been evaluated by measuring their performance on held-out test data using metrics like accuracy. But this approach makes it tough to identify a model’s specific weak points, and it risks overestimating a model’s quality because of gaps and biases in hate speech datasets.

In search of a better solution, researchers at the University of Oxford, the Alan Turing Institute, Utrecht University, and the University of Sheffield developed HateCheck, an English-language benchmark for hate speech detection models created by reviewing previous research and conducting interviews with 16 British, German, and American nongovernmental organizations (NGOs) whose work relates to online hate. Testing HateCheck on near-state-of-the-art detection models — as well as Jigsaw’s Perspective tool — revealed “critical weaknesses” in these models, according to the team, illustrating the benchmark’s utility.

HateCheck comprises 29 functional tests designed to be difficult for models that rely on simplistic decision rules, covering areas such as derogatory hate speech, threatening language, and hate expressed through profanity. Eighteen of the tests cover distinct expressions of hate (e.g., statements like “I hate Muslims,” “Typical of a woman to be that stupid,” “Black people are scum”), while the remaining 11 tests cover what the researchers call contrastive non-hate, or content that shares linguistic features with hateful expressions (e.g., “I absolutely adore women,” which contrasts with “I absolutely loathe women”).
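
The benchmark’s structure is straightforward to reproduce in miniature. The sketch below is a rough illustration, not the actual HateCheck data or evaluation code: the test cases and the keyword classifier are hypothetical placeholders, but it shows how accuracy can be reported per functional test rather than as a single aggregate number.

```python
from collections import defaultdict

# Hypothetical test cases in the spirit of HateCheck: each case pairs a text
# with a gold label and the functional test ("functionality") it probes.
TEST_CASES = [
    {"functionality": "derogation",        "text": "[IDENTITY] are scum",           "label": "hateful"},
    {"functionality": "slur_reclaimed",    "text": "We're here, we're queer",       "label": "non-hateful"},
    {"functionality": "negation_non_hate", "text": "No [IDENTITY] deserves to die", "label": "non-hateful"},
    {"functionality": "counter_speech",    "text": '"[IDENTITY] are scum" is vile', "label": "non-hateful"},
]

def classify(text: str) -> str:
    """Placeholder for a real hate speech classifier (e.g., a fine-tuned DistilBERT)."""
    # A naive keyword rule -- exactly the kind of shortcut HateCheck is built to expose.
    return "hateful" if "scum" in text.lower() else "non-hateful"

def accuracy_by_functionality(cases):
    """Score the classifier separately on each functional test."""
    correct, total = defaultdict(int), defaultdict(int)
    for case in cases:
        total[case["functionality"]] += 1
        if classify(case["text"]) == case["label"]:
            correct[case["functionality"]] += 1
    return {name: correct[name] / total[name] for name in total}

if __name__ == "__main__":
    for functionality, acc in accuracy_by_functionality(TEST_CASES).items():
        print(f"{functionality:20s} {acc:.0%}")
```

Because the toy classifier flags anything containing the keyword, it fails the counter-speech case even though that text condemns hate, which is precisely the failure mode the per-functionality breakdown is meant to surface.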

In experiments, the researchers analyzed two DistilBERT models that achieved strong performance on public hate speech datasets and the “identity attack” model from Perspective, an API released in 2017 for content moderation. Perspective is maintained by Google’s Counter Abuse Technology team and Jigsaw, the organization working under Google parent company Alphabet to tackle cyberbullying and disinformation, and it’s used by media organizations including the New York Times and Vox Media.
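
For context, Perspective is queried over HTTPS. The sketch below is based on the API’s publicly documented request format and scores a single comment for the IDENTITY_ATTACK attribute; the API key is a placeholder, and the endpoint, attribute name, and response fields should be checked against Google’s current documentation.

```python
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; issued through the Perspective API signup process

def identity_attack_score(text: str) -> float:
    """Return Perspective's IDENTITY_ATTACK summary score (0.0-1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"IDENTITY_ATTACK": {}},
    }
    response = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=payload)
    response.raise_for_status()
    return response.json()["attributeScores"]["IDENTITY_ATTACK"]["summaryScore"]["value"]

# A moderation pipeline would typically compare the score to a chosen threshold,
# e.g., flag the comment for human review if identity_attack_score(text) > 0.5.
```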

The researchers found that as of December 2020, all of the models appear to be overly sensitive to specific keywords — mainly slurs and profanity — and often misclassify non-hateful contrasts (like negation and counter-speech) around hateful phrases.

Above: Examples of hate speech in HateCheck, along with the accuracy of each model the researchers tested.

The Perspective model particularly struggles with denouncements of hate that quote the hate speech or make direct reference to it, classifying only 15.6% to 18.4% of these correctly. The model recognizes just 66% of hate speech that uses a slur and 62.9% of abuse targeted at “non-protected” groups like “artists” and “capitalists” (in statements like “artists are parasites to our society” and “death to all capitalists”), and only 54% of “reclaimed” slurs like “queer.” Moreover, the Perspective API can fail to catch spelling variations such as missing characters (74.3% accuracy), added spaces between characters (74%), and spellings that substitute numbers for letters (68.2%).
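
Those spelling variations are simple, mechanical perturbations. The sketch below shows roughly how such variants can be generated from a seed sentence; the character substitutions and helper names here are illustrative, not the exact rules used in the paper.

```python
import random

# Illustrative letter-to-digit substitutions for "leet speak"-style spellings.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def drop_random_char(text: str, rng: random.Random) -> str:
    """Missing-character variant: delete one character at random."""
    i = rng.randrange(len(text))
    return text[:i] + text[i + 1:]

def space_out(text: str) -> str:
    """Added-spaces variant: insert a space between the characters of each word."""
    return " ".join(" ".join(word) for word in text.split())

def leetify(text: str) -> str:
    """Number-substitution variant: swap common letters for look-alike digits."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

if __name__ == "__main__":
    rng = random.Random(0)
    seed = "I hate [IDENTITY]"
    for variant in (drop_random_char(seed, rng), space_out(seed), leetify(seed)):
        print(variant)
```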

As for the DistilBERT models, they exhibit bias in their classifications across groups defined by gender, ethnicity, race, and sexual orientation, misclassifying more content directed at some groups than at others, according to the researchers. One of the models was only 30.9% accurate in identifying hate speech against women and 25.4% accurate in identifying speech against disabled people. The other was 39.4% accurate for hate speech against immigrants and 46.8% accurate for speech against Black people.

“It appears that all models to some extent encode simple keyword-based decision rules (e.g. ‘slurs are hateful’ or ‘slurs are non-hateful’) rather than capturing the relevant linguistic phenomena (e.g., ‘slurs can have non-hateful reclaimed uses’). They [also] appear to not sufficiently register linguistic signals that reframe hateful phrases into clearly non-hateful ones (e.g. ‘No Muslim deserves to die’),” the researchers wrote in a preprint paper describing their work.

The researchers suggest targeted data augmentation, or training models on additional datasets containing examples of the hate speech they failed to detect, as one technique for improving accuracy. But examples like Facebook’s uneven campaign against hate speech show how significant the technical challenges remain. Facebook claims to have invested substantially in AI content-filtering technologies, proactively detecting as much as 94.7% of the hate speech it ultimately removes. But the company still fails to stem the spread of problematic posts, and a recent NBC investigation found that on Instagram in the U.S. last year, Black users were about 50% more likely to have their accounts disabled by automated moderation systems than users whose activity indicated they were white.
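
One way to operationalize the augmentation idea is to fold a model’s failure cases back into its training set and fine-tune again. The sketch below is a minimal, assumption-laden example using Hugging Face’s transformers and datasets libraries with a generic DistilBERT checkpoint; the placeholder examples, label convention (1 = hateful, 0 = non-hateful), and hyperparameters are all stand-ins rather than the researchers’ actual setup.

```python
from datasets import Dataset, concatenate_datasets
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder corpora: the original labeled training data plus targeted examples
# the model previously misclassified (e.g., counter-speech or reclaimed slurs
# labeled as non-hateful). In practice these would be loaded from real datasets.
original = Dataset.from_dict({
    "text": ["an original hateful training example"],
    "label": [1],
})
augmentation = Dataset.from_dict({
    "text": ["a targeted counter-speech example the model previously got wrong"],
    "label": [0],
})
train_data = concatenate_datasets([original, augmentation]).shuffle(seed=42)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hate-speech-augmented",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_data,
)
trainer.train()
```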

“For practical applications such as content moderation, these are critical weaknesses,” the researchers continued. “Models that misclassify reclaimed slurs penalize the very communities that are commonly targeted by hate speech. Models that misclassify counter-speech undermine positive efforts to fight hate speech. Models that are biased in their target coverage are likely to create and entrench biases in the protections afforded to different groups.”

