The algorithms that detect hate speech online are biased against black people
The meaning of our words is all about context, something that computerized algorithms — not to mention the humans who create the datasets that train them — inevitably miss. Relying on algorithms to police hateful and abusive speech will backfire.
"leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English"
Research from doctoral student Maarten Sap has found "significant racial bias" against African Americans in an algorithm used to detect hate speech in online comments.
A new study shows that leading AI models are 1.5 times more likely to flag tweets written by African Americans as “offensive” compared to other tweets.
The industry is unable to see abuse as related to power differentials. In its attempts to be "equal," it of course ends up harming those with less power.
In creating AI to detect hate speech, "we need to be more mindful of minority group language that could be considered 'bad' by outside members," says Maarten Sap, a PhD student at the University of Washington.
Vox interviews PhD student Maarten Sap about broken hate speech detection algorithms (work published at ACL 2019 with Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith) ... one more time, CONTEXT MATTERS when it comes to language.
Computational linguistics in the news... the Discourse Processing Lab analyzes the linguistic characteristics of fake news, and two papers at #ACL2019 uncover how AI models for detecting hate speech can be prone to racial bias.
The algorithms that detect hate speech online are biased against black people, by Shirin Ghaffary, Vox.