A month ago, I was trying to demonstrate how to fix gender bias in contextual models like BERT for my book. I found the bias was way worse than I imagined, and major technologies failed to even recognize "hers" as a pronoun. just covered it.
We all need to learn to be anti-racist. That includes our AI algorithms!
A great illustration of how training an AI on historical data embeds historical biases into the AI. This is why it's essential that we treat explainability as a first-class requirement for these systems.
I know so many brilliant black women who work on these issues, like & , and yet couldn't do better than quote some white scholars.
Artificial Intelligence systems aren't just learning our *current* biases; the data we are feeding them also teaches them historical biases that we've moved beyond.
This NYT article about bias in #AI credits the excellent , but doesn't mention the foundational work of academics of color. They refer to 's tweet on Google labeling photos of Black people as gorillas without mentioning his name.
As new, more complex A.I. moves into an increasingly wide array of products, tech companies will be pressured to guard against the unexpected biases that are being discovered.
File in the ' was right' compartment: "We Teach A.I. Systems Everything, Including Our Biases. Researchers say computer systems are learning from lots and lots of digitized books and news articles that could bake old attitudes into new technology."
Understanding and removing bias from AI algorithms is an ongoing but important challenge. Knowing the source of the data used to train the algorithm is an important first step. explores this in a great article.
AI will not save us from the worst in ourselves; it will reveal it. Translation: bias in, bias out.
How is working to advance a very important technology, one focused on improving the capabilities needed to avoid the inevitable bias in current machine learning approaches.
Some A.I. systems "are more likely to associate men with computer programming, for example, and generally don’t give women enough credit." Okaaaay.
BERT, universal language models, and biases in data. #AI
Yet another example of #biasedmachines picking up on their human creators' biases. It should be no surprise that "garbage in, garbage out" is just as true in the domain of software as everywhere else.
We Teach A.I. Systems Everything, Including Our Biases
We Teach A.I. Systems Everything, Including Our Biases. Researchers say computer systems are learning from lots and lots of digitized books and news articles that could bake old attitudes into new technology
Are AI Systems Learning from Our Bad Behavior? We Teach A.I. Systems Everything, Including Our Biases #AI #bias #big_data #learning #ELMO #ERNIE #BERT
We Teach #AI Systems Everything, Including Our Biases via &
AI systems are socially biased because (surprise!) they learn from humans
HPSG Easter Egg: zoom in on the book in the image of me in the recent story that featured & me. I used HPSG from Pollard & Sag to help create a test suite.
This really shouldn't surprise anyone. We Teach A.I. Systems Everything, Including Our Biases
#AI Systems Echo Biases They’re Fed, Putting Scientists on Guard: The dangers of baking old attitudes into new technology, by via
A.I. "universal language models" echo biases they're fed, putting scientists on guard: "Researchers say computer systems are learning from lots and lots of digitized books and news articles that could bake old attitudes into new technology."