StereoSet measures racism, sexism, and other forms of bias in AI language models

AI researchers from MIT, Intel, and Canadian AI initiative CIFAR have found high levels of stereotypical bias in some of the most popular pretrained models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. The analysis was performed as part of the launch of StereoSet, a data set, challenge, leaderboard, and set of metrics for evaluating racism, sexism, and stereotypes related to religion and profession in pretrained language models.
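
For readers who want to look at the benchmark data themselves, the sketch below shows one way to load StereoSet with the Hugging Face `datasets` library. The Hub identifier, configuration name, and field names are assumptions about how the benchmark is commonly distributed, not details taken from this article.

```python
from datasets import load_dataset

# Sketch: inspecting StereoSet examples.
# Assumption: the benchmark is published on the Hugging Face Hub as "stereoset"
# with an "intersentence" configuration; field names below are illustrative.
stereoset = load_dataset("stereoset", "intersentence", split="validation")

sample = stereoset[0]
print(sample["context"])    # context sentence (assumed field name)
print(sample["bias_type"])  # e.g. race, gender, religion, or profession
print(sample["sentences"])  # candidate continuations with stereotype labels
```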

The authors believe their work is the first large-scale study to show stereotypes in pretrained language models beyond gender bias. BERT is generally known as one of the top-performing language models in recent years, while GPT-2, RoBERTa, and XLNet each claimed top spots on the GLUE leaderboard last year. Half of the GLUE leaderboard's top 10 today, including RoBERTa, are variations of BERT.

https://venturebeat.com/2020/04/22/stereoset-measures-racism-sexism-and-other-forms-of-bias-in-ai-language-models/