Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation

Word embeddings inherit strong gender bias from their training data, and this bias can be further amplified by downstream models. We propose to purify word embeddings of corpus regularities such as word frequency before inferring and removing the gender subspace, which significantly improves debiasing performance.
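The idea can be sketched as a two-step projection: first remove a frequency-related direction (found via PCA over the embedding matrix), then apply the standard Hard-Debias step of projecting out the gender direction. The sketch below is a minimal illustration under simplifying assumptions, not the authors' full method: the paper selects the frequency component empirically, whereas here `freq_pc_index` is simply a hypothetical parameter, and the gender direction is approximated from a single he/she pair.

```python
import numpy as np

def double_hard_debias(emb, he_vec, she_vec, freq_pc_index=0):
    """Minimal sketch of the Double-Hard Debias idea.

    emb: (n_words, dim) embedding matrix
    he_vec, she_vec: vectors used to approximate the gender direction
    freq_pc_index: index of the principal component assumed to encode
                   word frequency (a hypothetical choice for this sketch)
    """
    # Center the embeddings and compute principal components,
    # which serve as candidate frequency-related directions.
    mu = emb.mean(axis=0)
    centered = emb - mu
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    d = Vt[freq_pc_index]

    # Step 1 ("purify"): project out the assumed frequency direction.
    purified = centered - np.outer(centered @ d, d)

    # Step 2 (Hard-Debias): project out the gender direction.
    g = he_vec - she_vec
    g = g / np.linalg.norm(g)
    debiased = purified - np.outer(purified @ g, g)
    return debiased
```

After the second projection, every debiased vector is exactly orthogonal to the estimated gender direction; the paper's contribution is that removing the frequency direction first makes that gender direction a much cleaner target.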

4 mentions: @SFResearch, @danieljpeter
Keywords: embedding
Date: 2020/06/30 20:30

Referring Tweets

@SFResearch Our new work, Double-Hard Debias, proposes a new method to mitigate the negative effects that word frequency features can have on debiasing algorithms. Keep reading to learn more 📚 Blog: t.co/NaVKGliFzp Github: t.co/OebgYUV69e Paper: t.co/HA5G6dhJXk
