You Are What You Code

Algorithmic Bias, Financial Inclusion & Gender

Does credit-scoring AI used by fintech companies discriminate against women?

A recent study by Women’s World Banking suggests sexist AI contributes to the $17B gender credit gap. Read the full report here.

NEW YORK – A new study from Women’s World Banking found that credit scoring artificial intelligence (AI) systems employed by global financial service providers are likely to discriminate against women, excluding them from loans and other financial services. The study’s findings suggest that financial technology companies are missing a major opportunity to close the existing $17 billion gender credit gap and help reach the nearly 1 billion women who remain unbanked.

The study, Algorithmic Bias, Financial Inclusion, and Gender, funded by the Visa Foundation, explores the promises and pitfalls of using digital tools to open up new credit to women, both as individuals and as entrepreneurs. Specifically, it examines where biases in AI emerge, how they are amplified, and the extent to which they work against women.

“The financial services industry needs to act immediately to address sexism in credit scoring technology – not only because it’s the right thing to do but also to better equip the industry to take advantage of a $17 billion market opportunity given the gender credit gap,” said Mary Ellen Iskenderian, CEO of Women’s World Banking. “This issue isn’t hypothetical – sexist credit scoring systems pose a real threat to women’s livelihoods, their families, the growth of their businesses, and the health of the economies to which they could contribute.”

Women’s World Banking researchers examined the data that app-based digital credit providers collect in order to create algorithms and conducted interviews with thought leaders and academics as well as digital credit practitioners, including data scientists, entrepreneurs, app developers, and coders. Women’s World Banking has also created a free interactive tool that allows researchers and practitioners to explore various bias scenarios.

The study found:

  • Algorithms themselves are often biased because the individuals creating them have unconscious biases that they code into the algorithms.
  • Biases also emerge because of incomplete, faulty, or prejudicial data sets that companies use to “train” the algorithm.
  • The majority of data sources are vulnerable to gender-based bias.
  • Data scientists and algorithm developers, who on the whole are U.S.-based, male, and high income, are not representative of the end customers being scored.

“Women have historically suffered from discrimination in lending decisions – and we can’t allow that to continue into the digital realm. Alternative credit scoring data can be a boon for women entrepreneurs who are often denied credit because of a lack of information. We need AI technologies to help women, not work against them,” Iskenderian added.

Algorithmic Bias, Financial Inclusion, and Gender recommends easily implementable and inexpensive strategies that financial institutions could use to reduce bias, including:

  • Identifying gender-based discrepancies in data by producing regular reports evaluating the issue (a minimal reporting sketch follows this list).
  • De-biasing scoring models by creating audits or checks to sit alongside the algorithm, and/or running post-processing calculations to consider whether outputs are fair.
  • Making bias everyone’s responsibility to address—from data scientists to the CEO. One way to do this is by establishing an internal committee to systematically review algorithmic decision-making.
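As an illustration of the reporting and audit recommendations above, here is a minimal sketch of what a recurring gender-disparity report could look like. It assumes the lender logs each scored application with the applicant’s gender and the final decision; the column names and the 0.8 flagging threshold (a common four-fifths rule of thumb) are illustrative assumptions, not prescriptions from the study.

```python
# Minimal sketch of a recurring gender-disparity report (illustrative only).
import pandas as pd

def gender_disparity_report(applications: pd.DataFrame) -> pd.DataFrame:
    """Summarize approval outcomes by gender and flag large gaps."""
    report = applications.groupby("gender")["approved"].agg(
        applicants="count", approval_rate="mean"
    )
    # Compare each group's approval rate to the best-treated group.
    report["rate_vs_best"] = report["approval_rate"] / report["approval_rate"].max()
    report["flagged"] = report["rate_vs_best"] < 0.8  # illustrative threshold
    return report

# Toy example
apps = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0, 1, 0, 0, 1, 1, 0, 1],
})
print(gender_disparity_report(apps))
```

A report like this is deliberately simple so that, as the study recommends, it can be read by management in plain language rather than only by data scientists.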

Excerpts from the Study, Algorithmic Bias, Financial Inclusion, and Gender

Finding bias is not as simple as finding a decision to be “unfair.” In fact, there are dozens of definitions of gender fairness, from keeping gendered data out of credit decisions to ensuring equal likelihood of granting credit to men and women. We began by defining fairness because financial services providers need to start with an articulation of what they mean when they say they pursue it.
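To make the point that these definitions can pull in different directions, here is a small sketch contrasting two of them on the same toy decisions: demographic parity (equal approval rates for men and women) and equal opportunity (equal approval rates among applicants who would in fact repay). The data and labels are illustrative assumptions, not figures from the study.

```python
# Two fairness definitions evaluated on the same toy lending decisions.
import numpy as np

def demographic_parity_gap(approved, gender):
    """Difference in overall approval rates between men and women."""
    approved, gender = np.asarray(approved), np.asarray(gender)
    return approved[gender == "M"].mean() - approved[gender == "F"].mean()

def equal_opportunity_gap(approved, gender, creditworthy):
    """Difference in approval rates among applicants who would repay."""
    approved = np.asarray(approved)
    gender = np.asarray(gender)
    creditworthy = np.asarray(creditworthy, dtype=bool)
    men = approved[(gender == "M") & creditworthy].mean()
    women = approved[(gender == "F") & creditworthy].mean()
    return men - women

# Toy example: the two definitions give different readings of the same decisions.
gender       = ["M", "M", "M", "M", "F", "F", "F", "F"]
creditworthy = [1, 1, 0, 0, 1, 1, 1, 0]
approved     = [1, 1, 1, 0, 1, 1, 0, 0]
print(demographic_parity_gap(approved, gender))               # 0.25
print(equal_opportunity_gap(approved, gender, creditworthy))  # ~0.33
```

A provider that satisfies one definition can still look biased under another, which is why an explicit articulation of the fairness goal has to come first.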

Pursuing fairness starts with a recognition of where biases emerge. One source of bias is the inputs used to create the algorithms: the data itself. Even if an institution does not use gender as an input, the data might be biased. Looking at the data that app-based digital credit providers collect gives us a picture of what biased data might include. Our analysis shows that the top digital credit companies in the world collect data on GPS location, phone hardware and software specifications, contact information, storage capacity, and network connections. All of these data sources might contain gender bias. As mentioned, women tend to have more unpaid care responsibilities and are less likely to own a smartphone or be connected to the internet.
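Because gender itself may never appear as an input, one way to act on this finding is to screen alternative-data features for how strongly they track gender before a model uses them. The sketch below is only an illustration of how such a screen might look; the feature names, data, and 0.3 correlation threshold are invented for the example, not drawn from the study.

```python
# Screen candidate alternative-data features for correlation with gender.
import pandas as pd

def proxy_screen(features: pd.DataFrame, gender: pd.Series, threshold: float = 0.3) -> pd.Series:
    """Flag features whose absolute correlation with gender exceeds the threshold."""
    is_female = (gender == "F").astype(float)
    corr = features.apply(lambda col: col.corr(is_female)).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# Toy example with made-up phone and mobility features
features = pd.DataFrame({
    "phone_storage_gb":    [16, 32, 128, 64, 16, 8, 256, 32],
    "night_gps_radius_km": [2, 3, 25, 18, 1, 2, 30, 4],
    "contacts_count":      [110, 95, 300, 240, 80, 120, 350, 100],
})
gender = pd.Series(["F", "F", "M", "M", "F", "F", "M", "F"])
print(proxy_screen(features, gender))  # features that behave as gender proxies
```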

There are many easily implementable bias mitigation strategies available to financial institutions, and they apply to algorithm developers and institutional management alike. For developers, mitigating algorithmic bias may mean de-biasing the data, creating audits or checks to sit alongside the algorithm, or running post-processing calculations to consider whether outputs are fair. For institutional management, mitigating algorithmic bias may mean asking for regular reports in plain language, working to be able to explain and justify gender-based discrepancies in the data, or setting up an internal committee to systematically review algorithmic decision-making. Mitigating bias requires intentionality at all levels, but it doesn’t have to be time-consuming or expensive.
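As one concrete illustration of the post-processing idea mentioned for developers, the sketch below adjusts the score cutoff separately for each gender so that roughly the same share of each group is approved. This is only one of several possible post-processing adjustments, and the scores, target rate, and function names are illustrative assumptions rather than the study’s method.

```python
# Post-processing sketch: per-group score cutoffs that equalize approval rates.
import numpy as np

def group_thresholds(scores, gender, target_approval_rate=0.5):
    """Pick, for each group, the cutoff that approves about the target share."""
    scores = np.asarray(scores, dtype=float)
    gender = np.asarray(gender)
    thresholds = {}
    for g in np.unique(gender):
        group_scores = scores[gender == g]
        # Approving scores above the (1 - rate) quantile approves roughly `rate` of the group.
        thresholds[g] = float(np.quantile(group_scores, 1 - target_approval_rate))
    return thresholds

scores = [0.62, 0.55, 0.48, 0.40, 0.71, 0.66, 0.58, 0.52]
gender = ["F", "F", "F", "F", "M", "M", "M", "M"]
print(group_thresholds(scores, gender))  # e.g. {'F': 0.515, 'M': 0.62}
```

Whether such an adjustment is appropriate depends on which fairness definition the provider has committed to, which is why developer-level fixes like this sit alongside management-level review rather than replacing it.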