Our Programs

Tech Equity

The Tech Equity Initiative is a multi-faceted effort designed to eliminate bias in algorithm-based systems used in housing and financial services, increase transparency and explainability for AI tools, outline ethical standards for responsible tech, advance effective policies for regulating AI tools, and increase diversity and inclusion in the tech field. The goal is to have our “gold standard” of algorithmic fairness adopted by regulators, developers, and consumers of AI-based systems.

Many of the technologies used in housing and financial services are not fair to women and people of color, yet they impact every area of our lives. From determining whether people can get loans or how much they will pay for them to whether a sick patient can get the health care they need, algorithms drive important decisions. A mathematical formula can dictate whether a new college graduate can get a job, a single mom can rent an apartment, or a family can get insurance for their new home. Algorithms can be life-altering. That’s why it is so important that they are fair and do not introduce bias into the process.

Many factors contribute to unfair and disparate outcomes in tech, including structural barriers like residential segregation, the racial wealth gap, and the dual credit market. Adopting solutions that have less or no discriminatory impact on consumers can help advance equitable opportunities for everyone.

This Initiative Focuses on Five Main Goals:

  • Developing solutions for removing bias from the technologies that shape our lives
  • Increasing transparency and explainability for AI tools
  • Advancing research to help reduce bias in tech
  • Developing policies that promote more effective oversight for AI tools
  • Supporting efforts to increase diversity, equity, and inclusion in the tech field

How We’ll Do It

During Phase 1, we will develop a proof of concept for debiasing AI. We will create a risk-based pricing utility that debiases logistic regression and other machine learning models, showing that fair pricing can be offered to people of color, women, and other underserved groups in the mortgage market.
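The utility’s internals are not described here, so as a rough illustration of what “debiasing” a logistic regression underwriting or pricing model can involve, the sketch below applies one well-known technique, sample reweighing (Kamiran & Calders), before fitting. The synthetic data, column names, and choice of technique are assumptions for illustration only, not NFHA’s actual method.

```python
# Illustrative sketch only: reweight training samples so that group membership
# and the outcome are statistically independent, then fit a logistic regression
# with those weights. All data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group: pd.Series, label: pd.Series) -> np.ndarray:
    """Weight each sample by P(group) * P(label) / P(group, label)."""
    weights = np.ones(len(label))
    for g in group.unique():
        for y in label.unique():
            mask = ((group == g) & (label == y)).to_numpy()
            expected = (group == g).mean() * (label == y).mean()
            observed = mask.mean()
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# Hypothetical applicant data; in practice this would be historical lending data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "debt_to_income": rng.random(1000),
    "loan_to_value": rng.random(1000),
    "group": rng.integers(0, 2, 1000),     # protected-group indicator
    "approved": rng.integers(0, 2, 1000),  # historical approval decision
})

X = df[["debt_to_income", "loan_to_value"]]
w = reweighing_weights(df["group"], df["approved"])
model = LogisticRegression().fit(X, df["approved"], sample_weight=w)

# A simple fairness check: compare predicted approval rates by group; reweighing
# aims to bring them closer together than an unweighted model would.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {preds[df['group'] == g].mean():.3f}")
```

Reweighing is only one of many approaches (others include fairness-constrained optimization and post-processing of model scores); the Phase 1 utility may work quite differently.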

During Phase 2, we will build a range of open-source tools designed to debias tech, engage stakeholders to learn more about how to create fair tech, and experiment with methodologies for reducing bias. We will also collaborate with civil rights, industry, regulatory, and academic organizations to conduct research, develop and implement effective policies, and educate key communities to advance tech equity.

Read about TEI’s Impact So Far

Virtual Briefing

A New Algorithm Auditing Approach: The PPM Framework

On Tuesday, March 22, 2022, the authors of the new PPM (Purpose, Process, and Monitoring) auditing framework, along with leaders in algorithmic fairness, hosted a Virtual Briefing on the National Fair Housing Alliance’s (NFHA’s) new framework. The PPM framework comprehensively covers the key stages of an algorithmic system’s life cycle: pre-development, development, and post-development, including monitoring.

NFHA’s Voice on AI and Tech Bias

NFHTA Forum: Mining the Data: Algorithmic Bias in Housing Related Transactions

Algorithm-based data systems are increasingly relied on to screen potential rental applicants, underwrite home mortgage loans, insure residential properties, and serve other housing-related purposes. Data sets and algorithms that are unrepresentative, insufficient, or biased perpetuate discriminatory outcomes. These outcomes include denial of housing opportunity, unfair terms and pricing, and continued housing segregation. Understanding how technological […]
2/20/2022

Racism leads to science that is biased, exclusionary, and even harmful. We’re experts on the ways racism and lack of diversity harm STEM and perpetuate inequalities. Let’s discuss!

Though science aims to be unbiased and objective, the backgrounds, experiences, and perspectives of those doing and informing the research shape the questions asked, the analysis done, and who benefits from the outcomes. Cultural ideas about people influence how scientists view animal species. The make-up of research teams impacts whose stories are considered worth investigating, what kinds of questions get financially supported, […]
9/4/2021

RFI and Comment on Financial Institutions’ Use of AI, Including Machine Learning

We, the undersigned civil rights, consumer, technology, and other advocacy organizations, are writing in response to the Agencies’ March 31, 2021 Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, including Machine Learning (the “RFI”). We applaud the Agencies for seeking input on the critically important topic of artificial intelligence (“AI”) and machine […]
9/4/2021

We’re Hiring

Join Our Team

If you are interested in being part of a dynamic team working to eliminate bias, increase transparency and explainability, and advance responsible policies in the tech space, please check out our available career opportunities.