The Tech Equity Initiative is a multi-faceted effort designed to eliminate bias in algorithm-based systems used in housing and financial services, increase transparency and explainability for AI tools, outline ethical standards for responsible tech, advance effective policies for regulating AI tools, and increase diversity and inclusion in the tech field. The goal is to have our “gold standard” of algorithmic fairness adopted by regulators, developers, and consumers of AI-based systems.
Many of the technologies used in housing and financial services are not fair for women and people of color, yet they impact every area of our lives. From determinations about whether people can get loans or how much they will pay for them to whether a sick patient can get the healthcare they need, algorithms drive important decisions. A mathematical formula can dictate whether a new college graduate can get a job, whether a single mom can rent an apartment, or whether a family can get insurance for their new home. Algorithms can be life-altering. That’s why it is so important that they are fair and do not infuse bias into the process.
Many factors contribute to unfair and disparate outcomes in tech, including structural barriers such as residential segregation, the racial wealth gap, and the dual credit market. Adopting solutions that have less or no discriminatory impact on consumers can help advance equitable opportunities for everyone.
This Initiative Focuses on Five Main Goals:
- Developing solutions for removing bias from the technologies that shape our lives
- Increasing transparency and explainability for AI tools
- Advancing research to help reduce bias in tech
- Developing policies that promote more effective oversight for AI tools
- Supporting efforts to increase diversity, equity, and inclusion in the tech field
How We’ll Do It
During Phase 1, we will develop a proof of concept for debiasing AI. We will create a risk-based pricing utility to debias logistic regression and machine learning models to show that fair pricing can be offered to people of color, women, and other underserved groups in the mortgage market.
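To make the idea of debiasing a logistic regression model concrete, here is a minimal sketch of one well-known pre-processing technique, the reweighing method of Kamiran and Calders. This is an illustrative example only, not NFHA's actual pricing utility: it reweights training examples so that a protected attribute (e.g., group membership) becomes statistically independent of the outcome, then fits a weighted logistic regression. All function names and data here are hypothetical.

```python
import math

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: weight each example by
    P(group) * P(label) / P(group, label), so that group membership
    and outcome are independent in the reweighted data."""
    n = len(labels)
    count_g, count_y, count_gy = {}, {}, {}
    for g, y in zip(groups, labels):
        count_g[g] = count_g.get(g, 0) + 1
        count_y[y] = count_y.get(y, 0) + 1
        count_gy[(g, y)] = count_gy.get((g, y), 0) + 1
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

def train_weighted_logreg(X, y, w, lr=0.1, epochs=500):
    """Weighted logistic regression fit by batch gradient descent.
    Returns [intercept, coef_1, ..., coef_d]."""
    d = len(X[0])
    beta = [0.0] * (d + 1)
    n = len(y)
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi, wi in zip(X, y, w):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = wi * (p - yi)   # each example's gradient is scaled by its weight
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

# Toy data: group 0 is underrepresented among positive outcomes,
# so reweighing up-weights (group 0, label 1) examples.
groups = [0, 0, 1, 1]
labels = [0, 1, 1, 1]
weights = reweighing_weights(groups, labels)
X = [[0.0], [1.0], [1.0], [2.0]]
beta = train_weighted_logreg(X, labels, weights)
```

In this toy example, an underrepresented positive case such as (group 0, label 1) receives a weight above 1, while overrepresented cells are down-weighted, pushing the fitted model toward equal outcome rates across groups. Production systems would pair a technique like this with fairness metrics and the kind of ongoing monitoring described below.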
During Phase 2, we will build a range of open-source tools designed to debias tech and engage stakeholders to learn more about how to create fair tech and experiment with methodologies for reducing bias. We will also collaborate with civil rights, industry, regulatory, and academic organizations to conduct research, develop and implement effective policies, and educate key communities to advance tech equity.
A New Algorithm Auditing Approach: The PPM Framework
On Tuesday, March 22, 2022, the authors of the new PPM (Purpose, Process, and Monitoring) auditing framework, along with leaders in algorithmic fairness, hosted a Virtual Briefing on the National Fair Housing Alliance’s (NFHA’s) new PPM framework. The framework comprehensively covers the key stages of an algorithmic system’s life cycle: pre-development, development, and post-development, including monitoring.