The National Fair Housing Alliance Releases A New Framework for Auditing Algorithmic Systems: Purpose, Process, and Monitoring (PPM)
The PPM Framework establishes the gold standard for auditing algorithmic models, including Machine Learning and AI systems
WASHINGTON, DC—The National Fair Housing Alliance (NFHA) released a new structure for auditing algorithmic systems called Purpose, Process, and Monitoring (PPM) that captures the life cycle of a model: pre-development, development, and post-development, including monitoring. This framework provides an approach for evaluating internal controls and mitigating risks that may be inherent in algorithmic systems and harmful to consumers.
Headlines abound about ways algorithms discriminate against and harm consumers in the areas of health, employment, housing, law enforcement, marketing, credit, and more. Discriminatory models can lead to loss of housing, loan denials, job loss, equity- and wealth-stripping, incarceration, debilitating health, and other devastating consequences.
The PPM framework enables regulators, businesses, researchers, civil rights groups, and other stakeholders to conduct a critical analysis of an algorithmic system to identify its assumptions and limitations and produce appropriate recommendations to mitigate fairness and privacy risks to consumers. “NFHA hopes it will become a gold standard for auditing algorithmic systems used in the housing and lending sectors as well as in other fields where algorithms are being used to make decisions that impact consumers,” stated Lisa Rice, President & CEO of NFHA.
“NFHA’s PPM framework provides an equity-centered auditing solution at a time when policymakers and civil rights organizations are calling for fairness, accountability, transparency, explainability, and interpretability,” stated Dr. Michael Akinwumi, NFHA’s Chief Tech Equity Officer. “The framework is system-oriented, and it covers every decision point involved in designing, developing, deploying, and monitoring an algorithmic solution. It will drastically limit the capacity of models to harm consumers if embraced by the industry,” he added.
“This framework builds on NFHA’s earlier work informing policies on the use of biometric technologies, on identifying and managing bias in Artificial Intelligence, and on financial institutions’ use of AI and Machine Learning, as well as on standards for federal regulatory policy,” explained Snigdha Sharma, Tech Equity Analyst at NFHA.
The PPM framework also draws on existing resources, such as CRISP-DM (Cross-Industry Standard Process for Data Mining), the Model Risk Management supervisory guidance from the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency, and the National Institute of Standards and Technology’s (NIST) proposal for identifying and managing bias in Artificial Intelligence, to mitigate consumer risks associated with big data systems.
NFHA started its Tech Equity Initiative after decades of work to mitigate bias in data-driven systems used in the housing and finance sectors. The Initiative has five main goals: developing solutions for debiasing tech; increasing fairness, transparency, explainability, and interpretability of AI tools; advancing research to help reduce bias in algorithmic systems; developing policies to promote effective oversight for AI tools; and supporting efforts to increase diversity, equity, and inclusion in the tech field.
Click here to access the Purpose, Process, and Monitoring (PPM) framework.
###
The National Fair Housing Alliance (NFHA) is the country’s only national civil rights organization dedicated solely to eliminating all forms of housing and lending discrimination and ensuring equal opportunities for all people. As the trade association for over 170 fair housing and justice-centered organizations throughout the U.S. and its territories, NFHA works to dismantle longstanding barriers to equity and build diverse, inclusive, well-resourced communities.