11/9/2023 in News & Media, NFHA News, Responsible AI/Tech Equity, Testimony

Lisa Rice’s Testimony Before the Fourth Bipartisan Senate Forum on Artificial Intelligence


Follow Lisa Rice on X at @itslisarice

Introduction
I am Lisa Rice, President and CEO of the National Fair Housing Alliance (NFHA), the country’s only national civil rights organization dedicated to eliminating all forms of housing discrimination and fostering well-resourced, vibrant, resilient communities. With 200 organizational members, NFHA is the trade association for fair housing groups that work at the local, state, regional, and/or national levels to advance fair housing and equitable opportunities.

Throughout this document, I will use the terms Artificial Intelligence (AI) and Automated Systems (AS) interchangeably, with the understanding that AI or AS is driven by business logic, data, models, or policy implementation.

NFHA has addressed harms associated with AI and automated systems since its inception in 1988. We first concentrated our efforts on prohibiting or restricting the use of discriminatory automated systems in housing and financial services, such as credit and insurance scoring, underwriting, and pricing models.

Early settlements with entities like Prudential, State Farm, Nationwide, and Allstate addressed these discriminatory systems. Several years ago, while litigating a major case against then-Facebook, we saw even more clearly that technology, including AI and automated systems, is the new civil and human rights frontier and that, as a civil rights organization, we had to be a leader in this sector. Thus, we established our Responsible AI division with an initial focus on Tech Equity. The division is composed of researchers and engineers committed to civil and human rights principles and is headed by one of the world’s premier AI research scientists, Dr. Michael Akinwumi. NFHA’s Responsible AI division has five workstreams founded on the following technical and policy research pillars:

  • Tech Equity: We focus on developing and advocating for methodologies that ensure automated systems offer equitable access to housing opportunities.
  • Data Privacy: We strive to test and promote technologies that balance consumer privacy with the need for data access to eliminate bias in automated systems.
  • Explainability: We advocate for consumers’ right to explanations for automated decisions and work to test and promote methodologies that clarify the reasoning or design behind automated systems.
  • Reliability: We focus on testing and advancing techniques to ensure only safe and valid automated systems are used in housing applications.
  • Human Alternative Systems: We work to advance technical and policy solutions that determine when human-centered alternatives should take precedence over automated systems in housing decisions, particularly when data quality is poor, infrastructure is inadequate, or there is little social awareness of the harms of automated systems.

Since launching our Responsible AI work, NFHA has contributed to, advocated for, and created technical and policy solutions that advance the responsible use of technologies in housing, including the White House’s Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology’s AI Risk Management Framework, a state-of-the-art framework for auditing algorithmic systems, and other policies.

Automated systems affect every aspect of our lives. They can provide access to key services that open the doors of opportunity, or they can block access to the critical services we need to survive and thrive. Automated systems can determine whether consumers will have access to housing, get a living-wage job, access quality credit, receive an accurate and fair valuation of their home, obtain life-saving healthcare, receive compensation from their insurance company for a loss, get released on bail after an arrest, or serve a prison sentence.

Because the math behind automated systems is not good, bad, or neutral, but instead reflects the choices of the people who design these systems, it is imperative that we work rapidly to eliminate bias from them.

Studies reveal that structural inequality, including the harms perpetuated by unethical tech, is not only having a deleterious impact on individuals and communities but is also stifling the nation’s economic progress. Many innovations have been made in the mathematics of automated systems; these techniques can be used to avoid, mitigate, or manage the biases inherent in these systems. Much as scientists used the coronavirus, a deadly virus that has killed millions of people around the world, to develop life-saving vaccines, we can intentionally use math to detect, diagnose, and cure technologies that are extremely harmful to people and communities.
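As one illustration of what "using math to detect bias" can mean in practice, the minimal sketch below computes an adverse impact ratio, a standard screening measure (often applied via the "four-fifths rule") that compares approval rates across groups. The function, data, and threshold here are illustrative assumptions for exposition, not NFHA's methodology or any regulator's standard.

    # Minimal sketch: flagging potential disparate impact with the
    # adverse impact ratio. The data and the 0.8 threshold below are
    # illustrative assumptions, not an official testing standard.

    def adverse_impact_ratio(approvals_group, approvals_reference):
        """Ratio of a group's approval rate to the reference group's rate.

        Values well below 1.0 (commonly, below 0.8 under the
        'four-fifths rule') can flag a model's decisions for closer
        fairness review.
        """
        rate_group = sum(approvals_group) / len(approvals_group)
        rate_reference = sum(approvals_reference) / len(approvals_reference)
        return rate_group / rate_reference

    # Hypothetical approval outcomes (1 = approved, 0 = denied).
    group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 30% approval rate
    group_b = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approval rate

    air = adverse_impact_ratio(group_a, group_b)
    print(f"Adverse impact ratio: {air:.2f}")  # prints 0.38
    if air < 0.8:
        print("Potential disparate impact: review the model's inputs and design.")

A screen like this does not by itself prove discrimination; it is a diagnostic that tells analysts where to look more closely at a system's data, features, and design choices.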

Read The Full Testimony Here