Inescapable AI

The Ways AI Decides How Low Income People Work, Live, Learn, and Survive

Executive Summary

The use of artificial intelligence, or AI, by governments, landlords, employers, and other powerful private interests restricts the opportunities of low-income people in every basic aspect of life: at home, at work, in school, at government offices, and within families. While popular discourse has recently centered on the newest versions of AI that generate answers, reports, or images in response to users’ questions or prompts, such technologies derive from a lineage of automation and algorithms that have been in use for decades with established patterns of harm to low-income communities. As such, now is a critical moment to take stock and correct course before AI of any level of technical sophistication becomes entrenched as a legitimate way to make key decisions about the people society marginalizes.

Employing a broad definition of AI, this report represents the first known effort to comprehensively explain and quantify the reach of AI-based decision-making among low-income people in the United States. It establishes that essentially all 92 million low-income people in the U.S. states—everyone whose income is less than 200 percent of the federal poverty line—have some basic aspect of their lives decided by AI.

While these decisions may not be universally negative, the use of such technologies over the past two decades demonstrates the capacity for broad, systemic harms with immense suffering at scales and speeds that were impossible with the human-centered methods that preceded them. AI is often inaccurate, absurd, biased, and inscrutable. Even when it is not plagued by these problems, it is often carrying out a purpose that is unfavorable to low-income people, such as cutting benefits or making housing, education, or employment opportunities harder to access.

Where humans make decisions without AI, even wrong or biased ones, their scope is limited to their personal reach. Only elected officials, corporate executives, or high-ranking government staff might make decisions for masses of people, and they generally must do it through formal government or corporate policy processes that involve at least minimal levels of public or stakeholder scrutiny.

AI, though, enhances the ability of both high-ranking and lower-level decision-makers to act at scale, often through informal or surreptitious means. They can cut the in-home care of 4,000 disabled people in Arkansas whose medical conditions have not improved, falsely accuse 40,000 people in Michigan of Unemployment Insurance fraud, or subject 4,000,000 people in Texas to the potential loss of health insurance through a labyrinthine Medicaid enrollment system that even agency staff cannot navigate—all real examples in which the bulk of harm happened within months of the AI system going live. And that is just in the area of public benefits. AI-based decision-making also happens on grand scales with high stakes in other areas, such as housing, employment, K–12 education, domestic violence, and child welfare.
