Open Letter on AI, from an expert on ethical AI

Who am I?

My name is Carey Anne Nadeau and I’ve spent 15 years building AI and other analytics for the benefit of vulnerable populations. I began at the Urban Institute, studying the factors that contribute most to re-incarceration (housing stability, employment, reliable transport, and mental health support), and worked alongside the Innocence Project to predict the factors that contribute most to false convictions (work that ultimately resulted in the release of two innocent men).

I’ve gotten a world-class education from MIT and am responsible for the code that quantifies the Living Wage, in use today by major retailers including the food service industry in NYC, US-based IKEAs, and Patagonia, among others. I’ve published original research at the Brookings Institution revealing emergent concentrations and the suburbanization of poverty, and received national recognition from the Department of Transportation for my original analysis of the economic impact of inaccessibility.

I’m a VC-backed founder and have raised over $30M to date to build LOOP — a car insurer that uses AI to eliminate the biases that make being credit-poor more expensive than being affluent with criminal convictions for Driving Under the Influence (DUIs). Despite operating in a heavily regulated industry, LOOP is the first insurer to remove biased factors from pricing including, but not limited to: redlining, credit scores, educational attainment, and occupation — helping safe drivers who are unfairly discriminated against because of where they live, what they can afford to own, or their credit.

I’m a woman, neuro-divergent, and of Native descent. I was raised in a working-class household with my grandmother, and both of my parents worked retail jobs. My mother’s job was outsourced to a robot after 47 years of working at the same company, and my father’s body was permanently damaged from hard, physical labor.

What I have come to believe through my experience:

The pace of AI development, combined with already advanced technology, computing power, and available data, outpaces the well-intentioned people trying to regulate it. Ultimately the effort produces anti-competitive, fragmented, and poorly enforced regulation on the books. Even so, we exist in the geopolitical reality that China will have no such laws — a dangerous reality that threatens not only America’s economic superiority but also its physical security in a meaningful way.

But if regulation is unsuited for an AI foe, and regulators are too old, naive, and intoxicated by the allure and lobbying dollars of today’s Tech Barons, then what will we do to mitigate the real, negative consequences of AI that is trained to exploit and extract profit efficiently at the expense of our values of freedom, privacy, free markets, and free will? How will we prevent the reduction of choice and the higher costs that come naturally as industry consolidates into the AI-enabled haves and more localized have-nots?

The questions AI regulation should focus on are the things that worry me most about AI:

  1. How will our anti-trust laws evolve? How will we preserve competition if the likes of Google, Microsoft, Facebook, and X are accumulating significant head-start advantages? If it seems outrageous to prioritize breaking up big companies accruing monopoly profits, that is a sign of how naive we are about the supposed piety of tech moguls (take Facebook, for example) and how unprepared we are for our current moment. As we seek to regulate AI, one must consider who sits at the table and who has agency.

    A good read and a quick, non-technical primer is Tim Wu’s The Curse of Bigness (Crib Notes: NY Times Book Review). By way of introduction (and warning), Wu writes: “Extreme economic concentration yields gross inequality and material suffering, feeding an appetite for nationalistic and extremist leadership.”

  2. How will consumers gain a new ability to supervise how their data are being used and to give or withdraw consent? The will and ethic of the American consumer, with their spending power and focus of attention, are well positioned to check the unbridled private sector. Brazilian draft legislation provides a model for ensuring the public has the visibility required to provide this check: users are entitled to know when they are interacting with AI, to an explanation of an AI decision, and to contest and request human intervention for “high impact” decisions — though the legislation is novel and untested.

  3. How will we maintain national security in the context of a China that does not subscribe to a philosophical concept of data privacy? China gives us a window into the power of AI to police as a means of disenfranchisement, to criminalize dissent, and to develop powerful, deadly weaponry and surveillance capabilities (read China’s AI Development Plan). How will we mitigate the likes of Thiel’s Palantir, or counter-balance it with different objectives, if we prefer to prioritize peace?

As we seek to regulate AI, author Tim Wu helps summarize what we should really care about: “the economic conditions under which life is lived and the effects of the economy on one’s character and on the nation’s soul.” I couldn’t agree more. We collectively already own, and have the agency to control, the character of our country. We must focus on the right questions and wield the sword of regulation when it is appropriate.