It’s time to take a critical look at AI Ethics. What do we hope to accomplish? Law enforcement, marketers, hospitals and other bodies apply artificial intelligence (AI) to decide matters such as who is profiled as a criminal, who is likely to buy what product at what price, who gets medical treatment, and who gets hired. These entities increasingly monitor and predict our behavior, often motivated by power and profits.
The world can be an awful place. In his recent opening speech to the General Assembly, UN Secretary-General António Guterres cited “the asymmetries and economic disasters rampant on our planet and getting worse.” No, it is not as bad as the 19th or 20th century – or any of the centuries before them, for that matter – but can we rein in the expanding power of these technologies by talking about ethics and fairness? I believe there are levels of bad behavior with AI, from the relatively innocuous to the catastrophic:
1. Trolling someone on the internet and mocking them with deepfakes and personal information.
2. Inadvertently harming many people, one at a time, through carelessly developed AI applications in areas such as credit and employment.
3. Developing AI applications that harm large groups of people through unfair classification.
4. Deliberately doing #2, not from evil intentions but from a drive for personal or organizational advantage.
5. Destroying privacy.
6. Promoting disinformation on a wide scale.
7. Using AI to destroy the planet and everything on it.
There are other levels, too – you get the point. But AI has given us the “ethicist,” a new breed of specialist with deep roots in academic ethics and moral philosophy, but generally lacking perspective on the prerogatives of organizations to pursue their own strategies and processes for introducing new technologies. In some cases, the ethicist’s advice and counsel are valuable, especially for items 1-4 above. But is it realistic to suggest that simply educating people and organizations about the lurking ethical traps of AI will dissuade anyone? They weigh the potential risk of getting caught against the hoped-for benefit of not getting caught.
In a brilliant paper, The Forgotten Margins of AI Ethics, Abeba Birhane and her co-authors reveal the deficiencies in quality and depth of current AI ethics discourse:
[It’s] important to note that algorithmic fairness is a slippery notion that is understood in many different ways, typically with mutually incompatible conceptions.
It is possible to define fairness measures mathematically and to develop algorithms that score a model against them. However, without agreement on what fairness means, any such mathematical estimate is pure speculation. The authors explain why that’s a trap:
The treatment of concepts such as fairness and bias is frequently linked to the notion of neutrality and objectivity, the idea that there exists a ‘purely objective’ dataset or a ‘neutral’ representation of people, groups, and the social world independent of the observer/modeler.
This is the central fallacy of fairness and bias “audits” conducted computationally.
This is often subsequently followed by simplistic, reductive, and shallow efforts to “fix” such problems, such as “debiasing” datasets — a practice that may result in more harm, as it gives the illusion that the problem is solved, disincentivizing efforts that engage with the root causes (often structural) of the problem. At a minimum, “debiasing” approaches place problems on datasets and away from contingent wider factors.
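To make the fallacy concrete, here is a minimal sketch – my own illustration, not a tool from the paper – of what a computational fairness “audit” typically measures: a demographic parity gap, the difference in positive-decision rates between two groups. The data, group labels, and function names are all hypothetical.

```python
# Minimal sketch of a computational fairness "audit": demographic parity gap.
# Illustration only; data and names are made up.
from dataclasses import dataclass


@dataclass
class AuditResult:
    rate_group_a: float   # positive-decision rate for group A
    rate_group_b: float   # positive-decision rate for group B
    parity_gap: float     # absolute difference: the "audit" number


def demographic_parity_gap(decisions, groups, group_a="A", group_b="B"):
    """Compute the demographic parity gap between two groups.

    decisions: iterable of 0/1 model outputs (1 = favorable decision)
    groups:    iterable of group labels, aligned with decisions
    """
    def rate(label):
        picked = [d for d, g in zip(decisions, groups) if g == label]
        return sum(picked) / len(picked) if picked else 0.0

    ra, rb = rate(group_a), rate(group_b)
    return AuditResult(ra, rb, abs(ra - rb))


if __name__ == "__main__":
    # Toy, made-up data: 1 = loan approved, 0 = denied.
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    result = demographic_parity_gap(decisions, groups)
    print(f"Group A rate: {result.rate_group_a:.2f}")
    print(f"Group B rate: {result.rate_group_b:.2f}")
    print(f"Parity gap:   {result.parity_gap:.2f}")
    # A small gap does not mean the system is "fair": the metric assumes that
    # demographic parity is the right definition, that the labels are neutral,
    # and that the groups are comparable -- exactly the assumptions the
    # authors challenge.
```

The resulting number only has meaning relative to a chosen definition of fairness; pick a different metric, such as equalized odds, and the same model on the same data can pass one test while failing the other. That is what “mutually incompatible conceptions” looks like in practice, and why a clean audit score can create the illusion that the problem is solved.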
The authors provide some guidance by putting social issues ahead of technical ones. But where is the method?
An approach to ethics that ignores historical, societal, and structural issues omits factors of critical importance. It is not uncommon now for AI experts to ask whether an AI is ‘fair’ and ‘for good’. But ‘fair’ and ‘good’ are infinitely spacious words that any AI system can be squeezed into.
The question to pose is a deeper one: how is AI shifting power? In the Nature paper Don’t ask if artificial intelligence is good or fair, ask how it shifts power, Kalluri argues that:
Genuine efforts at ethical thinking are those that bring about a concrete material change to the conditions of the most marginalized, those approaches that shift power from the most to the least powerful in society. Without critical considerations of structural systems of power and oppression in a central role in the ethical considerations of AI systems, researchers, designers, developers, and policy-makers risk equating processes and outcomes that are fair with those that are equitable and just.
AI misuse includes disinformation, destruction of privacy, damage to the environment and human rights, authoritarianism, and the rapid development of LAWS (Lethal Autonomous Weapons Systems) on a wide scale. What remedy is there for the monumental problems we face when AI is deliberately deployed in harmful ways? Kalluri’s answer is a good one, but the focus on marginalized populations is incomplete. That focus defines the prevailing corpus of “AI ethicists” – primarily academics, foundations, institutes and governments – most of whom have little perspective on how AI is not only capable of unfair, biased, dangerous applications but is already being aggressively put to them. All the talk about ethics is simply that: talk. The bulk of discussion on this topic is a giant fishbowl or echo chamber.
The state of AI ethics is that, over the last few years, a proliferation of initiatives has produced only a narrow set of principles and guidelines. Few “ethicists” have made any real impact in modulating the effects of AI, because they have failed to understand the subtleties and life cycles of AI progress and its consequences. The study of ethics is discursive – it has yet to give a final answer. Engaging with the dangers of scale posed by AI-driven algorithmic systems requires understanding and accounting for the underlying power structures. This is especially true where AI systems are adapted to work within ongoing systems of power and oppression and to scale the effects of those systems efficiently.
AI ethics needs a broader, historical and structural understanding of the challenges we face.
My take
Language is rooted in culture — the new is only understood by analogy to the familiar — and finding the right metaphors or tools is particularly difficult when so much about AI is unlike anything that has gone before. Large-scale technological transformations have always led to profound societal, economic, and political change, and it has always taken time to figure out how best to respond.
Let me end with a hopeful anecdote: My father was born in 1913. He graduated from high school in 1930 and made his way through the Depression, organizing unions, fighting in WWII, marrying, and raising a family on a small business in a small town. In his ninety-five years, he comfortably adapted to the technologies of the 20th century. I don’t recall him ever using the phrase “the good old days” except when referring to how inexpensive things used to be. So I’m grateful I don’t live in those good old days – meaning nearly all of them since we emerged from the mud.