What are responsibility gaps?
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
Computers are good at making decisions. Throw some criteria together, some weighting, put in some initial conditions and they'll tell you what comes of all of it. Some decisions are really complicated for humans but really easy once you've programmed it all into a computer. Things like keeping a nuclear power core within a specific temperature range or deciding what should come up first in a search engine.
Someone at IBM was taking a principled stance on the decisions we should offload to computers, saying that a computer should never make management decisions. Why? Because a computer cannot be held accountable.
There are some decisions that seem to need some human responsibility behind them. Things like whether we should fire or promote Jim, or where we should go on vacation. Sure, with the right criteria we could trust the computer's output, but there's something human about these decisions that calls for a human agent to take responsibility.
First, let me make one thing clear: computers aren't "making decisions" or "choosing" the way a human does. Computers, software, and AI systems aren't agents in the way we think of people as agents. It's not instinct or deliberation or anything anthropomorphic that produces their outputs. So they can't be responsible in the way we consider people to be. In particular, they can't be morally responsible.
We've reached a point with computers, though, where we can't trace responsibility as clearly as we could in the past. This seemingly creates responsibility gaps: a moral outcome occurs, one that intuition says should have someone to praise or blame for it, but no one seems to be a good target. Computers (which I'll use as shorthand for all kinds of technologies, including software, automata, and systems programmed with the techniques of computer science) are being programmed in such a way that the outputs, and the paths that produce those outputs, were not created by the programmer and could not be predicted by the programmer or the user.
The responsibility gap
Andreas Matthias wrote a great paper (pdf) that really opened the door to discussions on responsibility gaps.
We have pretty good methods to trace responsibility concerning technology. For most tools and technology, responsibility for harms or benefits falls either on the manufacturer or programmer (the builder) or the user. If a technology is meant for a certain task, it should do that. If it malfunctions and causes harm, the builder is responsible. The way we generally transfer responsibility from builder to user is through an operations manual or operations norms. If a user operates a technology against the rules by which it's meant to be used, the user is responsible for any harmful effects.
The "manual" doesn't have to be explicit. Matthias uses the example of a candle. If a candle user sets a lit candle close to a curtain and burns the house down, that doesn't seem like it should be the builder's fault. It's generally accepted that one rule of using candles is to be careful about such things. If the user of a candle lights it and it explodes, the builder bears the blame and responsibility. The candle isn't working as expected.
But! There are new technologies that muddy this. Neural networks are designed to mimic the connections in a biological brain. These networks are made up of nodes in three or more layers: an input layer, one or more hidden layers, and an output layer.
We can figure out what is in the input layer and the output layer, but the nodes and connections that make up the hidden layers are, well, hidden from the builder and user.
You might use a neural network to help identify photos of milk. The neural network would take the photo data in its input layer, and in the hidden layers make connections that filter out everything but the milk photos. How it identifies the photos depends on criteria weighted node by node.
The network must be trained to set its weights. The builder might feed it a few million photos of milk and a few million photos of non-milk, which it uses to create the criteria and weights for deciding what's milk and what's not. The hidden layers are hidden because there isn't symbolic information stored in the system, just weights between nodes. No third party can go back and parse what pathways were taken to reach the output. This is the "black box" you hear about in some AI technologies.
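As a minimal sketch of what "just weights between nodes" means, here's a toy network in Python with one hidden layer, trained on made-up feature vectors standing in for the milk photos. The features, layer sizes, and learning rate are arbitrary assumptions of mine, not anything from Matthias; the point is only what the trained model looks like afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each "photo" has already been reduced to 4 numeric features.
X = rng.normal(size=(200, 4))                    # input layer data
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)  # stand-in label for "is milk"

# One hidden layer of 8 nodes, one output node.
W1 = rng.normal(scale=0.5, size=(4, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)            # hidden layer activations
    p = sigmoid(h @ W2 + b2).ravel()    # output: probability of "milk"

    # Backward pass: nudge every weight to reduce the error a little.
    g_out = (p - y)[:, None] / len(y)
    g_hid = (g_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ g_out
    b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_hid
    b1 -= lr * g_hid.sum(axis=0)

# The "hidden" part: the trained model is nothing but these numbers.
print(W1)  # a 4x8 grid of floats, not a rule like "white and opaque = milk"
print(np.mean((p > 0.5) == (y == 1.0)))  # accuracy on the training data
```

Even at this toy scale, nothing in W1 or W2 reads like a rule a builder or user could inspect; real image models have millions of such weights.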
Systems that learn by adaptation and reinforcement are similar. They are programmed to try different solutions, receive feedback, and alter how they work so that they consistently get the right answers. These systems can optimize in dynamic environments, but they must do so by trial and error. The system is designed to get things wrong sometimes in order to improve its parameters. Again, the parameters aren't predefined by the builder, so the criteria the system uses to produce its outputs aren't predictable.
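For a feel of the trial-and-error part, here's a sketch of about the simplest reinforcement learner there is, an epsilon-greedy bandit. It's a toy example with invented payoff numbers, not something from the paper: the builder writes the update rule, but the values the system ends up acting on, and the deliberate wrong guesses it makes while exploring, come from feedback at run time.

```python
import random

random.seed(0)

true_payoffs = [0.2, 0.5, 0.8]   # hidden from the agent; option 2 is best
estimates = [0.0, 0.0, 0.0]      # the agent's learned parameters
counts = [0, 0, 0]
epsilon = 0.1                    # how often it deliberately experiments

for step in range(10_000):
    # Sometimes pick an option at random (and so sometimes get it wrong
    # on purpose); otherwise pick the current best estimate.
    if random.random() < epsilon:
        choice = random.randrange(3)
    else:
        choice = estimates.index(max(estimates))

    # Feedback from the environment, not from the builder.
    reward = 1.0 if random.random() < true_payoffs[choice] else 0.0

    # Update the estimate for the option that was tried.
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)  # roughly recovers the payoffs, but only through feedback
```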
So sometimes, one of these systems can produce an outcome that was
- Not predictable by the builder, and
- Not the effect of any user's actions or intentions

If the outcome is harmful (or beneficial), where does the blame (or praise) lie? The system itself isn't an agent that can be held responsible. "A computer can never be held accountable." And since neither the builder nor the user had control over the system's output, it doesn't seem right to hold either of them responsible.
This is a responsibility gap: there is no agent, or group of agents, that is an appropriate target for responsibility.
Are these gaps novel?
My first thought when talking about responsibility gaps is: hey, there is no responsibility gap. If you built it, you are responsible for what it does. When the user is using something correctly, the responsibility for harms is on the builder. Easy enough.
But looking deeper, the builder really doesn't have the control that we associate with responsibility. Sure, they created the system, and for a purpose, but when and why the system gets answers wrong, and what emergent (or seemingly emergent) capabilities it develops, are unpredictable.
This isn't new. Corporations are systems that produce unpredictable outcomes and develop unpredictable properties with moral implications. When corporations fire people, tens of thousands of people, say, it creates real pain and suffering for real people. Society hasn't really agreed on whether a person or people bear responsibility there. When a company produces morally good outcomes, a CEO or leader will associate themselves with it, but responsibility for mass layoffs is obfuscated, at least rhetorically. It's an outcome of the system, not the responsibility of the leaders who built or manage the corporate system, or of the people who "use" the system by being employed.
Ethicist Daniel Tigard looks into the mechanics of responsibility gaps at a more personal scale and concludes that the gap doesn't exist. Responsibility is a complicated concept, and in non-technological situations we navigate similar obstacles to accountability and responsibility without admitting an unbridgeable gap. Holding responsible can be communicative, reparative, or preventative. Tigard argues that our concept of holding responsible doesn't rely on control. With a wider definition of responsibility, we hold animals responsible for their actions, and we hold technology responsible for dropping our calls or not printing our papers. We even hold people responsible for things outside of their control, or for actions they can't give good reasons for (due to something like implicit bias, say).
The question about responsibility in regard to AI technologies, then, wouldn't be who is responsible, but how our practices of holding responsible are going to improve future outcomes. As communities, we can negotiate and decide what guidelines, obligations, and consequences will best prevent the harms that matter to us and promote the benefits.
So the IBM'er who says a computer can never be held accountable isn't exactly right, but they're starting an important discussion. How do we want to govern our relationships with technology?
If we want to codify that technologies can't be held responsible, then we'll need agreements on fallbacks in "responsibility gap" situations like these. If we want to allow for some flavor of responsibility for these systems, we'll need to specify what remedial or preventative measures we develop, and who bears the brunt of that.
Computers don't have to be agents to be part of the community behavior of holding responsible. It seems like if we're clear about how we manage responsibility, we can govern these systems and the agents involved fairly.
I looked hard to try to find the whole deck, but could not. It looks like this slide first appeared on Twitter in 2017 from user @bumblebike. According to them the whole deck was lost in a flood before they got around to scanning the rest. They had one more slide to share here. If you know more contact me.
It's hard to talk about the activities of computers without using terms we usually use for human action. I'm not trying to say that computers choose or think or act. They dryly have processes that produce outcomes.
Linked in this post:
Matthias (2004): The responsibility gap: Ascribing responsibility for the actions of learning automata (pdf)
Tigard (2020): There is no techno-responsibility gap
@bumblebike: IBM slide tweet