Phillip Herndon

Moral responsibility gaps: The problem for corporations

Five years ago the Business Roundtable, an association of CEOs from the many of the largest American companies, released “Statement on the Purpose of a Corporation.” The announcement was a signal that U.S. corporate leadership recognized the need to move from a shareholder-focused capitalism to a “stakeholder capitalism” undergirded by notions of long-term value, ethical action, fairness and support.

Suppliers, customers and communities are judging the morality of companies through the company’s operating activities, the effects of their products and services, and the actions of their employees. As the Business Roundtable statement shows, corporations are acknowledging the importance of this.

The growing use of artificial intelligence and autonomous systems (AI/AS) has raised new ethical issues for companies. At times the use and development of AI has had detrimental effects on both knowledge workers and workers in developing countries, and AI/AS has profound environmental implications. But there are many examples of how to avoid mass layoffs, exploitation of the global South, and environmental impacts, should a company’s leadership aim to.

AI/AS have placed a spotlight on another set of ethical risks for companies, however, one that is novel for many leaders. The use of AI/AS can create moral responsibility gaps. These gaps can represent a significant risk to companies even if a company and its representatives are acting ethically.

Responsibility gaps exist where autonomous systems act in ways which are not caused, controlled, or predictable to the system’s developers or users. The nature of AI/AS itself causes these gaps to exist, and in their wake they leave outcomes without any reasonable accountable entity.

Below I describe the nature of moral responsibility gaps in greater detail and outline some specific risks the gaps represent for corporations. I then address some of the controversies surrounding the issue, before offering recommendations for mitigating the risks of moral responsibility gaps.

Corporate leaders should take moral responsibility gaps seriously when assessing how to implement or continue implementing AI/AS, and enact mechanisms and processes for mitigating associated risks.

Recommendations include:

  1. Tactics for dissolving responsibility gaps by including people in risky decision making processes
  2. Tactics for identify risks ahead of implementation by pressure testing systems and responses
  3. Frameworks which can limit the impact of identified moral responsibility gaps through culture and resources

What is a Moral Responsibility Gap?

There are many ways we hold each other responsible. In a corporate setting we are often concerned with legal responsibility, who is legally liable for an outcome; causal responsibility, who or what led to an event occurring; and personal or moral responsibility, who or what is accountable for an event or outcome1.

AI/AS create obstacles for our normal sense of moral responsibility. A moral responsibility gap exists where an outcome occurred which we’d typically have a moral accounting of — a target for accountability — but there is no appropriate target for that accountability.

Consider an autonomous surgical robot designed to perform intricate operations with many different sensory and data inputs and risk assessments determining its behaviors. In a particular surgery the device identifies a risky intervention and takes it. The intervention is unsuccessful and the patient ends up dying.

People are unlikely to find blaming the robot sufficient. There’s not a sense in which the robot could have known not to take that intervention2. It was acting as required by its code. Similarly the developer and the user (the doctor, in this case) are not adequate targets for blame, as they did not meaningfully cause the intervention to happen.

As philosopher Andreas Matthias explains, the technology undergirding our AI/AS systems creates this moral opaqueness. In neural networks, for example, a series of hidden layers exist between an input layer and an output layer. The hidden layers are where the AI system has created an intricate system of weights that lead to predefined outputs. The hidden layers are hidden because they cannot be audited or untangled, as they do not correspond to symbolic representations. The weights don’t ‘mean’ anything in a human sense.

These “black boxes” matched with novel inputs and situations can create outputs that neither users nor developers could have predicted.

Corporate leaders generally understand that the moral responsibility for the effects of their products and services begins with the organization. Either with the people who’ve developed the product or service or the company more generally. Moral responsibility can be transferred to users through operations norms, training, manuals and the like (Matthias, 2004). This is how we agree a manufacturer of cooking knives isn’t responsible for morally harmful acts that happen with their products.

The AI/AS black box does not allow for this transfer of responsibility to occur. The user cannot be adequately forewarned on potential risks when a system’s outcomes are inherently unpredictable. Yet it does not seem like we can hold the builder or developers of AI/AS morally accountable either, for the outcomes could not have been predicted or controlled by them, and the calculations that resulted in a negative outcome cannot be traced to actions by the developer. So the moral responsibility gap exists.

Risks

No matter the progress a corporation has made in its transition from shareholder to stakeholder capitalism, the benefits of acting ethically are apparent. Ethical action improves relationships with employees, suppliers and the communities in which a company operates.

Moral responsibility gaps, however, pose risks to corporations even if corporate activities are ethical. When technological tragedy strikes, if there’s no appropriate target for blame, the company itself is likely to be a default foil. Moral responsibility gaps create situations where a company with no justified blame becomes the target of perceived ethical failure.

In the surgical robot example, the manufacturer may be held to popular account for the actions of the robot. Or the doctor or hospital system could be blamed by the public. The opaqueness of the moral responsibility gap creates an uncertainty in who might be a target of blame, even when each person was acting completely ethically.

The risks are more than an ethical roulette wheel, though. Ethical lapses attributed to corporations or their employees present significant financial and reputational risks.

Consumers take a mix of moral and practical considerations into account during purchasing decisions, so corporations could take a financial and reputational hit if faced with a controversy spurred by a moral responsibility gap. Perception of blame, whether deserved or not, could affect consumers’ moral considerations. Similarly, governments around the world have placed tariffs, embargos and other regulations on companies motivated from concepts of national duty, public pressure and safety spawned by moral pressures.

Moral responsibility gaps offer an upside risk to corporations as well. When such a gap exists, the market does not have an appropriate target for praise. These responsibility gaps ensure that some beneficial effects of using or developing an autonomous system could fail to be attributed to the enterprise which developed or used the system. The confusion, or lack of justified moral responsibility, creates the risk of potentially squandering beneficial opportunities for the corporation.

Controversy & Considerations

Philosophers, technologists and ethicists do not agree on how moral responsibility gaps should be resolved, or how they should even be characterized. As Trystan Goetze notes, “It remains genuinely unclear whether computing professionals are morally responsible for the behavior of the systems they develop” (emphasis added).

Some philosophers, for instance, argue that the gaps don’t even exist in a meaningful way. Daniel W. Tigard of the University of San Diego writes that our understanding of responsibility is sufficiently dynamic that technologies like AI/AS do not pose a threat to accounts of responsibility.

Tigard focuses on answerability, attributability and accountability. Requiring that technologies be able to be answerable to their outputs (complicated by the “black box” described above) is misdirected, as we often hold people accountable even when they can’t give good reasons for what they did. Regarding attributability, Tigard notes that there are many mundane examples of when we give moral attributes to objects (like blaming the intentions of your printer when it won’t connect correctly). We have no problem attributing morality to non-humans, he concludes. For accountability, Tigard proposes we can have the same effect of human accountability (e.g. repairing or preventing the harm from happening in the future) by directing accountability to developers or users of the system.

Beyond whether responsibility gaps exist at all, a responsibility gap may be preferable to the alternative. What are we fixing when we close a responsibility gap? Peter Asaro points out that human decision makers often make mistakes, particularly in high-impact decision making situations like we’re describing. It may not be the case that avoiding a responsibility gap leads to better outcomes.

Business leaders too question the benefits of focusing too heavily on the negative effects of AI/AS. In a discussion on responsibility, Tesla and SpaceX CEO Elon Musk said, “if, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people.” The idea being that the slow adoption of even a non-ideal self-driving system would lead to more auto deaths than full adoption.

Recommendations

Even if philosophers and technologists reached consensus, moral responsibility gaps would remain a risk for corporations until a resolution could be broadly justified to the lay person. But the uncertainty regarding moral attribution can be mitigated, if not avoided.

Human-in-the-Loop, Where Possible

Moral responsibility gaps can be dissolved by putting a human in the decision-making loop in the operation of autonomous systems. This sidesteps the problems created by moral responsibility gaps; there are no longer decisions made without human intervention. Human-in-the-loop also limits the autonomy of systems. Teams must analyze the tradeoff between the benefits of autonomous action and risks created by moral responsibility gaps.

Human-in-the-loop comes with its own set of risks, of course. People can get lazy in their relationship with AI predictions, and there’s a risk that human intervention decreases the quality of overall decision making. But ethical risks coming out of these workflows have better structures for moral resolution than situations where a moral responsibility gap exists. It is important for organizations implementing human-in-the-loop as a response to moral responsibility gaps to build feedback loops surrounding the system which prevent the human(s) in the loop from being mere responsibility sinks.

Red Teaming & Crisis Simulations

Red teaming and crisis simulations are one of the most beneficial practices an organization using AI/AS can implement. Red teaming is the practice of gathering a group of experts to simulate malicious, manipulative and unexpected uses of an AI/AS system to identify potential dangers. Crisis simulations gather divisions and teams together to practice responses to crisis situations.

Red teaming can help identify important risks and unintended uses of an AI/AS system, and can be used to inform future safety development of a system. Crisis simulations can also inform structures and workflows around identification and resolution of harmful events resulting from the deployment of a new technology or system. It is beneficial to involve a variety of experts and non-experts from both within and outside the company with both tactics. For instance, in your AI red team consider involving communications and marketing teammates, who have different perspectives on what a risk to the company and its stakeholders might look like than engineering and product teams.

Training & a Culture of Responsibility

Teams should be educated about moral responsibility gaps, how they come about, and what risks they pose. This should not be framed as a transfer or responsibility, but as a tool to better design and implement with the associated constraints in mind.

A culture of responsibility can be strengthened by creating and making resources available to staff. Standard operating procedures defining professional, hierarchical, and personal responsibility create a shared understanding and commitment to responsibility in a corporation. Likewise support mechanisms like ethics hotlines and reporting structures can help identify ethical risks where they appear.


  1. These examples of types of responsibility are drawn from both Vallor (2023) and Goetze (2022).

  2. This and following based on requirements for personal responsibility outlined by Goetze (2022). Namely, to be personally responsible an agent must have had control over the outcome, have a reasonable expectation of knowledge that the outcome would or could occur, and have causal responsibility for the outcome.

View original

#AI #ethics #philosophy #risk