Ethics for designers
Part III

Deontology

April 27, 2020 · 12 min read

Deontology grew out of a surge of new energy in philosophy beginning in the 17th century. In the centuries after the chaos and destruction of the Black Death, Europeans re-evaluated their relationship to the world around them. Scholars like René Descartes and Isaac Newton took on monumental projects to discover the rules that govern nature. Philosophers like David Hume and Immanuel Kant followed in their footsteps, taking a controversial and critical approach to ethics, morals, society, and politics.

What is deontology?

Deontology is an unwieldy word. It comes from two Greek words: deon, meaning duty, and logos, meaning study. Plainly, deontology is the study of duty. What should we do? What actions are good? How do we choose between doing the right thing and doing the wrong thing?

According to deontology, ethics is all about how you act. Deontologists search for the right set of rules to define and govern moral behavior. Contrast this with virtue ethics, which says that character traits define behavior as ethical or not.

Contrasting deontology with consequentialism is helpful, too. We’ll cover consequentialism in the next installment of this series, but for now, think of it like this: consequentialism takes the view that actions are either good or bad based on the goodness or badness of their outcomes. Deontology says that consequences don’t matter. The deontological view is that logic and reason are all you need to determine whether actions are right or wrong.

A brief history of deontology

Until the 17th century, the virtue ethics of Aristotle, Plato, and Socrates dominated moral philosophy. But during the Enlightenment, European philosophers began to explore alternatives. They wanted to give everyday people clear guidelines on how to live their lives. Their new guidelines were founded on logic and reason, not on feelings or beliefs.

Immanuel Kant, born in 1724, was one of the most influential philosophers of the Enlightenment. Although Kant is best known today as a philosopher, his early work focused on physics. He correctly explained a number of complicated physical phenomena, including the orbital mechanics of the earth and moon, the effects of the earth’s rotation on weather patterns, and the formation of the solar system.

But at age 46, Kant reached a turning point. He read the work of another influential Enlightenment philosopher, David Hume, and felt his life change:

I freely admit that the remembrance of David Hume was the very thing that many years ago first interrupted my dogmatic slumber and gave a completely different direction to my researches in the field of speculative philosophy.1

An avalanche of generation-defining work followed. Kant wrote essay after essay weighing in on growing debates among German scholars. Two books, Critique of Practical Reason and Critique of Judgement, continued the critical project he had begun in Critique of Pure Reason. But in Groundwork of the Metaphysic of Morals, Kant introduced his most enduring contribution to ethics: the categorical imperative.

The categorical imperative

Kant, a deontologist, argued that ethics is about action. His explanations started from the assumption that anyone could live an ethical life by following the right rules. Kant called rules for action imperatives.

Kant identified two kinds of rules. The first he called the hypothetical imperative. A hypothetical imperative takes the form of “If I want x, I’ll do y.” Some examples of hypothetical imperatives:

  • If I want to be on time, I’ll catch the early train.
  • If I want to pass the exam, I’ll study tonight.
  • If I want a promotion, I’ll impress my boss.

Kant didn’t believe that this sort of robotic call and response would lead to a good set of ethical guidelines. If we only respond to desires, we’re not really expressing our free will as rational creatures. We should follow the rules of ethics regardless of our particular desires and circumstances.

The second type of imperative, then, would have to be free of any external motivation. He called these rules categorical imperatives. A categorical imperative is a rule for action that can be applied by anyone, anywhere, at any time. Some examples of categorical imperatives:

  • Do not lie.
  • Do not steal.
  • Help others in need.

But the genius of Kant’s categorical imperative lies not in the definition but in the method. Kant outlined a series of steps to find these categorical imperatives in our everyday life. This simple procedure is one of the reasons deontology was so influential and continues to be relevant today.

How to formulate categorical imperatives

  1. Before you act, think about the imperative that guides your decision. I’m in the kitchen at work. I’m really hungry. I have to go to a meeting, and I’m worried that if I don’t eat I’ll be really unpleasant to my teammates. I see a sandwich in the fridge. There’s no name on it; I suspect it might be a coworker’s, but I’m tempted to eat it anyway. What’s my imperative? “If you need to eat, somebody else’s food is fair game.”
  2. Imagine a world in which everyone followed this rule. This is a process Kant calls “universalizing the maxim.” I can imagine a workplace in which the fridge is essentially a donation box. But my imperative only works if there is such a thing as “somebody else’s food.” And if the fridge is a free-for-all, we’re in a communist utopia: there’s no such thing as somebody else’s food. My maxim contradicts itself.
  3. Make sure your rule doesn’t infringe on anyone else’s freedom. Kant believed that we all have a responsibility to preserve each other’s freedom. Refrigerator collectivism means that nobody can bring their own lunch without the possibility of losing it to someone like me. It’s a tragedy of the commons.

If you can define a rule that applies to everyone, and if that rule doesn’t violate anyone’s freedom, then congratulations: you’ve found a categorical imperative. Behaving according to this rule, Kant believed, would result in moral behavior.

Applying deontology to design

If deontology is about the rules of moral behavior, then deontological design is about the rules for designing ethically sound experiences for our users. Finding these rules means following Kant’s procedure for formulating categorical imperatives.

First, when designing a screen, workflow, experience, or service, think about the design in terms of rules. Imagine designing the process of getting a verification badge on Twitter: what rules define the experience? Today, there’s pressure on Twitter to make verification easier. The rule: “It should be easy to get verified on Twitter.”

Imagine the rule applied universally. It would need to expand to apply to other social media networks: “Signals of trust on social media should be easy to attain.”

YouTube has already tested this rule. Until recently, its signal of trust — a checkmark next to a channel’s name — was easy to get. The only criterion for verification was subscriber count: if a channel had more than 100,000 subscribers, a checkmark would appear next to its name. More than 160,000 channels have passed that mark.2 YouTube’s simple verification criterion meant that all of these channels, no matter who created the content or what those videos contained, carried a signal of trustworthiness.

But verified channels weren’t necessarily trustworthy. In 2017, a New York Times story detailed disturbing videos designed to appear in YouTube’s algorithmically generated playlists. These videos targeted children by featuring popular cartoon characters:

“PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized” was a nightmarish imitation of an animated series in which a boy and a pack of rescue dogs protect their community from troubles like runaway kittens and rock slides. In the video… some characters died and one walked off a roof after being hypnotized by a likeness of a doll possessed by a demon.

Many of the troubling videos were uploaded by verified accounts.

In response to this scandal and others, YouTube decided to revamp its conditions for verification. It introduced two criteria:

  • Authenticity: does this channel belong to the real creator, artist, public figure or company it claims to represent?
  • Prominence: does this channel represent a well-known or highly searched creator, artist, public figure or company? Is this channel widely recognized outside of YouTube and have a strong presence online? Is this a popular channel that has a very similar name to many other channels?3

Many creators’ verification checkmarks were slated for removal, prompting a backlash. YouTube reversed course, changing its criteria again. YouTube’s CEO Susan Wojcicki apologized via Twitter: “I’m sorry for the frustration & hurt that we caused with our new approach to verification. While trying to make improvements, we missed the mark. As I write this, we’re working to address your concerns.”

YouTube failed to apply deontological thinking in its decisions around trust and verification. While Twitter’s approach to trust frustrates users, its tightly held criteria and process have ultimately provided a more ethically robust experience.

Tools for deontological design

Design teams have a few tools at their disposal to make decisions with the rules of ethics in mind.

Dogfooding

Dogfooding — shorthand for the advice “eat your own dog food” — is the practice of using your own products. Microsoft adopted this approach when building Windows NT; early versions were incomplete and prone to crashing, but because the developers relied on it, they quickly found and fixed critical bugs.4

Dogfooding also makes ethical bugs visible. How do you think Facebook employees feel about the company’s data policy? How do their feelings change when they’re one of the millions affected by the company’s violation of privacy protection laws?

Using your own products to identify ethical issues reinforces step 2 of the categorical imperative process: what would the world be like if your design framework were practiced by everyone? The easiest way to practice this kind of empathy is on yourself.

Premortems

Postmortems are standard practice on engineering teams. When an unexpected problem occurs, a team analyzes the cause of the issue and reports on ways to prevent it from happening again. Postmortems provide accountability and transparency. They build a culture where failure is okay, where learning is more important than blaming.

Premortems, on the other hand, are less common. In a premortem, a team — engineering, design, or better yet, cross-functional — asks a scary question: “What could go wrong?”

Shannon Vallor, a philosopher at the Markkula Center for Applied Ethics, designed an ethical toolkit that includes premortems. Before beginning work, she suggests teams ask how their project could fail for ethical reasons, and what blind spots might lead them to cause that failure.

Ethical premortems help answer the question central to Kant’s categorical imperative: does this approach to design infringe on anyone’s freedom? It’s vital to understand the ways software could be used for harm before building it.

Problems with deontology

Kant boldly proclaimed that “a conflict of duties is inconceivable.”5 But a simple illustration can demonstrate that conflicts are common, and that deontology results in counterintuitive behavior.

Say a criminal mastermind has hidden a nuclear bomb underneath Manhattan and threatens to detonate it. You’re an FBI agent investigating the case; you’ve apprehended someone who you believe can tell you the location of the bomb. The detainee is not cooperating. You have to make a choice: should you torture or coerce this witness for the chance of saving millions of lives?

Deontology says that breaking a moral rule — “do not torture people who may be innocent” — is unacceptable, even if the result is catastrophic. Kant is firm on this. “Better the whole people perish,”6 he says, than an injustice be done. Sometimes, doing the right thing will have tragic results.

Practicing deontology requires steely resolve and a disregard for consequences. Some of Kant’s colleagues and contemporaries were horrified: moral behavior, they believed, should produce the greatest good for the greatest number of people. This approach to ethics is called consequentialism, and it’s the subject of the next essay in this series.

Conclusion

Deontology lets us step away from the subjective judgements of virtue ethics. It moves us towards a more universal and consistent ethical framework.

How does your work reflect the rules of ethical design? Imagine a reality where everyone acts according to those rules. Picture yourself using apps, websites, and services designed according to those norms. It’s a practice with ancient roots, one found in almost every religion and culture: Treat others as you would like others to treat you. The golden rule.

Deontological thinking uncovers the rules that govern ethical design. Tools like dogfooding and premortems can help designers avoid causing harm to their users. The approach isn’t without its pitfalls and paradoxes, but the more we think about the guiding principles of ethical design, the better our products will become.


Footnotes & References
  1. Immanuel Kant, Prolegomena to Any Future Metaphysics That Will Be Able to Come Forward as Science, with Selections from the Critique of Pure Reason, edited by Gary Hatfield (Cambridge University Press, 2004). ↩︎

  2. Matthias Funk, “How Many YouTube Channels Are There?,” tubics, published January 31, 2020: https://www.tubics.com/blog/number-of-youtube-channels. ↩︎

  3. Jonathan McPhie, “Updates to YouTube’s verification program,” Creator Blog, YouTube, Google, published September 19, 2019: https://youtube-creators.googleblog.com/2019/09/updates-to-youtubes-verification-program.html. ↩︎

  4. Lee G. Bolman and Terrence E. Deal, Reframing Organizations: Artistry, Choice, and Leadership, Jossey-Bass Business & Management Series (Wiley, 2003). ↩︎

  5. Immanuel Kant, Metaphysical Elements of Justice: Part One of the Metaphysics of Morals, 2nd ed., translated by John Ladd (Indianapolis: Hackett, 1999). ↩︎

  6. Ibid. ↩︎