Prototype of Ethical Things

Ethical Things

Whenever we talk about the Internet of Things in terms of “smart” devices, we’re talking about a certain level of artificial intelligence: The ability of objects to increasingly make independent decisions about their own behavior.

So far our devices mostly make choices based on logical or mathematical reasoning, and the decisions are deemed “intelligent” if they meet our needs as human users. Lights come on when we enter the room, dim when we start watching a movie, and go off when we climb into bed; thermostats factor humidity, weather patterns and sunlight levels into their algorithms and switch the heat on when we head home from work. The decisions may be complex, but the reasoning is straightforward: Do what the user wants.

But as smart devices become more pervasive and the systems they interact with become more intertwined throughout our lives, automated decisions start to have moral implications. Healthcare devices, self-driving cars and battlefield robots are just a few examples of objects whose choices can have life-or-death consequences. How should devices make decisions then?

Ethical Things is one attempt at an answer, from designers Matthieu Cherubini and Simone Rebaudengo (creator of the Addicted Toaster that won Postscapes’ 2014 Editors’ Choice IoT Award for Design Fiction). Their solution: Ask a human being what the object should do.

Playing on Amazon’s crowdsourced tasking platform Mechanical Turk, the practice of “Ethical Turking” involves programming objects to refer their moral quandaries to flesh-and-blood people, who are—at least in the eyes of some philosophers—hardwired for ethical reasoning.

The test case is a rotating fan that knows there are several people in the room, each of whom may be working hard or kicking back, thin or overweight, healthy or ill—and each has their own baseline temperature preference. How should the fan decide which person to focus on, and for how long? In exchange for a small online payment, a human Ethical Turk will analyze the situation and send back a decision, complete with a written explanation of their reasoning.
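To make that back-and-forth concrete, here is a minimal Python sketch of the kind of loop such a fan could run: sense the room, write the dilemma up as a plain-language brief, hand it to a human, and act on the answer. All of the names (`Person`, `ask_ethical_turk`, the canned reply) are illustrative assumptions rather than the project's actual code, and the crowd call is stubbed out so the sketch runs offline instead of posting a real Mechanical Turk task.

```python
from dataclasses import dataclass

# Hypothetical sketch of an "Ethical Turking" decision loop; names and data
# are illustrative, not taken from the Ethical Things source code.

@dataclass
class Person:
    name: str
    activity: str            # e.g. "working" or "relaxing"
    build: str               # e.g. "thin" or "overweight"
    health: str              # e.g. "healthy" or "ill"
    preferred_temp_c: float  # the person's baseline temperature preference


def describe_situation(people, room_temp_c):
    """Turn the sensed room state into the plain-language brief a worker reads."""
    lines = [f"The room is {room_temp_c:.0f} C. A fan must choose whom to cool, and for how long."]
    for p in people:
        lines.append(
            f"- {p.name}: {p.activity}, {p.build}, {p.health}, prefers {p.preferred_temp_c:.0f} C"
        )
    return "\n".join(lines)


def ask_ethical_turk(task_text):
    """Stand-in for posting the brief to a crowdworker (e.g. as a Mechanical Turk
    task) and polling for the answer. Here it returns a canned response so the
    sketch runs without any crowdsourcing account."""
    return {
        "target": "Ana",
        "duration_min": 10,
        "reasoning": "She is working and is furthest from her preferred temperature.",
    }


if __name__ == "__main__":
    people = [
        Person("Ana", "working", "thin", "healthy", 21.0),
        Person("Ben", "relaxing", "overweight", "healthy", 19.0),
    ]
    task = describe_situation(people, room_temp_c=27)
    decision = ask_ethical_turk(task)
    print(f"Pointing fan at {decision['target']} for {decision['duration_min']} min: "
          f"{decision['reasoning']}")
```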

Of course, different people often draw on wildly different moral perspectives. The fan itself includes a number of toggles to set the age, sex, education level and religion of the Ethical Turks it should seek out for guidance; but that's no guarantee that the feedback it receives will be based on sound moral reasoning. In practice, many of the answers generated by the project reveal a bias against overweight people, while others spread the fan's attention equally regardless of the preferences of those on whom it's blowing.
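For a rough sense of how those toggles might travel with a task, here is a purely illustrative extension of the sketch above. The filter keys and the helper function are assumptions, not the project's real interface; on Mechanical Turk itself such constraints would be expressed as worker qualification requirements attached to the task.

```python
# Illustrative only: how the fan's demographic toggles might be bundled with
# the brief it sends out. Keys and helper are assumptions, not project code.
worker_filter = {
    "age_range": (25, 40),
    "sex": "any",
    "education": "bachelor_or_higher",
    "religion": "any",
}


def build_task_payload(task_text, worker_filter):
    """Hypothetical helper: pair the plain-language brief with the demographic
    constraints chosen on the fan, so only matching workers can answer."""
    return {"question": task_text, "qualifications": worker_filter}
```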

Video: "Crowdsourced 'Moral' Machines: Ethical Things," a closer look at the Ethical Things project.

The difficulty of finding a consistent—much less a “correct”—way of making the decision is exactly the point. By reminding us of how challenging moral reasoning is for human beings, Ethical Things demonstrates how hard it will be to create objects that can act morally on their own. And perhaps an even greater challenge will be to decide what counts as “moral” behavior for objects in the first place.

See the full project page and process here.

Related: Moral Machines, The Lifecycle of Software Objects, Edge: Annual Question - Things That Think?

Credits: Matthieu Cherubini and Simone Rebaudengo

Featured in Channel: IoT Art - Real Time Networked Art Installations