“Technology has no ethics – but human society depends upon ethics,” says my friend, the futurist Gerd Leonhard. As technology progresses exponentially, the rules and regulations we arrived at long ago in a linear society no longer fit and need to be updated. “Instead of asking ‘why?’, we keep asking ‘how?’,” he says.
Instead of asking how we can draw the most profit out of technological advances, we should be asking why we need them in the first place and what things like AI and self-learning algorithms, self-driving cars and self-automating systems will mean for humanity in the long run. And we need to start soon, before our machines become smarter than us and begin to take over decision-making in areas far beyond our human understanding – and control!
Exponential technologies tend to pass through three stages, Leonhard maintains: first magic, then manic, and finally toxic. Since all of this is happening at digital speed, finding the right balance and deciding where to call a stop gets more difficult every day. He therefore calls for a Global Digital Ethics Council, which he would like to see established under the aegis of the UN and consisting of people from all walks of life: ordinary citizens, academics, governments, business and technology companies, as well as what he calls “free-thinkers” – writers, artists, and intellectuals from various fields and schools of thought. Their job would be to serve as an early warning system endowed with the authority to monitor and, when necessary, inform the public about potentially harmful, illegal or immoral developments in technology.
But first, of course, we need to agree what constitutes morality in the Digital Age. Which school of ethics should we be following? In his book Machines of Loving Grace, John Markoff, an American journalist, writes: “Today, decisions about implementing technology are made largely on the basis of profitability and efficiency.” It would be truly surprising, he believes, if a Silicon Valley company rejected a profitable technology for ethical reasons – but that is exactly what needs to happen if we don’t want the boat we are all sitting in to go on the rocks.
Charlotte de Broglie, CEO and founder of For The Future, a French consulting company based in Paris, helps her clients develop strategies for digital transformation. She bemoans the fact that most companies focus shortsightedly on utility, neglecting the social, economic and cultural considerations that reflect real human needs. Something needs to change, she believes, and we should start in our schools, during vocational training, and at our colleges and universities. The digital thought leaders of tomorrow – developers of new digital solutions and products, mathematicians, engineers and computer scientists – should be given a firm interdisciplinary grounding in technology assessment, says de Broglie.
“Unless digital actors, and, indeed, all citizens, are given the means to ponder, develop and foster an autonomous vision that reflects their values, we will inevitably drift towards digital autocracy,” she told delegates to the OECD (Organisation for Economic Co-operation and Development) Forum in 2016. “If all hope of an independent vision of digital technology is abandoned, its applications will ultimately be dictated by all-powerful multinationals, thereby strengthening their grip and adding to global imbalances, especially in the area of internet governance,” she concluded.
Digital technologies should be part of a bigger picture, one patiently and carefully developed by theorists, scientists, engineers, digital creators and civil society in order to co-construct an empowering ethical discourse. In the area of human-computer interaction, there can, and should, be a systematic ethical encounter, without slowing the momentum of innovation.
Can machines be moral?
Digital ethics can be put to entirely practical uses, of course. One area where demand for “machine ethics” is growing involves autonomous vehicles – in other words, self-driving cars, trucks, and buses. When I asked a friend of mine, Janina Loh, a charming young professor of philosophy at the University of Vienna, about this, she laughed: “Okay, but which school of ethics do you mean – there are so many of them!” In her new book Trans- and Posthumanism, she cites the now-famous Trolley Problem, a thought experiment in ethics first proposed by the British philosopher Philippa Foot, to demonstrate the difficulty of asking machines to reach moral decisions. It goes like this:
You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:
- Do nothing and allow the trolley to kill the five people on the main track.
- Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical option?
Common sense says pull the damn lever, and be quick about it! And in fact philosophy, or at least the school known as Utilitarianism, which is very popular in the US, would demand that you do so. According to Jeremy Bentham (1748–1832), the founder of the Utilitarian school, a decision is morally defensible if it leads to the “greatest good for the greatest number”.
Another approach would be duty-based, or deontological, ethics, associated with Immanuel Kant, the famous 18th century German Enlightenment philosopher. In his Groundwork of the Metaphysics of Morals (1785), Kant argues that a human being must never be treated merely as a means to an end – a principle that rules out deliberately sacrificing the one person on the side track, however the numbers add up.
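To make the clash concrete, here is a minimal sketch, in Python, of how each school might be written down as a rule a machine could follow. The names (Outcome, requires_deliberate_harm, and so on) are illustrative inventions of mine, not drawn from any real vehicle software:

```python
# A sketch of the two schools of ethics as decision rules for the
# trolley scenario. All names here are hypothetical, for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible action and its consequences."""
    action: str
    lives_lost: int
    requires_deliberate_harm: bool  # does the agent actively cause a death?

TROLLEY_OPTIONS = [
    Outcome("do nothing", lives_lost=5, requires_deliberate_harm=False),
    Outcome("pull lever", lives_lost=1, requires_deliberate_harm=True),
]

def utilitarian_choice(options):
    # Bentham: the greatest good for the greatest number --
    # here, simply the action that minimizes lives lost.
    return min(options, key=lambda o: o.lives_lost)

def deontological_choice(options):
    # Kant: never treat a person merely as a means; rule out any
    # action that deliberately causes a death, whatever the tally.
    permitted = [o for o in options if not o.requires_deliberate_harm]
    return permitted[0] if permitted else None

print(utilitarian_choice(TROLLEY_OPTIONS).action)    # -> "pull lever"
print(deontological_choice(TROLLEY_OPTIONS).action)  # -> "do nothing"
```

Neither function is “the right one”; the point is that whoever programs the car must commit to one of them long before it meets its first pedestrian.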
What began as an amusing thought experiment was suddenly thrust to the fore in March 2018, when the long-expected happened: a self-driving Uber car ran over and killed a pedestrian crossing a street in Tempe, Arizona – the first fatal accident of its kind. Immediately, calls came for a built-in system of ethical decision-making but, of course, no one mentioned which kind of ethics they meant.
Almost as tricky to answer is the question of liability. Who should pay in a case like this? The car’s programmer? Uber? The person sitting in the car but not actually in control of it at the time? Or should it be attributed to force majeure – an unforeseeable circumstance?
Take your pick – and, in fact, we may soon be forced to. Machine ethics is less a technological and more a societal problem. Lawyers may squabble over who is to blame and whether it’s time to bring our traffic laws up to date but, at the end of the day, it will be us, the citizens of the Digital Age, who decide what we think is right and should be enshrined in law.
Janina Loh calls for mandatory ethics classes in schools and universities. “We can’t put this off for coming generations to decide,” she says. “We need to educate people at various levels here and now in order to create ethical awareness!” Similarly, she feels tech companies, especially those engaged in AI development, should be required to offer their employees ethics training to make up for the lack of such instruction during their formal education. Finally, she concludes that all big companies should be obliged to establish ethics commissions, such as those already mandated by law in some US states or by various county and city ordinances, to investigate dishonest or unethical practices by public employees and elected officials.
So, besides digital rights and opportunities, we find that we also have digital obligations – the most important being to help shape our common future and create rules which we must follow if society is to survive.