Ethics and Algorithms
Published: 26 May 2015
This piece is a longer version of the talk I gave at Digital Shoreditch #DS2015
Imagine you have a startup; maybe you do. You’ve received complaints from customers that cats are being killed by your autonomous car software. “That’s the fifth this month,” they mention at the board meeting. As the CTO you may know that the system itself is working correctly, and you reassure the board that the outcome may simply be a lot of unfortunate cats. Maybe it’s your company and you’re worried about the bad press the accidents are creating around your product, fearing your business customers will pull out and go to another automated software supplier. Perhaps you’re the provider of the automated cars and you’re concerned you might be sued; knowing there hasn’t been any precedent for such a trial, you’re not sure which way it would go. Or maybe you’re not too concerned, as your cars aren’t hurting human beings and a few cats are a small price to pay. You’re told that something needs to change or evolve in the software, so you decide to push some intent onto the passenger in cat-based situations, and as a consequence cars start crashing into each other on their own.
But again, the system hasn’t caught a virus and isn’t broken; it’s working as intended. This is, effectively, a problem of ethics.
Machines and algorithms have intended and unintended consequences. Machine ethics is about how machines can change our behaviour, harm us or others, as well as make us better. It’s about automated cars, medical machines, Apple's Siri, the Amazon Echo and many other things in our homes, on our persons, or on our roads. It’s about the stock market, and it’s about your new startup. It’s about the inevitable crashes and the big news stories about cats and people. It’s about blame, emotion and the society we want to live in.
And it's about decisions that are being made now about our technological future.
What are ethics anyway?
The humanistic view: a set of guiding moral principles that predict an action or reaction to an environment by a being or group of beings. In a deterministic view, they are the rules that guide our actions. However, human ethics are unlike a hard-coded program: they are changeable. They’re relative in time and ideology. Once it was generally accepted that certain actions, such as slavery, were permissible, whereas homosexuality, women’s rights, or even rights to property were not.
Dogma
In Jaron Lanier's You Are Not a Gadget he points out a dogmatic problem with technology. In his example, the MIDI protocol squeezes musical data into a small set of discrete values (each data byte carries only 7 bits, 128 steps). This was probably originally done for lack of bandwidth or data storage, but more than thirty years on it has become so embedded that it is still used, compressing a complex, nuanced system of sound and performance into a handful of fixed numbers at a time. If we want changeable, non-reductionist ethics, evolving rules that deal with our changing environment, then the systems will need to change with us, not be fixed absolutely in so many bits or bytes.
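To make the reductionism concrete, here is a rough sketch of a MIDI note-on message in Python. The note and velocity values are illustrative, not taken from any real performance; the point is that the whole of a musical gesture has to fit into a few 7-bit integers.

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a three-byte MIDI note-on message."""
    assert 0 <= channel <= 15
    assert 0 <= note <= 127      # pitch is one of 128 fixed steps
    assert 0 <= velocity <= 127  # loudness is one of 128 fixed steps
    status = 0x90 | channel      # 0x9n = note-on for channel n
    return bytes([status, note, velocity])

# Middle C at a middling velocity: the entire gesture becomes three bytes.
print(note_on(channel=0, note=60, velocity=64).hex())  # -> '903c40'
```

Anything finer than those 128 steps, the bend of a note, the grain of a touch, simply has nowhere to go.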
The infinite
Ethical problems are often considered in the finite. The Trolley Problem, for example, poses a life-or-death situation with two finite outcomes (both a catch-22). In the real world, however, outcomes are mostly a luxury of hindsight. There are finite problems that can be modelled beforehand, such as a ball bouncing, or not giving a diabetic their insulin. There are also infinite problems where it is harder to say whether any one choice would yield the best outcome, for example a car swerving around an obstacle, or a football game's result. Most outcomes are tied up with so many systems that to model all possibilities would be impracticable, to say the least.
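A toy sketch of that distinction, not a real medical or driving system: the finite case can be written as a fixed rule in advance, while the "infinite" case can only ever pick from whatever small sample of possible futures we managed to model. The threshold and field names below are made up for illustration.

```python
def should_alert_for_insulin(blood_glucose_mmol_l: float) -> bool:
    # Finite problem: the threshold is an example value, but the shape
    # of the rule is knowable before the situation ever arises.
    return blood_glucose_mmol_l > 10.0

def choose_swerve_direction(candidate_outcomes: list[dict]) -> dict:
    # "Infinite" problem: candidate_outcomes is whatever tiny sample of
    # possible futures we managed to enumerate. Picking the least-bad one
    # says nothing about the countless outcomes we never modelled.
    return min(candidate_outcomes, key=lambda o: o["estimated_harm"])
```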
The neutral machine
Tools, or in this case machines, could be said to have no agency or intent; they could be seen as neutral. A scalpel in the hands of a doctor may aid curing, but in the hands of a psychopath it may aid killing. Yet as more machines are programmed to take more actions in the world, carrying the intent of the program without the ongoing guidance of the programmer, tools can no longer be seen as neutral agents: their actions implicate them and their code. For example, can we ever discover the intention behind an action a machine takes when it acts on a network of learned behaviour? Is it even useful to think this way? All one can say is that it achieved its intent through its action, for better or worse. It might be that the way we think and talk about such unknowns needs to change, or perhaps a sense of traceable intention needs to be added, something we can interrogate, which is currently absent in most AI.
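A minimal sketch of what "traceable intention" might look like: every automated decision is written down with the inputs it acted on and the objective it was serving, so a human can interrogate it afterwards. The DecisionRecord class and its field names are hypothetical, not an existing AI API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    action: str        # what the machine actually did
    objective: str     # the goal the code was optimising for
    inputs: dict       # the sensor or model inputs it acted on
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

decision_log: list[DecisionRecord] = []

def record_decision(action: str, objective: str, inputs: dict) -> None:
    decision_log.append(DecisionRecord(action, objective, inputs))

# Illustrative entry only; the values are invented for the example.
record_decision(
    action="brake hard",
    objective="minimise collision risk",
    inputs={"obstacle": "cat", "speed_kmh": 48},
)
```

Whether such a log counts as an "ethic" is beside the point; it at least gives us something to question after the fact.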
Is there a universal code of ethics?
Programming ethical machines presupposes a universal ethic. The problem here is various and, as described earlier, is partly one of relative flexibility. There are many ethical theories, and they yield differing results when applied to the same situation. One of the most fascinating in this instance is virtue theory. The Ancient Greeks described their gods as the most virtuous; ethical discussions could be deferred to a godly way of thinking. As machines have no family, no vices, and no allegiances per se, we could be creating ethical programs that would outstrip human decision making and embody this idea of the virtuous, being more godly. Though it could be said that this strips the humanity, or the duty, from the equation, these virtuous machines could be thought of as the 'other': knowing that they aren’t pretending to be human, they can look outside our humanistic view for ethical solutions.
Some thinkers hold that ethics isn’t reducible to code. Whether this is true or not is a moot point. Machines are already being given the power to kill and save us, so if it isn’t a layer of ethical decision making then it is a layer of something that should enable us to know what to expect from any machine agent, whether traditionally thought of as an ethic or not.
The Business of Business
Business, and indeed our capitalist system, is mostly concerned with profit to shareholders, not necessarily the greater social good. All the major players in the technology industry are racing to create services that will monopolise a sector (probably a topic for another article) using ‘intelligent’ algorithms or machine ‘learning’. The future is currently in the hands of shareholders, entrepreneurs and programmers. There is a question as to whether this much power should be deferred to so few, those few for whom money, fame or scientific endeavour are the goals of the day.
Martin Daum, the president and CEO of Daimler Trucks, maker of the world's first licensed autonomous truck, said that "liability is not a question for today, but for the future". They’re not thinking about liability, so what else aren’t they thinking about? And when I get run over by an automated truck, I’m not really going to be so concerned about liability… This sort of apathy is abhorrent, and it plagues the ‘build it and they will come’ mentality of the tech industry. Should one build it? How will it impact the world? What does the future look like with and without this piece of technology? What else should be in place to support this innovation?
Education
“Educating the mind without educating the heart is no education at all”, Aristotle.
And finally, as Aristotle did, I believe education should play a big part in the machine ethics debate. I have yet to hear of a computer science, design or entrepreneurship course that teaches the social and ethical impact of the things its students go on to create.
This isn’t an article about answers or opinions, but about our current situation, and an assertion that we should think about ethics in how we make technology and the businesses behind it.