What even is AI anyway?

Published: 26 Jun 2019
  Tags: AI, Ethics, Article

AI is trendy and alluring; having recovered from several AI winters, it is now attracting millions of dollars in investment, yet it is also deemed one of humanity's biggest existential threats. How can we make sense of any of this? Let's take a quick step backwards and ask:
What exactly is it?

This is by no means an exhaustive history, nor is it a technical deep dive. This article is a collection of historical and cultural references, anecdotes collected from my 4 years working on the [Machine Ethics Podcast](www.machine-ethics.net), and my own thoughts on the questions at hand. I write this both as an enthusiast and a teacher of data science (mostly with the lovely people at Decoded.com). I write this now because it seems to me that never has there been a time when asking the question "what is AI?" has been met with such a wide range of responses.

TL;DR - When someone refers to AI out of context, ask them: "What does it mean to you? Which aspects of its technologies, goals, cultures, philosophies or belief structures are you conversing with currently?" From this vantage point we should be able to have more specific, positive, less confused conversations about Artificial Intelligence. Here we will walk through some of these ideas.

Let’s start with a quick anecdote: after 2 years of the Machine Ethics podcast I felt my friends and family should probably be bored of hearing about AI-related news and theories. However, on viewing a news article with an image of the Terminator titled "Terminator-like AI movie death scenarios are just 'one to two decades away'", it seemed they were back to square one, all the nuance and depth of the conversation lost in idle future-gazing… I’m not saying that some of the world’s most “intelligent” people are wrong; I was just fed up of having “that” conversation. Let’s look at some of the breadth of the matter:

Of Golems and Greeks

The recent AI: More than Human exhibition at the Barbican Centre in London has an excellent preamble to AI technologies, displaying cultural references to human-designed creatures such as the Golem, Prometheus, robots, automata and Japanese Kami (Dr Jennifer Whitney gives a great introductory talk here on the cultural artefacts of AI and robots, and the tension they create). For a couple of millennia (at least) there has been a cultural heritage in the idea of non-human, human-like intelligence. In this instance I'm referring to the intelligence attributed to non-human things that show human-like autonomy: in the story of the Golem they are essentially dumb in their inner world but respond to commands with autonomous intelligence, whereas Kami spirits can be tricksters, helpful, malevolent and so on, with inner worlds and goals of their own.

These stories served as inspiration, praise and fearsome tales, providing ideals and belief systems, and warning of the dangers of technologies (apt perhaps in this current technological climate). In the Greek myth of Prometheus, he creates human life out of clay and the fire of the gods, and is ultimately punished for his transgression. It's no wonder we have misgivings about creating new life out of inanimate stuff.

Regardless, during the Renaissance there arose a craze for ever more impressive automata. Automata came in all shapes and sizes, two of the most famous being a goose that ate and laid eggs, and a humanoid chess player. Automata were about imitating nature through mechanical design: entertainment for the aristocracy. The automata were almost taboo, titillating, as they hinted at a man-made autonomy, a sense of godly power, and indeed a designed inner world behind their actions. As also elaborated at the Barbican exhibition, these realisations of the idea of intelligent stuff often appeared somewhere on the uncanny valley, drawing slightly repulsed responses from onlookers, if not outright reproach from those with religious convictions. It can be cognitively jarring to see something obviously inanimate behaving animately. It simply isn’t a world we know.

Of Monsters and Robots

Frankenstein, Metropolis, I, Robot, 2001, Astro Boy (Mighty Atom), Ghost in the Shell, Moon: notable for conforming to the now-familiar trope of AI rebelling against its human overlords.
Cultural artefacts enable us to lead conversations with some familiarity, to discuss concepts and indeed their repercussions. This is most evident in the artefacts discussing AI themes in the 20th century. It is now a well-worn trope in science fiction that in the future there will be robots, AGIs (artificial general intelligences) or similar man-made, biologically based creations, and most of them want us dead! From Frankenstein's monster to 2001's HAL, Metropolis' robot Maria and Terminator's... well, Terminators, they all pose as antagonists in some way to humankind (though this reading is extremely simplified). Few mainstream films paint future AGIs as beneficent or passive: I can think of Moon and Interstellar off the top of my head (also enjoyable films).

It is apparent that our technology is changing how humans live, and many conversations about the direction of our endeavours are portrayed in fiction. One well-known fiction holds the basis for the conversation of the academic field of Machine Ethics: Isaac Asimov's I, Robot. I, Robot is the door-opener for many to the conversation of endowing machines with ethical reasoning. Though again it is a fiction demonstrating how NOT to proceed, it gives us a way of thinking about machines that have some amount of autonomy to complete tasks, and as Nick Bostrom theorises: we'd better be bloody sure we instruct those tasks correctly or things might come back to haunt us. I feel Nick Bostrom's paperclip thought experiment must have been influenced by Asimov's work: a super-intelligent AI could be given a task that would ultimately be conducted more efficiently if we didn't exist. Most of Asimov's robots and, indeed, the paperclip maximiser don't necessitate a sentient AI (with an inner world); these machines have the ability to reason and enact instructions to a superhuman level. It is perhaps the realm of philosophy to help deduce whether inner consciousness is required to achieve superhuman capabilities: the AIs don't necessarily know why they're doing the action or have any internal “feelings” about it.

So without feelings we get I, Robot (though not Bicentennial Man), paperclip maximisers and HAL. With feelings we get Ghost in the Shell, Astro Boy, or Marvel's…

Alan Turing and John McCarthy

Over the 20th century, computers graduated from mechanical curiosities to intricate code-breaking machines, and finally to the industrial revolution of the information age towards the latter part of the century. Though there is much we could talk about here, it was perhaps Turing whose ideas shone brightest at this time. John McCarthy had not yet coined the term Artificial Intelligence in the wake of cybernetics' success, but Turing had already postulated the universal Turing machine (which could compute any computable instruction) and the Turing test.
The Turing test is a demonstration of the idea that at some point computers will be so capable (trying not to say “intelligent” here) that they could even trick us (presuming you’re a human reading this) into believing they are human. This idea has resonated through the decades, first in the famous Dartmouth Workshop of 1956, which gave birth to the term and to an academic appetite for a largely engineering discipline dedicated to the notion of human-like thought capabilities (or intelligence).

Marrying the pursuit of AI with the already popular ideas of robotics, automation (Ford famously the icon of early automation) and the automaton brings forth a potent mix of ideas that indeed gave birth to Terminator, I, Robot, 2001 and many, many other cultural fictions of a future with Artificial Intelligence.

The glorious and the mundane

Harnessing computational progress has been our preoccupation over the last several decades. More, faster, cleverer. We have toppled chess champions with brute-force computation. Gained almost omnipotent communication and informational potential that slots into our pockets. Seen a step change in algorithmic thinking to beat the best human Go players. It is no wonder that ideas of AI existential risk are high on many people's minds.

The technology is surely glorious, but it has also brought many new downsides with it. For our purposes, though, much of the newness in AI research and recent conversation has been due to Machine Learning algorithms. Again, ML is one of those slippery terms; for us, ML algorithms are statistical modelling algorithms used to create predictions. Some of them can be worked through by hand; for others it is extremely unclear how they infer anything at all. AI in many jobs today can be described as the mundane (in terms of the lack of magic or wonder) use of ML frameworks to predict an outcome from data: whether you get a loan or not, whether that email is spam, what you’re likely to buy. Artificial intelligence for many has been conflated in this way as it is both practical and close to hand. There are universities, online learning courses, conferences and companies dedicated to making better predictions from incoming data.
Though the philosopher in you might say “well, that’s not a world away from humans turning energy and information into action”, it still seems a world away from what we now call AGI (artificial general intelligence: a system capable of reasoning in many domains). It is also not completely clear how the road to AGI may be trodden, nor what will happen when we get there..?!
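To make that "mundane" sense of ML concrete, here is a toy sketch (not any real spam filter; the two features and all the numbers are invented for illustration): a tiny logistic regression that learns to predict "spam or not" from a handful of labelled examples.

```python
import math

# Toy "spam" data: each email is reduced to two hand-picked features:
# (count of suspicious words, count of exclamation marks). Label 1 = spam.
emails = [((3, 4), 1), ((5, 2), 1), ((4, 5), 1),
          ((0, 0), 0), ((1, 1), 0), ((0, 2), 0)]

# Logistic regression trained by plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
for _ in range(2000):
    for (x1, x2), label in emails:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))  # predicted P(spam)
        err = p - label
        w[0] -= 0.1 * err * x1
        w[1] -= 0.1 * err * x2
        b -= 0.1 * err

def predict(x1, x2):
    """Probability that an email with these features is spam."""
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

print(f"P(spam | 4 suspicious words, 4 exclamations) = {predict(4, 4):.2f}")
print(f"P(spam | 0 suspicious words, 1 exclamation)  = {predict(0, 1):.2f}")
```

This is the "statistical modelling to create predictions" idea in miniature: no inner world, no understanding of what spam is, just parameters fitted to data.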

What do I mean when I say AI?

I hope through reading this that you can see there is a melting pot of ideas and heritage in Artificial Intelligence. What do I see when people ask me about AI? Well, I see HAL from 2001: A Space Odyssey, complex predicaments from I, Robot, Ghost in the Shell's internet consciousnesses, and more besides. However, how do I think about AI? I concern myself mostly with genetic algorithms, Machine Learning and games. Why? Because they are close at hand, interesting and hugely beneficial. Whether AGI or super-intelligence is a boon or a bane is totally unclear.
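For the curious, a genetic algorithm of the kind mentioned above can be sketched in a few lines. This is a hypothetical, minimal example (the classic "max-ones" toy problem, not anything from the article): a population of bitstrings evolves towards all ones via selection, crossover and mutation.

```python
import random

random.seed(0)  # reproducible run

LENGTH, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(ind):
    """Count the 1s: the toy objective we are maximising."""
    return sum(ind)

def select(pop):
    """Tournament selection: the fitter of two random individuals."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

# Start from a random population of bitstrings.
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    next_pop = []
    while len(next_pop) < POP_SIZE:
        p1, p2 = select(pop), select(pop)
        cut = random.randrange(1, LENGTH)      # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.1:              # occasional mutation
            i = random.randrange(LENGTH)
            child[i] = 1 - child[i]
        next_pop.append(child)
    pop = next_pop

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```

No magic here either: blind variation plus selection pressure, yet it reliably climbs towards the goal.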



Quotes from the podcast in response to the question: what is AI to you?

  • ... doing the right thing at the right time. AI being an artificial version - Rob Wortham
  • ... a moving of the goal posts, what is AI is constantly moving - Cosima Gretton
  • Not a technology but a goal - Greg Edwards
  • Soon as it works no-one calls it AI anymore ... zero intelligence, amazing capabilities - Luciano Floridi
  • ... achieving goals - Christopher Noessel
  • Something that can generalise - Damien Williams
  • Machine Learning, you get an input, it’s processed by software and you get both an output but also an update to the software - Miranda Mowbray


© Ben Byford | website created by Ben Byford using Processwire