On a whim, I decided to attend a New Scientist event today to find out more about Artificial Intelligence (or Machine Learning). I read the newspapers (so I know stuff like Facebook using its subscribers as guinea pigs to see how good or bad news affects their mood) and the easier articles in New Scientist, but my life doesn’t really bump up against the most recent developments. A certain limited use of an iPad is as far as I get.
It was a brilliant day – interesting and demystifying, although there was not much that was completely new to me. Nothing too techy. The good news is that the Terminator scenario is highly unlikely.
From my notes, with some gloss of my own:
Basically, machines with artificial intelligence are able to communicate with and respond to you; make decisions; sense their environment and take action. A machine such as AlphaGo from DeepMind (now part of Google – the ownership of AI is problematic) was able to beat the world’s top Go player recently. (Go is much bigger in Korea and S.E. Asia than here. It appears to be played with mint imperials.) The DeepMind programmers watched as their “baby” learned more about Go in 3 days than humans have learned in decades . . . and it created moves that humans had never thought of. However (as later speakers pointed out), it’s just a program to play a game. The fact that one of the speakers illustrated Go with a picture of two middle-aged men bent companionably over a board one evening, backlit by the light from a shop, made me question its value. But it is useful: it demonstrates machine learning in action.
Personal digital assistants which perform little tasks for you (which you can do perfectly well yourself, I harrumphed; the banality of some of the things AI does brings Marvin the Paranoid Android to mind) have been programmed to appear human-friendly. So you tell one that you’re not feeling well . . . and it responds with “I’m so sorry to hear that.” (One of several WTF moments for me. It teamed nicely with the apology from my lift.)
Machine learning is also getting creative (cue the Lovelace Test, perhaps more significant than the Turing Test: could you tell that a work of art had been created by artificial intelligence?). There are artists who write programs so that computers can produce paintings in no recognisable genre that are admired by humans . . . until you tell them that a computer created the work, whereupon they suddenly discover that they don’t like it as much as they thought. Its figurative stuff was meh, but it used colours in a pleasing way. But the AI is also capable of telling you what it wanted to paint and how it feels about the finished product. This is distinctly spooky – it’s using words, but how can those words carry any sincerity?
Deep Blue beating Garry Kasparov at chess in 1997 was one of the AI high points, but everything rather stalled after that. There has been more interest in recent years – possibly linked to the exponential rise of computing power, which now gives the machine in your pocket near-magical powers. This – and, presumably, the fact that a couple of men have become rich beyond the dreams of avarice – has rekindled development.
There was one slide of what the inside of the brain looks like when a human thinks: the phrase “neurons fire” is literal – they do suddenly light up like a firework.
Artificial intelligence machines judge each other’s work to improve it (I state that without fully understanding it); the technique is called a Generative Adversarial Network (ditto): one network generates candidates while a second judges them, and both get better through the contest.
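(For anyone who, like me, wants the hand-waving pinned down a little: nothing like this was shown on the day, but a toy version fits in a few lines. The sketch below is in Python with PyTorch, and everything in it – the tiny networks, the learning rates, the made-up “data” of samples from a bell curve – is my own illustrative assumption, not anything the speakers presented.)

```python
# A toy Generative Adversarial Network: a generator learns to fake samples
# from a Gaussian, while a discriminator learns to tell real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian with mean 4, std 1.25 (arbitrary).
def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss = nn.BCELoss()

for step in range(5000):
    # 1. Train the judge: real samples labelled 1, fakes labelled 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss(discriminator(real), torch.ones(64, 1)) +
              loss(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the forger: try to make the judge say "real".
    fake = generator(torch.randn(64, 8))
    g_loss = loss(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f}")  # with luck, roughly 4 and 1.25
```

The point, as far as I can make out, is the contest itself: the forger only improves because the judge keeps improving, and vice versa.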
A great quote, which I’ve heard before:
The question of whether machines can think is about as relevant as whether submarines can swim.
Edsger W Dijkstra
DeepMind’s AlphaGo is now the world Go champion in all but name, way ahead of any human. But that’s practically all it can do. It’s an algorithm that can learn and adapt, and its forerunner was developed by playing Atari games (ah, Pong – that takes me back). It got so good at them, and worked out such unexpected ways of winning, that its human programmers regarded it as a cheat! As it learned, it used less power; but equally, as it learned new things, it forgot its previous knowledge – what researchers call “catastrophic forgetting”. (Like me and logarithm tables, then.)
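(Again, no code was shown, but the published technique behind the Atari work is reinforcement learning – “Q-learning”, with a neural network standing in for a lookup table. A bare-bones tabular version on a made-up five-square corridor, rather than Pong, shows the trial-and-error part; all the numbers here are my own illustrative guesses.)

```python
# Toy Q-learning: an agent in a five-square corridor learns, purely by
# trial and error, that walking right leads to a reward.
import random

random.seed(0)
N_STATES = 5                        # squares 0..4; square 4 holds the reward
ACTIONS = [-1, +1]                  # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        # Explore at random sometimes (and whenever we know nothing yet),
        # otherwise pick the action we currently believe is best.
        if random.random() < epsilon or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # The Q-learning update: nudge the estimate for (state, action)
        # towards the reward plus the discounted value of the best
        # follow-up move.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print(["left" if q[0] > q[1] else "right" for q in Q[:-1]])
# typically prints ['right', 'right', 'right', 'right']
```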
The popular view of robots is as “metal people”, but this is misleading. Robots have potential for defined tasks, but banal things like getting a glass of water from the kitchen are way beyond them (and likely to remain so for a long time). Robotic AI is already here, and is possibly the most fruitful use of AI: industrial, domestic (e.g. vacuum cleaners – where do I get one?!), driverless vehicles (but plenty of problems there), drones, NLP, medical, mining, logistics (e.g. Amazon robots in their warehouses), and military (I learned of “slaughterbots”).
The problems with robots are:
- Do we trust them?
- Do they have ethics? (which means “how will they be programmed?”)
- Will they replace us?
- What does it mean to be human?
- Are “killer robots” coming?
There were some amusing insights: in Japan, it was discovered that the robot vacuum cleaner didn’t work as well as it did in Europe. This was because of a Japanese craze for decorating them with cat ears, which messed up their vision system.
The speaker was particularly interested in robots working together to solve problems. She used swarming models from nature (e.g. ants, bees, starlings), and it was startling to watch miniature robots bumping off each other and communicating to solve a task.
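(The classic model behind these swarming demos is the “boids” idea: each agent follows three purely local rules – don’t crowd your neighbours, match their heading, drift towards their centre – and flock-like behaviour emerges with no leader at all. A minimal NumPy sketch, with weights and radii that are simply my own guesses:)

```python
# Bare-bones "boids" flocking: each agent reacts only to nearby agents,
# yet the group as a whole ends up moving like a flock of starlings.
import numpy as np

rng = np.random.default_rng(0)
N = 30
pos = rng.uniform(0, 10, (N, 2))        # positions in a 10x10 arena
vel = rng.uniform(-1, 1, (N, 2))        # velocities

def step(pos, vel, radius=2.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < radius) & (d > 0)    # neighbours within sight
        if not nbr.any():
            continue
        cohesion   = pos[nbr].mean(axis=0) - pos[i]   # drift towards the group
        alignment  = vel[nbr].mean(axis=0) - vel[i]   # match neighbours' heading
        separation = (pos[i] - pos[nbr]).sum(axis=0)  # don't crowd
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.maximum(speed, 1e-9)       # keep a constant speed
    return (pos + new_vel * dt) % 10, new_vel         # wrap around the arena

for _ in range(500):
    pos, vel = step(pos, vel)

# How aligned is the flock? 1.0 means everyone heading the same way.
print("alignment:", np.linalg.norm(vel.mean(axis=0)))  # typically near 1
```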
A problem with robots is the risk of failure or sabotage; their constant connectivity makes them vulnerable to disruption – malicious or otherwise.
I was introduced to the distinction between weak/narrow AI (i.e. dedicated tasks) and strong/general AI, which is still a very long way off.
LEGAL AND ETHICAL IMPLICATIONS
The problem is that the training data sets (the original data that algorithms work on) and the algorithms themselves are not transparent; often they are proprietary information and hence can legally remain opaque. This leads to problems (which I have read of); for example, AI used in the US for determining sentences for convicted offenders may be based on biased data*. Chatbots, which “learn” from their online interactions with humans, can be quickly and easily “taught” to be racist or vile in other ways if they are exposed to such utterances. Another problem is liability: who is responsible if, say, a driverless vehicle is involved in a death?
THE FUTURE OF AI
AI now has better success at image recognition (mostly of cats, it seems to me) than humans: about 5% error. However, unlike humans, who can learn from just one example, AI requires many examples (although it sifts through them pretty quickly).
Humans no longer necessarily have the manual or cognitive advantage over AI in certain jobs or activities.
The impact on future employment is impossible to forecast, so newspaper headlines claiming that 15 million jobs in Britain will disappear should be taken with a pinch of salt. The data on which this figure was based included an assumption that there is a 98% probability of bicycle mechanics being replaced by AI. Well, I for one would like to see a robot change a Brompton rear wheel! I’d willingly take it on holiday.
China is going to be very significant in what happens with AI; they are investing billions. (I did read the other day that, with the government’s – ahem – relaxed attitude to data protection, China is able to work with vast amounts of data about people’s movements and activities and design their new cities accordingly. And other things besides.)
The speakers responded to written questions at the end.
What stood out for me:
There is a form of co-evolution: humans are learning from AI as well as vice versa. For example, the European Go champion is now much higher up the rankings since he started playing against (and generally losing to) AlphaGo. Or the way surgeons and AI can co-operate to produce more accurate diagnoses than either working alone.
There’s a potential problem in competing algorithms – e.g. two driverless cars in a scenario where a crash is going to happen. Which one gives way?
The behaviour of AI is not routinely foreseeable once it gets the bit between its teeth. (Ooof – that metaphor is odd.)
There is a danger with algorithms that determine, say, whether you should get a loan or what your life insurance payments should be: if someone knows how they work, they can be gamed.
AI shows how far a machine can go without being intelligent (as humans understand the term) or having insight. (Hmmm . . . not only AI.)
There is a danger of humans giving up too much data to AI companies and leaving ourselves open to manipulation or control. (See China above.)
One speaker forecast that humans may end up adapting to low-level AI and, unwittingly, giving the impression that it is rather cleverer than it really is – a kind of dominance, with humans adapting to AI rather than the other way round. Perhaps AI as the mediaeval aristocracy and humans as the serfs. And that’s the Terminator-lite scenario in a nutshell.
* I subsequently read of facial recognition software used by the police whose training data sets were predominantly of white faces; consequently it’s currently not very accurate at distinguishing between black faces.