Things to remember from a couple of New Scientist articles (the easier ones) I have just read:
- The term “Artificial Intelligence” was coined in 1956, when a group of scientists and engineers gathered at Dartmouth College and began work on this new field.
- Goals included machine translation, text understanding, computer vision, speech recognition, control of robots and machine learning.
- First attempts to program computers were top-down: engineers expected to create mathematical models of how speech, text or images are processed, which could then be implemented in the form of a computer program.
- It was even hoped that breakthroughs in AI would help to understand our own human intelligence.
- After a few decades of unfruitful work along these lines, engineers turned to bottom-up approaches – i.e. sifting through lots of simple data to detect correlations. Early successes were in things like recommending purchases based on similar transactions in the past.
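That bottom-up idea – sifting past transactions for correlations and recommending accordingly – can be sketched in a few lines. This is my own toy illustration (the basket data and function names are invented, not from the articles): an item-to-item co-occurrence recommender.

```python
from collections import Counter
from itertools import combinations

# Past purchase baskets (invented illustrative data).
baskets = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "eggs"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(item):
    """Rank other items by how often they were co-purchased with `item`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [it for it, _ in scores.most_common()]

print(recommend("bread"))  # other items, most frequently co-purchased first
```

Note there is no model of *why* bread goes with milk – just counting, which is exactly the point the articles make.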
- In 1997 IBM’s Deep Blue computer – capable of analysing thousands of chess games and working out optimal parameters from them – beat Garry Kasparov.
- The combination of statistical learning algorithms and masses of data offered no insight into human intelligence – but it trumped theoretical models and led to a paradigm shift in AI.
- These machines can be called “intelligent” because they do actually change their “behaviour” based on millions of examples (“experience”). At a large enough scale, machines start to do things that you can’t explicitly program them to do – such as complete our sentences or predict our next click on the screen.
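The sentence-completion trick mentioned above is, at its simplest, just next-word statistics. A minimal sketch – the tiny corpus and the greedy “pick the most frequent next word” rule are my own illustration; real systems train on billions of words:

```python
from collections import Counter, defaultdict

# A tiny training corpus (invented; real systems use vast text collections).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count which word follows which: pure statistics, no grammar, no meaning.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(word, length=4):
    """Extend a sentence by repeatedly choosing the most frequent next word."""
    words = [word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the"))  # → "the cat sat on the"
```

The output looks like adaptive, almost linguistic behaviour, yet the program holds no representation of what any word means.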
- This all looks like highly adaptive behaviour from the machine, but actually the machine has no internal representation of why it does what it does. It’s a kind of statistical trick.
- Even translating text from one language to another (surely one of the most impressive current applications) is done by applying statistical techniques to enormous data sets. As a human, I find it bizarre that the machine can do this without any understanding of meaning.
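The same statistics-over-data approach can be caricatured for translation. A toy sketch under loud assumptions: the four-sentence English–German “parallel corpus” is invented, and real statistical machine translation uses far more sophisticated alignment and phrase models over millions of sentence pairs – but the word-by-word co-occurrence counting below captures the spirit of “no understanding, just correlation”.

```python
from collections import Counter, defaultdict

# Tiny invented parallel corpus of aligned sentence pairs.
pairs = [
    ("the house", "das haus"),
    ("the book", "das buch"),
    ("a house", "ein haus"),
    ("a book", "ein buch"),
]

# Count how often each source word co-occurs with each target word.
cooc = defaultdict(Counter)
for en, de in pairs:
    for e in en.split():
        for d in de.split():
            cooc[e][d] += 1

def translate_word(word):
    """Pick the target word that co-occurs most often with `word`."""
    return cooc[word].most_common(1)[0][0]

print(" ".join(translate_word(w) for w in "the book".split()))  # → "das buch"
```

The counts alone disambiguate: “book” co-occurs with “das”, “ein” and “buch”, but only “buch” appears in *both* sentences containing “book”, so it wins – without the program knowing what a book is.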
There are potential problems with the rise of AI because our current legal, social and cultural frameworks are not ready for it. For example, AI could take over many jobs, with social and economic consequences we are unprepared for. Its predictive capabilities may be useful for insurance, loans and policing, but may also result in implicit and unintended discrimination. And what happens as we place increasing dependence on machines like self-driving cars?