
Musing towards a more hopeful AI future

It is clear that we are going to have to deal with artificial intelligence and its various effects on our industry and the wider economy.

To label something artificial implies that we know what’s real. What is real intelligence? Do we mean human intelligence? And if we do, does that mean crows, chimps and dolphins are not intelligent? Or is intelligence something bigger and more abstract, such as evolved, cultural intelligence?

This latter thought hints at a difference between the individual and the collective: an ant or a bee may not clear my hurdle for intelligence, but an ant colony and a bee swarm would. Having decided what the real thing is, can we articulate an adequate definition of it?

The label ‘artificial’ may also send us towards an unnecessary dead end. What if the future real thing is some combination of natural and artificial? Or, what if artificial becomes the real thing? We will let that particular thought drop for now.

As for defining intelligence, I defer to David Krakauer, president of the Santa Fe Institute, who says it means “making hard problems easy”.

Krakauer has also defined stupidity as “making easy things hard”. I like these definitions for a number of reasons: they are short and use short words; they apply to both biological neurons and printed circuits; and they give wide scope for exploration.

I have already suggested above that culture could be a form of intelligence – by laying down behavioural rules, or norms if you prefer, culture can make hard problems easy by showing us how to behave or exercise choice in a given situation.

So it seems that there are multiple forms of intelligence, not all of which are obvious under casual observation. What about embodied intelligence (morphological computation)? It is possible to build a purely mechanical machine, based on human geometry (with a pelvis and two legs), that will walk on a treadmill, suggesting that evolution has found a design that can perform a sophisticated function without the need for external computation.

Or consider the performance of top athletes, who make hard things look effortless. We could call it skill, or we could call it movement intelligence. Essentially, their hours of training can be thought of as creating a set of reflexes that fire with precise timing to achieve the desired result.

But movement intelligence may require language intelligence, a conjecture advanced by John Krakauer (David’s brother) of Johns Hopkins University. The top athlete has a coach providing language-based instruction. There are no videos of monkeys juggling on YouTube, so perhaps the movement intelligence behind juggling can be learned only through language – by being told what to do.

Going back to reflexes: they are, by definition, involuntary. They are too fast for us to think about, which means they can show us the limits of knowledge. Experiments have shown that you can give subjects the necessary knowledge – that the handlebars of a bicycle have been reversed, for example – but it is of no use to them. Apparently, it still takes months to retrain bike-riding reflexes.

All very interesting, but we should get back on track. I would like to return to the thought I dropped earlier, about combining natural intelligence (whatever that is) and artificial intelligence (whatever that is).

If you subscribe to the Pablo Picasso school of thought – “computers are useless, they can only give you answers” – then, in effect, you believe in the cognitive outsourcing model. Under this model, we give computation problems to a computer on the basis that it can perform them faster, cheaper, and probably more accurately than we can, just like in any good outsourcing arrangement. Nothing much of interest is implied by this model. There is no transformative leap in our human intelligence, just some solid productivity improvements. This is possibly why AI can be viewed as threatening. As the machines advance faster than we can, what if they start to know things that we don’t?

An alternative model is available, however – the cognitive transformation model. This states that as we internalise new cognitive technologies, we change the range of thoughts we can think. So computers, under this model, become a medium for expanding and spreading cognitive technologies. AI then becomes less threatening, as it can be viewed as offering us more powerful cognitive technologies that we will internalise in time – giving us more powerful ways of thinking (and allowing us to design more powerful AI, and so on). Now that would be real intelligence.

To me, the cognitive transformation model offers optimism and hope as an alternative to the dark march towards the technological singularity (the point at which machines can design better machines than we can, and therefore take charge), so I would like to believe it’s true. But hope is not a strategy. And the extraordinary pace of development in AI makes understanding intelligence a practical question. The good news is that there are many bright minds studying intelligence in academia. The bad news, David Krakauer says, is that stupidity is the single biggest threat to mankind – and no one is studying that.

Tim Hodgson is head of Thinking Ahead Group, an independent research team at Willis Towers Watson and executive to the Thinking Ahead Institute.
