Look at this computer, phone or tablet, with its big, stupid screen – will it ever be able to do what you’re doing now? Will it be able to pretend to be working while reading blog posts?
Lee Se-dol has just retired from his job, despite being really, really good at it. As the South Korean champion of the ridiculously complex board game Go says, ‘Even if I become the number one, there is an entity that cannot be defeated.’ So is it time for you to throw in the towel too? Will this silly machine take your job?
I remember listening to my grandad lamenting the rise of technology and how, one day, all letters would be delivered by robots. As a retired Royal Mail employee, he was convinced that the days of the humble human postie were numbered, and that letters would soon be in the hands of street-roaming androids.
This was around the mid-nineties; somebody had just found a use for the ancient but hitherto redundant @ key, and we were all learning to surf the information-super-highway on our multimedia PCs. Silly old grandad! There would be no mail at all! Robots represented an old-fashioned view of the future; soon there would be paperless offices for those few people who weren’t working from home. Physical deliveries were dead.
Well, that didn’t happen. While the number of letters being delivered did decline, parcels weren’t going anywhere. OK, yes, dear reader, you clever clogs – parcels were going to lots of places. You know what I mean. The point is, a young man called Jeff had just established a little online bookshop and had plans to grow it into a behemoth that would deliver everything you’d ever buy. And Amazon is doing everything within its enormous power to replace its human workforce with robots. Turns out Grandad had it pretty much spot on. He misunderstood the future so badly that he ended up being right, in a stopped-clock-right-twice-a-day kind of way.
He wasn’t saying anything new, of course; people have been worried about machines taking human jobs since the Luddites, those angry nineteenth-century English textile workers who vowed to smash the machines that would replace them. Ever since, others have argued that new tech doesn’t mean fewer jobs, just different jobs. So far that seems to have been borne out, despite the fact that machines are, surely, supposed to give us more time.
British economist John Maynard Keynes famously forecast, back in 1930, that we would all be working fifteen-hour weeks a century later. 2030 isn’t too far away and Keynes’ prediction looks unlikely, so perhaps the Luddites really had nothing to worry about. It seems that Keynes assumed that our increased productivity would be used to do the same amount of work in less time whereas, actually, we’ve chosen to work just as many hours and make more stuff instead.
We could argue all day about which choice was the right one but that would be moot because, at the level of the individual, there was no choice; we decided collectively that we would work as hard as possible to obtain as much stuff as possible. Again, there is scope for a debate about how much we were manipulated into that by marketers, but that scope is beyond this post. We’re lying in the bed that we made. The question is: will the current trend persist? Will we continue to find new forms of employment that would be as unimaginable now as SEO Specialist was in the 1980s?
According to PwC, there will be three waves of automation, and 30% of jobs will be affected by the mid-2030s. Their report seems well researched and doesn’t make any particularly wild claims but, by the same token, it doesn’t say anything especially interesting either: a lot of existing jobs will be automated away and new ones will appear in their place. The change we have seen since the invention of the steam engine will continue. By changing, everything stays the same.
But can that really last? I agree with PwC that there are three waves of automation but I don’t agree about the timeline nor the identity of those waves. I would characterise the situation like this:
- Phase 1 (1750–1998): some physical processes automated (some factory work, lighthouses, streetlights)
- Phase 2 (1946–): information automated
- Phase 3 (1997?–): intelligence automated
Phase 3 is a little complicated because, by some accounts, we’re already in it. The first completely autonomous car is around the corner, we’re told, and will be parking in the driveway any minute. This is the era of Artificial Intelligence and toasters are about to start talking.
The next major event will be the singularity, brought about by an intelligence equal to our own, yet able to replicate itself and evolve arbitrarily quickly, thereby spawning an army of exponentially intelligent robots, an occurrence which Stephen Hawking described as ‘either the best or worst thing’ for humanity. So when is the singularity due?
Nobody knows. The problem is that intelligence – real human intelligence – is incredibly complicated, and we really don’t give ourselves enough credit for how mind-blowingly amazing we are. We are impressed by the things computers do because they are so much better than us at certain types of task. Those tasks don’t require intelligence, though, just perfect memory and the ability to move electrons around in a programmable, predictable way. Only two types of entity can process information in a way that makes sense to us: humans and computers. Because we use our brains to do these things, we intuit that computers must be doing them with something like a brain as well. This, sadly, isn’t the case.
In their book Rebooting AI: Building Artificial Intelligence We Can Trust, Gary Marcus and Ernest Davis analyse the current state of AI and show that, far from being just around the corner, computers with capabilities akin to human-style thought will never be possible if research continues in its current form. The problem, they argue, is that the field has become too hyped up over ‘Big Data’, which is only a tiny aspect of the challenge. Because of the internet, Google, social media and all the various other sorts of connectivity, algorithms now have access to enormous datasets which previously weren’t available; this has resulted in some truly impressive feats, but these are really just, in the words of the authors, ‘parlour tricks’ rather than genuine intelligence.
To see what they mean, let’s think about Google Translate. While it’s difficult to know exactly how Google works its magic, we do know that it is based on ‘bitexts’: pairs of the same text in two different languages. Imagine, for example, an English book that has been translated into Spanish. In the English text, we see the phrase, ‘There is an apple in the fridge’. The Spanish translation, in the same part of the book, reads, ‘Hay una manzana en la nevera’. Type one of those sentences into Google Translate and the other one will pop out, as if by intelligence. And, well, who’s to say that isn’t intelligence? Isn’t that pretty much the same process as the one that goes on in the mind of a bilingual human translating that phrase? Not really; it’s a tiny part of what happens. There is so much more and, as I said earlier, the sheer amount of processing that the human brain will do with a phrase like that is mind-blowing.
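To make the bitext idea concrete, here is a deliberately naive sketch – purely illustrative, and nothing like the statistical machinery Google Translate actually uses – of translation as lookup over aligned sentence pairs. The pairs and the matching logic are invented for illustration; the point is that no understanding is involved anywhere.

```python
# A toy "translator": nothing but a lookup table of aligned sentence
# pairs (bitexts). It has perfect recall of what it has seen and zero
# grasp of what any sentence means.
bitexts = {
    "There is an apple in the fridge": "Hay una manzana en la nevera",
    "The book is on the table": "El libro está sobre la mesa",
}

def translate(sentence: str) -> str:
    # Exact-match lookup. Real systems generalise statistically from
    # millions of pairs, but the principle is still pattern matching
    # over previously seen text, not comprehension.
    return bitexts.get(sentence, "<no aligned example seen>")

print(translate("There is an apple in the fridge"))
print(translate("There is an apple on my head"))  # novelty defeats pure lookup
```

Anything outside the seen pairs simply fails; there is no fridge, no apple and no meaning inside the machine, only strings.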
If you say to someone, apropos of nothing, ‘There is an apple in the fridge’, that person will be confused and will probably politely ask you what you are talking about. The first thing they’ll think is something like, ‘Why is this person saying this to me? What is their motive? Perhaps they think they are talking to somebody else. There is no fridge, and if there were, there probably wouldn’t be an apple in it.’ Your poor, confused acquaintance will just walk away. When humans talk to each other, we communicate with regard to context; there is agency, and thus there are motives, behind our utterances. We use empathy to ensure that our messages are likely to be understood. When we hear the words of others, those words carry meaning, and that meaning is shaped by a lifetime of experience and our brains’ uncanny capacity for forming connections between everything in the world, by our imaginations, by our almost intuitive understanding of cause and effect, by our empathy and various other faculties.
What is an apple, anyway? It means so many things, depending on the words and context around it. It’s a fruit, which has a biological definition but, to me – probably you too – it’s a vaguely healthy, sweet thing that you can eat. It’s juicy, which means it contains sugary liquid, but not in the same way that a bottle of maple syrup does. Apples can be found in orchards, supermarkets and lunch boxes. Apples are crunchy, which is deeply satisfying, but my teeth squeak when I bite into them, which I find…distressing in some way. A princess ate a poisoned one in that fairy tale – which one was it? Snow White, I think.
How would a computer understand this very human conception of ‘apple’? Even if a computer comprehends the physical structure that gives an apple its crunchiness, would it associate it with the similar feeling of stepping on newly fallen snow? What would a computer do if you told it that there was an apple in the fridge? Into which variable of which program would it slot that information?
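As a hypothetical answer to the ‘which variable of which program’ question, here is roughly how literally a program holds such a fact. Every name and field below is invented for illustration: a rigid record the programmer anticipated in advance, with nowhere for the squeak of teeth on apple skin, the crunch of fresh snow, or Snow White to live.

```python
from dataclasses import dataclass

# A rigid, programmer-defined representation of "there is an apple
# in the fridge". The record stores exactly what its fields allow
# and nothing else: no associations, no context, no why.
@dataclass
class Item:
    name: str
    location: str

fridge_contents = [Item(name="apple", location="fridge")]

def contains(location: str, name: str) -> bool:
    # Answers only the one narrow question the programmer foresaw.
    return any(i.name == name and i.location == location
               for i in fridge_contents)

print(contains("fridge", "apple"))
print(contains("fridge", "snow"))
```

The program can confirm the apple is in the fridge, but ask it anything a human would effortlessly connect to that fact and there is simply no slot for the answer.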
It’s not impossible for a computer to learn all of these connections, forged as they are through human experience; it’s just extremely complex. If you ever dabble in programming (assuming it’s not something you do anyway), you’ll be amazed at how specific your instructions have to be for the computer to be able to handle them; ‘Just get me an apple, or something’ is a distant dream, and that’s a direct instruction – something computers are relatively adept at. When that does happen, what next? Well, as Pete says in an old episode of Friends:
So, you know, that’s why within a few years, voice recognition is gonna be pretty much standard on any computer you buy. So you can be, like, “Wash my car”, “Clean my room”. You know it’s not gonna be able to do any of those things, but it’ll understand what you’re saying.
That episode aired in 1997 and, rather than being a few years off, even the understanding part isn’t quite there yet, over twenty years later.
Inventor and futurist Ray Kurzweil predicts that computers will have human levels of intelligence by 2029, and that the singularity will be with us by 2045, so why the disparity? Why does Kurzweil believe this will happen when Marcus and Davis don’t?
It appears that the argument is partly based on which type of artificial intelligence we’re talking about. Kurzweil is extrapolating from the enormous progress we’ve made in the last few decades. Back in 2014, he wrote in Time Magazine that, ‘A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago.’ While this is true, and astounding, it is not really on the same path of progress that would lead us to human-like intelligence. We haven’t even really begun to travel down that path, the path towards AGI (Artificial General Intelligence), argue Marcus and Davis:
The central problem, in a word: current AI is narrow; it works for particular tasks that it is programmed for, provided that what it encounters isn’t too different from what it has experienced before. That’s fine for a board game like Go – the rules haven’t changed in 2,500 years – but less promising in most real-world situations.
The problem is that engineers are getting very good at extremely narrow forms of AI, but nowhere near the broad, general intelligence we humans have.
I’m not a futurist, and I won’t hazard a guess about when we’ll see AGI, but I do think Marcus and Davis make a lot of sense. We may get AGI, but not by making computers better at Go. That doesn’t mean Kurzweil is mistaken, though – only that, for his prediction to come true, we’ll have to start researching in a completely different direction. That, by the way, is the intention of the authors of Rebooting AI: that we start researching what makes up that most elusive of things we call, almost arrogantly, common sense.
So where does this leave you and your job? Should you, like Lee, bow out now? Well, if Gary Marcus and Ernest Davis are right, the answer is an emphatic ‘no’; we haven’t even rolled up our sleeves to start understanding common sense, not in any meaningful way. I hope they are right, too, because I envision a beautiful partnership between humans and machines and, if computers get too clever too soon, we’ll never get to live it. I see a glorious new golden age of art.
I Don’t Know Much About Art
I’ve often wondered what art is and, while I’m not absolutely sure, I think I have an acceptable working definition that will do for now:
Art is a glimpse into the inner life of another person – a celebration of all that it means to be human.
I hope that doesn’t sound too pretentious, because I think there really might be something to it.
The art form that I most identify with, that moves me the most, is music. Quite how music works, I don’t know, but there is definitely an element of pattern recognition, and of subconscious mathematics, going on. When we hear two notes in harmony, we are hearing the interlacing of two mathematically related sound waves. The brain notices that pattern, and its beauty means something to us.
Similarly, when we read a poem, the recognition of connections between two apparently unrelated objects, memories or feelings, means something. The links can be strengthened by the sounds of the words, the stress of the lines, and by rhetorical devices that bypass the conscious brain and speak directly to the subconscious.
The ability to recognise patterns instantly, to see connections that are almost impossible for the conscious mind to grasp, to understand the motives of others, to fathom long chains of cause and effect – all of these things are unique to humans. They evolved to help us survive, adapt, procreate and thrive, and they are incredibly powerful tools. When they’re not involved in the serious business of surviving and thriving, humans play with these faculties; humans create art.
Art is how we reveal the inner workings of the human mind and see into the minds of others. If animals made art, it would be totally different, because the inner life of other species is totally different. If we can be freed from the mental drudgery that computers deal with so well, we can focus on doing the stuff that only humans do well. That will be, if you can forgive my idealism, a wonderful time to be alive.
I see a future, though I can scarcely begin to imagine the specifics now, in which we humans can do our jobs in partnership with machines. They can take care of the remembering, the writing down, the coordinating, while we take care of everything else. If your job involves the slightest hint of creativity, of empathy, of that most elusive thing, common sense, you already create art. Machines could free you to take your work to previously unimagined heights.
In almost every job, there is art, and art is what humans do best. People have always invented tools to help us do the things we’re no good at: hunting, ploughing, finding websites about underwater baseball. We’ve simply used the time that our tools have liberated to do the things we are good at: drawing on cave walls, forming bands and putting pictures on lattes. In the future that I think lies before us, thanks to our friends the machines, every one of us is an artist.