Singularity

Published: Dec 06, 2012
Liam Desroy examines the end of the world as we know it.

At some point in the future, advancements in technology will change civilisation to such an extent that humanity as we know it will cease to exist. This moment is known as the technological singularity, and according to proponents like David Wood, it will be the most significant event in the history of mankind.

The concept of a technological event horizon has come a long way since its science fiction origins, growing from the pages of Vernor Vinge into a theory of genuine concern amongst some of the world’s leading mathematicians and computer scientists.

But despite this mounting attention from the experts, the subject has remained something of an oddity in the mainstream media. Whilst other potential world-enders, like global warming, frequently hit the front pages, it’s rare to catch even a pixel-sized glimpse of the singularity outside of the usual niche websites. If it really is such a shoo-in to end humanity, why does no one seem to care?

In a recent lecture, Wood – Catalyst and Futurist at the mobile operating system Symbian – identifies three separate pathways by which the singularity could arrive: first, that humans transcend biology, becoming as much part computer as human; second, the arrival of super-human general AI – human-created robots so powerful that they supersede us and take control; or third, simply that we reach such an advanced technological stage that it causes an exponential ‘punch’ of progression which forever transforms civilisation as we know it.

As soon as you hear the language Wood uses, it’s easy to understand why so many people have a hard time buying into it. The singularity’s background in fiction does it no favours, and one cannot help associating it with the works of Philip K. Dick. It’s hard, after all, to envision our own lives entangled in some kind of Matrix-esque scenario. But fiction has often been a medium for prophecy, so perhaps we should briefly suspend such cynical urges. If eminent figures like Nobel Prize-winners, MIT professors and the Research Director at Google – all of whom have spoken at the annual Singularity Summit – are taking the subject seriously, surely it’s time the concept of the singularity received more widespread attention.

One of the platforms on which singularity theory is based is known as Moore’s law. First outlined in a paper published by Gordon Moore in 1965, Moore’s ‘law’ is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. Based on evidence from between 1958 and 1965, Moore predicted that this trend would continue for at least another ten years. Nearly half a century on, Moore’s law is still roughly following this trend – a fact that suggests, to singularity theorists at least, that it is only a matter of time before technology surpasses our own intellect and all manner of robot carnage is unleashed upon the world.
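To get a feel for what that doubling claim amounts to, here is a minimal sketch in Python. The starting count of 64 components and the chosen years are illustrative assumptions rather than real industry figures; the point is only the arithmetic of repeated doubling.

```python
# Rough back-of-the-envelope projection of Moore's law: a quantity doubling
# every two years. The starting count and years are illustrative assumptions,
# not exact industry data.

def projected_transistors(start_count, start_year, target_year, doubling_period_years=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    start_count = 64  # assumed component count on a leading chip circa 1965
    for year in (1975, 1995, 2012):
        print(f"{year}: ~{projected_transistors(start_count, 1965, year):,.0f}")
    # Doubling every two years for nearly fifty years is over twenty doublings,
    # i.e. a factor in the millions -- the exponential curve the singularity
    # argument leans on.
```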

But Moore’s ‘law’ is merely a trend, and some, such as Microsoft co-founder Paul Allen, have been quick to point out that to look at an isolated period of growth and assume it will continue forever is problematic, to say the least. “These ‘laws’ work until they don’t work,” he argued in a piece for Technology Review, co-authored with Mark Greaves. For Allen, the fact that a pattern has emerged does not mean it must continue; he argues instead that the increase will eventually plateau.

A common explanation for why Moore’s law has stuck so closely to the predicted progression (the actual timescale being closer to 18 months) is that it has become a self-fulfilling prophecy: businesses set targets that adhere to Moore’s law because that is where they expect their competitors to be too. Nonetheless, the exact cause of the rate of increase is somewhat beside the point: the fact is that it is still doubling.

Enter Ray Kurzweil. Kurzweil is a prominent figure amongst singularity theorists, having written numerous books on the subject, and his answer to Allen’s criticism is that Allen is failing to take into account the unknown – that he is not considering the next paradigm. Kurzweil accepts that there are “many predictions that Moore’s law will come to an end,” but believes that another form of technology will emerge to continue the growth in computing power. After all, he argues, Moore’s law is itself already the fifth paradigm: “Intel and other chip makers are already taking the first steps towards the sixth paradigm.”

What I like about Kurzweil is that, unlike most singularitarians, he has gone out on a limb and stamped a date on the singularity: 2045. That, he believes, is the “future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” He is confident that his predictions are bang on track.

The key question for me, however, is not so much about technological advancement – developments in my lifetime alone have been almost beyond comprehension – but about how technology will take that step beyond humans when we are the ones developing the technology.

Perhaps the most famous of ‘super’ computers is the cognitive system known as Watson. This IBM initiative shot to fame in 2011 when it challenged two Jeopardy! champions at their own game. Placed onstage between the contestants, this flashing monitor proceeded to trounce the champions and did so in a manner that had the whole world falling for its digital charms.

Whilst conventional computer systems rely on ‘keyword and search’ techniques, Watson was designed to work with natural language, allowing it to interpret the complex question structure of Jeopardy! Working with natural language also allows Watson to gather knowledge in a similar manner to humans, reading sources such as Wikipedia to glean information.
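As a very rough illustration of the ‘keyword and search’ baseline Watson is being contrasted with – and emphatically not a description of how Watson itself works – the following Python toy uses an invented mini-corpus and clue to show how literal keyword overlap can tell you nothing about a Jeopardy!-style clue that a human answers by inference.

```python
# Toy illustration of a 'keyword and search' approach, with an invented
# mini-corpus and clue. This is NOT how Watson works; it simply shows how
# literal keyword matching fails on a clue that requires inference.

STOPWORDS = {"the", "a", "an", "and", "he", "she", "it", "did", "not", "why", "who"}

corpus = {
    "Isaac Newton": "English mathematician who formulated the laws of motion and universal gravitation",
    "Albert Einstein": "physicist who developed the theory of relativity",
    "Charles Darwin": "naturalist best known for the theory of evolution by natural selection",
}

def keyword_score(clue, text):
    """Count clue words (minus stopwords) that literally appear in the candidate text."""
    clue_words = set(clue.lower().split()) - STOPWORDS
    return len(clue_words & set(text.lower().split()))

clue = "He watched an apple fall and wondered why the moon did not"
print({name: keyword_score(clue, text) for name, text in corpus.items()})
# Every score is zero: no clue word appears in the Newton entry, even though a
# reader who understands the language immediately thinks of Newton.
```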

The Jeopardy! stunt, besides making for some deftly entertaining viewing, also demonstrated to the world that the concept of AI was very much a living, breathing reality. IBM are now pushing for Watson to take an active role in healthcare and finance – arguably the largest responsibility yet entrusted to a computer.

In the 1950s, the British mathematician Alan Turing set out some extreme visions of where cognitive learning systems would take us. When asked whether computers would ever match our own intelligence, he replied, “yes, but only very briefly”. His vision was that once a machine was made that could closely replicate the human mind, “it would not take long to outstrip our feeble powers”. Because of what Turing saw as their inevitable ability to converse with each other, upgrade themselves and learn without human help, “we should expect the machines to take control”.

No matter which side of the debate you are on, it seems the majority agree that, if super-human AI is created, it will be impossible to control what happens next – we will have fallen off the map of certainty. At the Wood lecture there was a discussion of what systems could be put in place to prevent this loss of power. The grim consensus in the room, however, was that if a computer can learn on its own, then it can always learn “a way around the system”. Once computers surpass our intelligence, anything we do will be futile.

You might think that governments could simply put a halt to technological development at some point, but the struggle for power and the drive of consumerism have arguably left us incapable of halting our own development. Wood tells of a time when he spoke to some American scientists on the subject, asking them why they continued to develop computers when they also believed in the likelihood of a singularity. They replied: “Better American software that’s out of control than Chinese software that’s out of control.”

Already, around the world, groups of researchers have developed AI to the point where it can think, learn and develop on its own. And yet still very few take the idea of the singularity seriously.

I recently trialled a new computer programme – still in the early stages of development – in which a mass of particles slowly drifted around the screen. They were based on an algorithm that allowed them to act independently, interact with each other and steadily develop through their experiences. Even whilst witnessing this with my own eyes, a large part of my brain still refused to register what was happening. The technology is starting to arrive, but our acceptance of its existence is still a long way off.
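The following is a minimal, hypothetical Python sketch of the kind of agent-based particle system described above – not the actual programme in question – showing how simple rules for independent drift, interaction and accumulated ‘experience’ can produce behaviour that develops over time.

```python
# A minimal, hypothetical sketch of an agent-based particle system of the kind
# described above -- not the actual programme mentioned in the article. Each
# particle drifts independently, 'interacts' when it passes close to another,
# and gradually changes its behaviour as a result of those encounters.

import math
import random

class Particle:
    def __init__(self):
        self.x = random.uniform(0, 100)
        self.y = random.uniform(0, 100)
        self.speed = random.uniform(0.1, 1.0)
        self.encounters = 0  # crude stand-in for accumulated 'experience'

    def drift(self):
        """Take one independent random step, wrapping around a 100x100 world."""
        angle = random.uniform(0, 2 * math.pi)
        self.x = (self.x + self.speed * math.cos(angle)) % 100
        self.y = (self.y + self.speed * math.sin(angle)) % 100

    def interact(self, other):
        """If another particle is nearby, record the encounter and slow down slightly."""
        if math.hypot(self.x - other.x, self.y - other.y) < 5:
            self.encounters += 1
            self.speed = max(0.05, self.speed * 0.95)

particles = [Particle() for _ in range(50)]
for _ in range(500):
    for p in particles:
        p.drift()
    for i, p in enumerate(particles):
        for q in particles[i + 1:]:
            p.interact(q)
            q.interact(p)

print(sum(p.encounters for p in particles), "encounters recorded; average speed now",
      round(sum(p.speed for p in particles) / len(particles), 3))
```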
 
Image credit: Liam Desroy
