Artificial Intelligence – Are you a Singularity Believer or Sceptic?
By andy
Alan Turing’s 1950 paper “Computing Machinery and Intelligence” (in which he proposed the Turing test) was primarily intended as a manifesto for AI. It asked key questions about the level of data processing required for intelligence (perception, empathic behaviours, language and learning).
But how do we come to a personal opinion on the proposition that machines will achieve human-level intelligence? Have you, the reader, ever considered that machines may one day go beyond it? Scientists and academics call that point “the Singularity” – the point in time at which machines become more intelligent than humans, having fully realised artificial general intelligence (AGI).
A definition of AGI is provided by Peter Voss in Ben Goertzel and Cassio Pennachin’s Artificial General Intelligence (Springer’s Cognitive Technologies series, 2007):
General Intelligence comprises the essential, domain-independent skills necessary for machines to acquire a wide range of domain-specific knowledge (data and skills) – i.e. the ability to learn anything (in principle). More specifically, this learning ability needs to be autonomous, goal-directed, and highly adaptive.
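To make Voss’s distinction concrete, the sketch below renders the three properties he names (autonomous, goal-directed, highly adaptive) as a minimal domain-independent interface in Python. It is a hypothetical illustration of mine, not anything from Voss; the class and method names are invented.

```python
# A hypothetical rendering of Voss's definition, not an implementation.
# The skills below are domain-independent; everything domain-specific
# must be acquired through experience.
from abc import ABC, abstractmethod
from typing import Any


class GeneralLearner(ABC):
    """Domain-independent learning skills only; no built-in domain knowledge."""

    @abstractmethod
    def set_goal(self, goal: Any) -> None:
        """Goal-directed: behaviour is organised around externally set goals."""

    @abstractmethod
    def observe(self, experience: Any) -> None:
        """Autonomous: updates its own internal model from raw experience."""

    @abstractmethod
    def act(self, situation: Any) -> Any:
        """Highly adaptive: applies whatever has been learned to novel situations."""
```

The point of the sketch is what is absent: nothing about chess, language or vision appears in the interface, because on Voss’s definition all of that must be learned rather than built in.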
I attended a conference several weeks ago and asked the expert panel when we might reach the Singularity. In 50, 100 or 1,000 years’ time, or never? The panel couldn’t agree, and in fact few of them would offer anything beyond “we don’t know”, which strikes me as unrealistic and shows limited ambition. The notion of the Singularity is so contentious that experts cannot seem to agree whether it will happen, let alone how. Sound familiar? The same could be said of the Brexit negotiations.
Expert Predictions
I believe that although there is no obstacle in principle to the Singularity, the question we must ask is whether it will ever be achieved in practice. AI (like Brexit) seems less promising than many technologists (and likewise politicians) assume, or at the very least hope for. Media reports repeatedly suggest that AI is advancing at an unimaginable rate, well in excess of Moore’s Law. Gordon Moore, who co-founded Intel, observed that the number of transistors on a chip (and with it, roughly, the computing power available per dollar) doubles at a regular interval; his original 1965 estimate of yearly doubling was later revised to every two years. However, Vernor Vinge and Ray Kurzweil insist that our expectations for AI (built on past experience) are near worthless (potentially like Brexit). The laws of physics will defeat Moore’s Law eventually, though, as Margaret Boden suggests, not in the foreseeable future.
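To see why the doubling period matters so much to these headlines, here is a minimal Python sketch comparing the popular “doubles each year” reading with Moore’s revised two-year figure. The baseline of one unit of computing power and the time horizons are illustrative assumptions of mine, not figures from any source.

```python
# Back-of-envelope comparison of two readings of Moore's Law.
# The baseline (1 unit of computing power) and the horizons are
# illustrative assumptions, not figures from any source.

def growth(years: float, doubling_period: float) -> float:
    """Relative computing power after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 50):
    yearly = growth(years, 1.0)    # the popular "doubles each year" reading
    biennial = growth(years, 2.0)  # Moore's revised 1975 observation
    print(f"{years:>2} years: x{yearly:,.0f} (yearly) vs x{biennial:,.0f} (two-yearly)")
```

After 20 years the two readings already differ by a factor of roughly a thousand, which is one reason headline claims about AI “exceeding Moore’s Law” deserve scrutiny.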
Add to this the immense richness and flexibility of the human brain, and the need for excellent psychological, computational and philosophical theories about how it works, and the prospects for human-level artificial general intelligence look very poor indeed.
Consequences of Future Research Funding
I doubt whether governments worldwide will ever find the research funding necessary to get us to the Singularity (the GBP 75m allocated in this Autumn’s Budget will be swallowed up by electricity consumption alone across our UK research facilities in under 12 months). Governments are putting generous resources into brain emulation, but the funding required to build artificial human minds would be greater still by orders of magnitude. On that note, I’d be interested to hear from anyone who can put a figure on the costs of achieving AGI.
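As a rough sanity check on that electricity claim, here is a quick back-of-envelope calculation in Python. The unit price of roughly GBP 0.15 per kWh is my assumption for a commercial tariff, not a figure from the Budget.

```python
# Rough sanity check: how much electricity does GBP 75m buy in a year?
# The unit price (GBP 0.15 per kWh, a commercial tariff) is my
# assumption, not a figure from the Budget.

budget_gbp = 75_000_000
price_per_kwh_gbp = 0.15                  # assumed tariff
hours_per_year = 365 * 24

energy_kwh = budget_gbp / price_per_kwh_gbp         # total energy purchasable
continuous_mw = energy_kwh / hours_per_year / 1000  # sustained load in megawatts

print(f"Energy purchased: {energy_kwh / 1e9:.2f} TWh")  # ~0.50 TWh
print(f"Continuous draw:  {continuous_mw:.0f} MW")      # ~57 MW
```

A sustained draw of around 57 MW is on the order of a few large data centres running flat out, so the claim is at least plausible under these assumptions – and it is a rounding error next to what full-scale AGI research might demand.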
Boden suggests that, thanks to Moore’s Law, further significant AI advances can be expected. But increases in computing power and data availability won’t guarantee human-like AI. Some Singularity believers, and parts of the media, ignore these limitations; for them, the notion that exponential technological advances are rewriting the rules for AI is enough.
I am minded to be a Singularity sceptic because the time predictions and the current state of AI (see my comments above) give more force to the hypothesis that AGI will never happen than to wild speculation that it will. But NEVER is a long time, so sceptics like Boden and myself may be wrong, and believers like Vinge and Kurzweil may yet be proved right.
Conclusion
To understand the path to AGI, we need detailed computational theories of psychological processes as well as neuroscientific data about the brain. Neither yet exists in anything like the required form, so I have to conclude that artificial replication of the human brain, whether as a path to understanding and designing AGI or as our route to the Singularity, is likely to fail. Similarly, we do not yet understand the human brain well enough to emulate it to the level required for artificial superhuman intelligence, the step beyond AGI.
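To give a sense of the scale involved, here is a naive back-of-envelope estimate in Python using the commonly cited figures of roughly 86 billion neurons and 10^14 synapses. The bytes-per-synapse figure is an illustrative assumption of mine, and real emulation would need weights, state and dynamics, not just a static wiring diagram.

```python
# Naive storage estimate for a static human connectome, using commonly
# cited figures (~86 billion neurons, ~1e14 synapses). Bytes-per-synapse
# is an illustrative assumption; real emulation would also need weights,
# state and dynamics, not just the wiring diagram.

neurons = 86e9
synapses = 1e14
bytes_per_synapse = 8   # assumed: packed source and target neuron IDs only

storage_bytes = synapses * bytes_per_synapse
print(f"Connectivity alone: {storage_bytes / 1e15:.1f} PB")       # 0.8 PB
print(f"Average synapses per neuron: {synapses / neurons:,.0f}")  # ~1,163
```

Even this deliberately optimistic lower bound sits at close to a petabyte for the wiring alone, before a single neuron is actually simulated.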
The flip side is that research of this nature may help advance the neurosciences, and it may help AI technologists to develop further practical applications. But it is an illusion to think that by the end of the 21st century we will have fully explained what human intelligence actually is.