The point at which machine intelligence surpasses human problem-solving abilities is often referred to as the ‘technological singularity’ (because, like the event horizon of a black hole, it is a point beyond which we can gather no meaningful information about the future – who can guess what super-intelligence will bring or where it will lead?).
Some of the world’s greatest minds, including Professor Stephen Hawking, have warned that the creation of strong artificial intelligence (AI) may become humanity’s greatest achievement – but, Hawking suggested, it may also be our last.
But despite all of this publicity and the grave warnings, few people – and no governments – are exercised about the coming of super-intelligent beings on this planet. There are no United Nations panels or committees studying the subject, there are no political parties promising to stop the rise of the machines, and there are no social movements of modern-day Luddites dedicated to preventing such a machine-led future.
Almost 30 years ago I asked in one of my books whether super-intelligent machines would become our slaves or our masters; whether they would become our ‘companions on Earth’ or our successor species on this planet.
Although the topic is always a hot subject for discussion when I lecture to educated and informed business and academic audiences, among the general public these ideas and questions produce only bewilderment and, inevitably, derision.
There is a metaphor which I find helps people consider this topic:
Imagine that a couple of years ago the United Nations, the US Government and the Chinese leadership had jointly announced that radio signals had been received on Earth that appeared to come from an alien civilization located in a planetary system only 30 light years away. These radio signals were intelligible because they were broadcast in 20 of the world’s major languages.
After exhaustive investigation the authorities had concluded that these radio signals were genuine and had, indeed, reached Earth from a fairly proximate star system containing Earth-like exoplanets capable of supporting life.
The radio signals contained a greeting and the information that, having received accidental radio transmissions from planet Earth, the beings of the nearby planetary system had dispatched an expedition to visit us. The signals revealed that the aliens expected to arrive at planet Earth in January 2051. The final part of the message (as received) read: “We come in peace.”
HOW WOULD THE WORLD HAVE REACTED?
How would the public have felt? Would some scientists and politicians be warning us that if these aliens are able to travel at close to light speed to visit us their technology must be far, far ahead of ours? Would they be warning us that their peaceful intentions should not be taken at face value?
Would there be United Nations committees and panels established to consider how best to welcome (or repel) these alien visitors? Would governments be frantically examining their weaponry to see how best they might deter or fight the aliens if they turned out to be hostile?
You bet! All of these things would be happening and more. It would be THE subject of the moment which just wouldn’t go away.
And yet this is precisely what the arrival of super-intelligent machines means for our species. It means the arrival of an alien intelligence in our midst: a visitation that, if allowed to go ahead without control, will quickly outstrip all human capability, one which will self-reproduce and one which has the potential to become our successor species.
But how could the worldwide development of super-intelligent AI be controlled? Would it require the equivalent of the Nuclear Non-Proliferation Treaty? And would nations sign up to such a treaty? After all, the development of strong AI promises enormous riches as superior intelligence and ubiquitous, versatile robots start to create wealth from machine labour and AI-driven innovation.
This question of how to control strong AI is, by far, the most important issue humankind has ever faced.