Tuesday, 9 June 2015


A Special Report by Bob Bluffield

Those old enough to remember will recall Stanley Kubrick's sensational 1968 film 2001: A Space Odyssey, adapted from Arthur C Clarke's short story The Sentinel. The theme revolves around human beings encountering black monoliths that affect human evolution. The storyline follows two astronauts on their way to Jupiter aboard Discovery One, a spacecraft under the control of HAL 9000, a computer that has been ordered to withhold vital information about the mission from the crew. HAL 9000 eventually breaks down in an 'acute emotional crisis', unable to come to terms with his own fallibility. Critics have compared the artificial intelligence (AI) of HAL 9000 with the threat AI already poses to humanity, as it is deployed in computers and robots that could be programmed with a superintelligence far superior to that of our own species.

The idea that scientists will be able to develop robots with artificial intelligence capable of reproducing themselves ad infinitum may sound fanciful, but the threat of machines so powerful that they could endanger the very existence of the human race is in fact very real, and might be only a few decades away.

In a powerful book published last year, Nick Bostrom, an eminent professor at the Oxford Martin School and Director of the Programme on the Impacts of Future Technology at Oxford University, argues that AI is the most important issue the human race has ever faced. In Superintelligence: Paths, Dangers, Strategies, Professor Bostrom offers a compelling warning of the dangers. He notes that one only has to compare the cleverer human brain to the brains of animals to realise that we have capabilities other creatures lack; animals, in turn, are equipped with stronger muscles or sharper teeth and claws than we possess. All of this fades into insignificance once AI machines can be created that surpass the intelligence and strength of man and even the strongest mammals. Take the gorilla, for example: for the species to survive, gorillas now depend more on us humans than on other gorillas. In the same way, if left unchecked, the fate of the human species would depend on the actions of a machine superintelligence. Before you dismiss this as hyperbole, you should first take the time to read Professor Bostrom's book, and also consider the warnings being uttered by some of the world's most respected scientists.

One such is the distinguished theoretical physicist Professor Stephen Hawking, who has issued a stark warning that if scientists progress in creating thinking machines, this could lead to a shocking Doomsday situation. Despite his warning, Hawking agrees that some forms of primitive artificial intelligence have proved useful. This includes a technology developed by the British company SwiftKey (the creators of keyboard software for iPhone and Android), which has designed a system to predict what Hawking wants to say by suggesting the words he may want to use next. By typing in around 15-20 per cent of what he wants to say, the software predicts the rest. But, according to Hawking, this technology still comes with the familiar American electronic accent that makes the professor sound robotic. With his usual wicked sense of humour, Hawking jokes that he does not see this as a downside, because it has become his trademark and he "wouldn't change it for a more natural voice with a British accent".
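As an aside, the kind of word prediction Hawking's system performs can be illustrated in miniature. The sketch below is a toy bigram model in Python, invented purely for illustration (SwiftKey's actual software is far more sophisticated): it counts which words have followed which in past text, then suggests the most frequent followers of the word just typed.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a user's past writing (invented for illustration)
corpus = "the universe is expanding and the universe is vast and the stars are vast".split()

# Count bigrams: for each word, how often each other word follows it
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word, k=3):
    """Suggest the k words most often seen after `word` in the corpus."""
    return [w for w, _ in following[word].most_common(k)]

print(predict_next("the"))  # → ['universe', 'stars']
print(predict_next("is"))   # → ['expanding', 'vast']
```

Real predictive keyboards use far richer statistical models trained on vastly more text, but the principle of ranking likely continuations is the same.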

Elon Musk, the millionaire founder of PayPal, SpaceX and Tesla electric cars, describes the threat as a real-life scenario pitting Terminator-style robots against mankind that is "... more dangerous than nukes". He did not mince his words in describing the creation of artificial intelligence as "... like summoning the demon", adding: "If I had to guess what the biggest threat to our existence is, it's probably artificial intelligence". He believes fictional depictions of AI, such as the lethal computer HAL 9000 in 2001: A Space Odyssey and the robotic child David in Steven Spielberg's 2001 film A.I. Artificial Intelligence, would be like a "puppy dog" in comparison to the power and threats we are likely to face from real, self-aware AI. Musk again: "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure we don't do something very foolish".

Yet, according to Ray Kurzweil, Google's director of engineering, by 2045 artificial intelligence will be here, and 'mind uploads' will herald immortality in a world of super-human machines. He should know, because Google is showing no signs of slowing its rapid acquisition of companies, including those at the sharp end of AI. Wikipedia states that Google has bought 'on average more than one company per week since 2010', 178 in all since February 2001. In the two years up to February of last year, according to CBS News, it had spent 'a staggering $17 billion US on acquisitions'. It also has a secretive lab, known variously as Google X Lab, Google X or Google (x), that experiments with ambitious future technologies. There are claims too that Google is secretly working to develop robots that use artificial intelligence to "... make a large, positive impact on society". Sources within the Google hierarchy are alleged to have said the company is aiming to become "AI complete" by producing machines as intelligent as a human brain. The company's founders, Larry Page and Sergey Brin, have both expressed positive views on AI and have publicly stated their aim for Google to be "... artificially intelligent so that it understands exactly what information we are seeking so that it can be interfaced directly with our brains." Larry Page was quoted as saying: "Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the Web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on". This will come as quite a worrying thought for most of us, especially as Page made this statement not recently ... but 15 years ago!

And Google is not alone. In February, Bloomberg Business suggested that 'a dozen start-ups' are 'now forming a mini-boom in AI'. The report goes on to say: 'After two decades of the field suffering from scant research funding and little corporate attention, a rebirth is being spurred by interest from Google, Facebook Inc, Amazon.com and others, with Alibaba Group Holding Ltd chairman Jack Ma saying that the Chinese e-commerce company will invest significantly in the area'.

Research into artificial intelligence dates back to the 1960s, particularly its use in military equipment and ordnance, as well as in security systems. If you think the theories of leading scientists on the dangers of AI are still 'pie in the sky', then consider some of the innovations that have already been introduced. These include computers that can beat human beings at chess, driverless cars, eyeglasses that provide a head-up display, and Samsung televisions within our homes that capture our voice commands and transmit conversations to third parties.

Dr Stuart Armstrong of Oxford's Future of Humanity Institute has said that "Predicting artificial intelligence is hard", but warns: "... they might be extremely alien. They might have tastes completely incomprehensible to us". This really is a frightening scenario, because it implies that AI-programmed machines might turn against us! A similar opinion has been voiced by Professor Stephen Hawking, who said AI could spell the "... end of the human race", whilst Microsoft founder Bill Gates confirmed that he is "... in the camp that is concerned about super intelligence". He added: "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk on this and don't understand why people are not concerned". Hawking described the threat from artificial intelligence by commenting that "... it would take off on its own, and re-design itself at an ever increasing rate ... humans, who are limited by slow biological evolution, couldn't compete, and would be superseded".

But not everyone agrees. Rollo Carpenter, the British creator of Cleverbot, has said: "I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world's problems will be realised". Cleverbot's software has been developed to learn from its past conversations, and during the Turing test* it deceived people into thinking they were having a conversation with another human being rather than a machine. Carpenter believes we are still a long way from developing the algorithms needed to create full artificial intelligence, though he agrees it will be with us within the next few decades. If AI is not to destroy human lives, the world's authorities must find a way of controlling it and maintaining a balance; the worrying factor will always be the devastation the science of AI could cause were the technology to fall into the hands of a rogue state or terrorist organisation.
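To make 'learning from past conversations' concrete, here is a minimal retrieval-style chatbot sketch in Python. It stores past exchanges and replies with the response whose recorded prompt shares the most words with the new input. This is an invented illustration of the general idea only, not Cleverbot's actual algorithm, and the sample exchanges are made up.

```python
# A chatbot that "learns" by remembering past exchanges and replying with the
# stored response whose prompt best matches the new input (by word overlap).

memory = []  # list of (prompt_words, reply) pairs from past conversations

def learn(prompt, reply):
    """Record one past exchange."""
    memory.append((prompt.lower().split(), reply))

def respond(text, fallback="Tell me more."):
    """Return the reply whose stored prompt shares the most words with `text`."""
    words = set(text.lower().split())
    best_reply, best_score = fallback, 0
    for prompt_words, reply in memory:
        score = len(words & set(prompt_words))
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply

learn("hello there", "Hi! How are you?")
learn("do you like music", "I enjoy all kinds of music.")
print(respond("hello"))                    # → Hi! How are you?
print(respond("what music do you like"))   # → I enjoy all kinds of music.
```

The more conversations such a system records, the more plausible its replies become, which is how a purely mechanical matching process can begin to pass for conversation.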

* The Turing test is a means of testing a machine's ability to exhibit intelligent behaviour equivalent to or indistinguishable from that of a human.
