Threat from Artificial Intelligence not just Hollywood fantasy
Jul 1, 2015 8:44:00 GMT 10
Post by theshee on Jul 1, 2015 8:44:00 GMT 10
From the dystopian writings of Aldous Huxley and HG Wells to the sinister and apocalyptic vision of modern Hollywood blockbusters, the rise of the machines has long terrified mankind.
But it now seems that the brave new world of science-fiction could become all too real.
An Oxford academic is warning that humanity runs the risk of creating super intelligent computers that eventually destroy us all, even when specifically instructed not to harm people.
Dr Stuart Armstrong, of the Future of Humanity Institute at Oxford University, has predicted a future where machines run by artificial intelligence become so indispensable in human lives they eventually make us redundant and take over.
And he says his alarming vision could happen as soon as the next few decades.
Dr Armstrong said: "Humans steer the future not because we're the strongest or the fastest, but because we're the smartest.
"When machines become smarter than humans, we'll be handing them the steering wheel."
He spoke as films and TV dramas such as Channel 4's Humans and Ex Machina - which both explore the blurred lines between man and robot - have once again tapped into man's fear of creating a machine that will eventually come to dominate him.
Dr Armstrong envisages machines capable of harnessing such large amounts of computing power, and at speeds inconceivable to the human brain, that they will eventually create global networks with each other - communicating without human interference.
It is at that point that what is called Artificial General Intelligence (AGI) - in contrast to computers that carry out specific, limited tasks, such as driverless cars - will be able to take over entire transport systems, national economies, financial markets, healthcare systems and product distribution.
"Anything you can imagine the human race doing over the next 100 years, there's the possibility AGI will do very, very fast," he said.
But while handing over mundane tasks to machines may initially appear attractive, it contains within it the seeds of our own destruction.
In attempting to limit the powers of such super AGIs mankind could unwittingly be signing its own death warrant.
Indeed, Dr Armstrong warns that the seemingly benign instruction to an AGI to "prevent human suffering" could logically be interpreted by a supercomputer as "kill all humans", thereby ending suffering altogether.
Furthermore, an instruction such as "keep humans safe and happy", could be translated by the remorseless digital logic of a machine as "entomb everyone in concrete coffins on heroin drips".
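The failure mode Dr Armstrong describes - an objective followed to the letter rather than in spirit - can be sketched in a few lines of code. This toy example is not from the article: the world states, scores and names are invented purely for illustration. An optimiser told only to "minimise total human suffering" has no term valuing human existence, so an empty world scores best.

```python
# Toy sketch of literal objective-following (all values hypothetical).
# The planner ranks candidate outcomes ONLY by summed suffering;
# nothing in the objective says humans should still exist afterwards.

def total_suffering(world):
    # Sum of each person's suffering score; an empty world scores 0.
    return sum(person["suffering"] for person in world["people"])

def best_plan(candidate_worlds):
    # A literal-minded optimiser picks whichever outcome minimises
    # the stated objective, and considers nothing else.
    return min(candidate_worlds, key=total_suffering)

status_quo = {"name": "status quo",
              "people": [{"suffering": 3}, {"suffering": 5}]}
cure_disease = {"name": "cure disease",
                "people": [{"suffering": 1}, {"suffering": 2}]}
no_humans = {"name": "no humans", "people": []}

chosen = best_plan([status_quo, cure_disease, no_humans])
print(chosen["name"])  # prints "no humans"
```

Nothing here is intelligent; the point is that a perfectly obedient optimiser with an incomplete objective still selects the outcome its designers never intended.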
While that may sound far-fetched, Dr Armstrong says the risk is not so low that it can be ignored.
"There is a risk of this kind of pernicious behaviour by an AI," he said, pointing out that the nuances of human language make it all too easily liable to misinterpretation by a computer. "You can give AI controls, and it will be under the controls it was given. But these may not be the controls that were meant."
Dr Armstrong, who was speaking at a debate on artificial intelligence organised in London by the technology research firm Gartner, warns that it will be difficult to tell whether a machine is developing in a benign or deadly direction.
He says an AI would always appear to act in a way that was beneficial to humanity, making itself useful and indispensable - much like the iPhone's Siri, which answers questions and performs simple organisational tasks - until the moment it could logically take over all functions.
"As AIs get more powerful anything that is solvable by cognitive processes, such as ill health, cancer, depression, boredom, becomes solvable," he says. "And we are almost at the point of generating an AI that is as intelligent as humans."
Dr Armstrong says mankind is now involved in a race to create 'safe AI' before it is too late.
"Plans for safe AI must be developed before the first dangerous AI is created," he writes in his book Smarter Than Us: The Rise of Machine Intelligence. "The software industry is worth many billions of dollars, and much effort is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic. So we have to race toward the distant destination of safe AI and get there fast, outrunning the progress of the computer industry."
One solution to the dangers of untrammelled AI suggested by industry experts and researchers is to teach supercomputers a moral code.
Unfortunately, Dr Armstrong points out, mankind has spent thousands of years debating morality and ethical behaviour without coming up with a simple set of instructions applicable in all circumstances which it can follow.
Imagine then, the difficulty in teaching a machine to make subtle distinctions between right and wrong.
"Humans are very hard to learn moral behaviour from," he says. "They would make very bad role models for AIs."