The end of the human race could come about through several plausible scenarios, and that is not even the saddest part. What if some of those scenarios are within our control? A collision with an asteroid or obliteration by an energy burst from a quasar could not be anyone's fault, but we would have no excuse if Artificial Intelligence killed us all one day, because, ironically, it is humanity's own invention.
Critics of this idea will argue that humans would program highly intelligent robots to always obey humans, and let's hope they're right. But we shouldn't forget that humans differ from one another and frequently come into conflict. Robots could therefore be drawn into those conflicts, defending their owners and eventually attacking other people.
Dr. Russell’s new book warns us about AI
Stuart Russell, a computer scientist and distinguished AI researcher at UC Berkeley, warns that a fundamental flaw in the "standard model" of AI could prove lethal to all of humanity. He lays out this conclusion, along with other reasons why AI is so dangerous and careful explanations of each, in his recently published book "Human Compatible: Artificial Intelligence and the Problem of Control". The book is also written for readers who want to understand the history of AI's development.
When Dr. Russell was asked who should read his book, he answered:
“I think everyone, because everyone is going to be affected by this. As progress occurs towards human level (AI), each big step is going to magnify the impact by another factor of 10, or another factor of 100. Everyone’s life is going to be radically affected by this. People need to understand it. More specifically, it would be policymakers, the people who run the large companies like Google and Amazon, and people in AI, related disciplines, like control theory, cognitive science and so on.
My basic view was so much of this debate is going on without any understanding of what AI is. It's just this magic potion that will make things intelligent. And in these debates, people don't understand the building blocks, how it fits together, how it works, how you make an intelligent system. So chapter two (of Human Compatible) was sort of mammoth, and some people said, 'Oh, this is too much to get through,' and others said, 'No, you absolutely have to keep it.' So I compromised and put the pedagogical stuff in the appendices."
The good news, however, is that we are still quite far from any apocalyptic turn of events caused by AI, and we should be able to prevent one. But first we need to fully understand how AI works and gain as much knowledge as possible, because knowledge is power.