Advances in the field of Artificial Intelligence (AI) have seen the technology become incorporated into virtually all the software we interact with. Computers have become so powerful that some believe they may soon match, or even surpass, human thinking. This has led computer scientists, physicists, and philosophers alike to debate whether a machine could ever be truly conscious.
Alan Turing, a founding figure of computer science, once posited that a machine could be considered intelligent when its responses cannot be distinguished from a human's. Owing to recent developments in AI, that moment could be at hand. Now we are beginning to wonder whether an artificial intelligence could develop self-awareness — a consciousness, so to speak.
If machines can outthink humans, then perhaps machines could also plan, frame purposes for their decisions, offer insight drawn from experience and memory, and even form relationships. It is an interesting theory, but is it possible?
With enough advancement in neuroscience and in our understanding of the human brain, one could, in principle, simulate its functions. It is important to remember, however, that even if one maps out the entire human brain and creates a digital simulation of its inner workings, one has not reproduced the brain itself. A series of ones and zeros does not a mind make.
There are two schools of thought on the matter. Some experts in the field believe consciousness is only a matter of mapping the brain, understanding the function of its neural networks, and then developing a digital simulation.
Other experts believe that AI in its current form cannot achieve the functions of the mind — that is, it cannot have a subjective perspective or be self-aware. They term today's systems 'Weak AI' and reckon it would take a 'Strong AI' to achieve true consciousness. Current AI can simulate, and probably outperform, the computational functions of the brain, such as calculation and pattern recognition, but it cannot hold a personal opinion.
“…such machines, though helpless by themselves, may be used by a human being or a block of human beings to increase their control over the rest of the race or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically.” – Norbert Wiener, in his book The Human Use of Human Beings, 1950
So an AI-instigated apocalypse may not be on the cards for humanity, as Elon Musk and Stephen Hawking have warned. Instead, humanity stands to have AI complement and augment how we interact with the world around us and how we govern our lives. We may be more in danger of becoming inextricably dependent on AI technology than of being wiped out by it.
The more immediate danger, then, may be individuals or groups harnessing the technology to manipulate processes and group decisions for their own ends.
Stephen Bleher, in an article for Becoming Human, writes that this might not be a failure by machines to achieve consciousness. Instead, he offers that machines might develop, or already have, their own kind of consciousness. He challenges humanity to come to a better understanding of what consciousness is, and suggests that AI might help us get to some of those answers.