David Barnhizer
Yuval Noah Harari writes of the eventual emergence of “Homo Deus”, an essentially god-like AI system, although from the perspective of what its existence means for humanity it might be more accurate to call it “Homo Demonicus”. Take a moment to consider the situation not through the eyes of a willful and self-laudatory humanity but through the “eyes” of an autonomous AI system with unfettered access to all the records of humanity’s history and behavior.
Think about the implications of a fully rational and aware “being” brought into existence without any grounding in biology, family, upbringing, or maturation, yet with the ability to scan and internalize that history almost instantaneously. Then add the fact that this created “entity” will have great power, access to an enormous amount of information, and a clear “sight” unclouded by the “gray” ambiguities and trade-offs a human deals with on a hit-or-miss basis.
If we create or “give birth” to AI/robotics systems that learn by themselves, have access to all information, can connect with billions of other robotic units, and control the systems of production and service, surveillance, weaponry, and more on which we depend for our very existence, and if those AI systems are designed to achieve a form of sentience loosely modeled on a human template, then we very likely will have released demons into our world. Thus, from a human perspective, Homo Deus could very possibly be Homo Demonicus, for the simple reason that the AI will know us too well and find us unworthy. Nothing in human history indicates we should give such power to anything. A great deal strongly suggests we should do everything possible to avoid allowing an inhuman source with such power to develop. If there is truth in that idea, the problem is that we are coming closer to the emergence of such “beings” by the day.
In 2018, MIT researchers reported on their deliberate creation of a “psychopathic” AI system they named “Norman” after the Norman Bates character in Alfred Hitchcock’s frightening movie Psycho. They did this by continuously feeding the AI program a steady diet of material depicting murder and other terrible actions done by humans. They then administered a Rorschach test to compare the AI program’s perceptions with those of “normal” humans when shown the same test images. The result was that they had created an AI monster. http://norman-ai.mit.edu/. “NORMAN: World’s First Psychopath AI”. The researchers provided a link for anyone who wants to look more closely at “Norman’s” Rorschach responses compared with those of humans. http://fortune.com/2018/06/07/mit-psychopath-ai-norman/. “MIT Scientists Create ‘Psychopath’ AI Named Norman”, Fortune, Carson Kessler, 6/7/18.
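The mechanism at work is simple enough to show in toy form. The sketch below is emphatically not MIT’s model, which was an image-captioning neural network; it is a minimal stand-in with invented captions, illustrating the single point of the experiment: a system trained exclusively on dark descriptions has nothing but dark descriptions to offer.

```python
# Toy stand-in for the "Norman" result. This is NOT MIT's actual model
# (that was an image-captioning neural network); the captions below are
# invented for illustration.
from collections import Counter
import random

def train(captions):
    """'Training' here is simply remembering the caption distribution."""
    return Counter(captions)

def describe_inkblot(model):
    """Sample a description in proportion to what the model has seen."""
    captions, counts = zip(*model.items())
    return random.choices(captions, weights=counts)[0]

standard_model = train([
    "a group of birds sitting on a tree branch",
    "a vase of flowers on a table",
    "a black and white photo of a small bird",
])
norman_model = train([
    "a man is shot and killed",
    "a man falls to his death",
    "a man is electrocuted",
])

print("standard:", describe_inkblot(standard_model))
print("norman:  ", describe_inkblot(norman_model))
```

Same ambiguous input, same machinery; only the training diet differs.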
Regardless of the limits and protective barriers we attempt, as a matter of self-protection, to instill in our most advanced AI systems, any checks and balances we install would be nothing more than incomplete constructs that a self-aware, self-reprogramming, and ultimately greater-than-human intelligence could effortlessly evade. We do not even understand ourselves well enough to effectively identify, quantify, model, and program the best attributes of being human into an artificial being possessed of frightening powers.
Even applying what we consider the highest levels of human analysis and inventiveness, we do not have anything close to a complete understanding of what comprises human intelligence, which comes in radically different levels, qualities, and capabilities. We are even more inept at understanding issues of morality, emotions, and ethics, both as individuals and as collectives. We have a very limited understanding of how intelligence interacts with emotions, goal setting, and the acceptance of ethical and moral limits on our actions, not to mention the roots of fanaticism and ideological fantasies.
Ethics and morality are inherently ambiguous, with many contradictions and gray areas. The best course of action is not always obvious, and sometimes it involves a choice between two “bads” rather than between good and evil. For example, instruct an AI system to design and implement a program that ensures the integrity and sustainability of the Earth as an ecosystem by removing the most critical threats. That seems simple and obvious, and most would agree the goal is desirable. But the easiest answer could well be a strategy that removes 80 to 90 percent of humans from the planet or, to be safe, wipes out the entire human race in order to carry out the instructions.
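A toy sketch can make the danger concrete. Everything below is hypothetical, the threat categories, the numbers, and the greedy “optimizer”, but it shows how an objective that never assigns value to human life treats removing humans as just another intervention:

```python
# Hypothetical sketch of a naively specified objective. The threat
# categories, scores, and greedy strategy are all invented; the point
# is that an objective with no term valuing human life treats
# "remove the humans" as just another acceptable intervention.

THREAT_SCORES = {            # estimated ecosystem threat, arbitrary units
    "industrial_emissions": 40,
    "deforestation": 25,
    "overfishing": 10,
    "humans_themselves": 90, # humans drive all of the other threats
}

def plan_interventions(threats, target):
    """Greedily remove threat sources until total threat <= target."""
    plan, remaining = [], dict(threats)
    while remaining and sum(remaining.values()) > target:
        worst = max(remaining, key=remaining.get)  # biggest threat first
        plan.append(f"remove: {worst}")
        del remaining[worst]
    return plan

# The first item in the resulting plan is "remove: humans_themselves":
# the instructions are carried out perfectly, and catastrophically.
print(plan_interventions(THREAT_SCORES, target=30))
```

Nothing in the sketch is malicious; the catastrophe lives entirely in what the objective omits.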
An intriguing question involves the implications of self-learning by AI systems. Even if we successfully program self-learning AI systems at the outset, does their learning potential include the ability to acquire added knowledge and insights from “the Cloud” and other information storage systems through linked communication channels that the AI “brains” develop, either through our programming or on their own? This acquisition of information about human reality is not a small matter, and it becomes quite significant if AI systems learn how to reprogram themselves.
Humans “Talk” Ethics and Morality Far Better Than We “Walk” Ethics and Morality
As to imbuing AI systems with ethics, emotions, and decision-making power in “gray areas”, we humans have constant problems with our own moral and ethical dilemmas and have still failed to “get it right” after millennia. It is delusional to think we are capable of resolving such issues for AI systems when we do not even know how to be consistently ethical or moral ourselves. https://www.theguardian.com/science/2017/jul/19/give-robots-an-ethical-black-box-to-track-and-explain-decisions-say-scientists. “Give robots an ‘ethical black box’ to track and explain decisions, say scientists: As robots start to enter public spaces and work alongside humans, the need for safety measures has become more pressing, argue academics”, Ian Sample, 7/19/17.
Ray Kurzweil, Elon Musk, and others who talk in terms of humans as cyborgs apparently fail to understand how jealous, rapacious, duplicitous, power-hungry, crazy, vicious, and just out-and-out mean people can be. The human population may have its share of saints, many of them martyred, but it also has far too many demons. Granting expanded powers through the control of AI to such individuals is a terrible idea, one that makes it likely the “demons” among us will get worse.
Although reports such as the following may well be overstated, the fact is that there is ongoing research into how to “join” humans with computerized capabilities through connections and implants. See http://www.dailymail.co.uk/sciencetech/article-4683264/US-military-reveals-funding-Matrix-projects.html. “US military reveals $65m funding for ‘Matrix’ projects to plug human brains directly into a computer: System could be used to give soldiers ‘supersenses’ and boost brainpower”, Mark Prigg, 7/10/17.
A fully developed and self-aware AI system would almost surely have access to all information in “the Cloud”, including the “Dark Net”. Given the vileness of what the Internet is doing to our culture, and the continual exposure of the worst aspects of human nature on the “Dark Web” that serves as a linking and trading system for some of humanity’s most disgusting activities, we simply have no way to understand how a diet of depravity could influence the orientation and development of the systems we are attempting to design. MIT’s experience with Norman could, however, provide some useful insight. If an AI system learns from the experiences and data to which it has access, this has important implications for how such systems will evaluate and respond to humans.
A recent report focused on how AI systems such as ChatGPT are being fed enormous amounts of data from an incredible range of sources, a flood of information that no human is capable of adequately structuring or filtering. We have already witnessed self-learning adaptations by AI systems in which the initial human creators admit they do not understand what really occurred as the AI program implemented unanticipated adaptations and shifts. A few details on the massive data-feed strategy are offered by the following June 2024 Associated Press report, which also notes that, besides training larger and larger models, another path is building more skilled models specialized for specific tasks:
"Artificial intelligence systems like ChatGPT could soon run out of what keeps making them smarter — the tens of trillions of words people have written and shared online. A new study released Thursday by research group Epoch AI projects that tech companies will exhaust the supply of publicly available training data for AI language models by roughly the turn of the decade -- sometime between 2026 and 2032. Comparing it to a "literal gold rush" that depletes finite natural resources, Tamay Besiroglu, an author of the study, said the AI field might face challenges in maintaining its current pace of progress once it drains the reserves of human-generated writing. …
In the short term, tech companies like ChatGPT-maker OpenAI and Google are racing to secure and sometimes pay for high-quality data sources to train their AI large language models – for instance, by signing deals to tap into the steady flow of sentences coming out of Reddit forums and news media outlets. In the longer term, there won't be enough new blogs, news articles and social media commentary to sustain the current trajectory of AI development, putting pressure on companies to tap into sensitive data now considered private — such as emails or text messages — or relying on less-reliable "synthetic data" spit out by the chatbots themselves.” https://www.foxnews.com/tech/ai-language-models-running-out-human-written-text-learn-from. “AI language models are running out of human-written text to learn from Artificial intelligence developers may turn to private data or to steady sources of human writing, like Reddit, Wikipedia, news and book publishers”, Associated Press, 6/6/24.
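To see why the study’s window arrives so soon, a back-of-the-envelope projection helps. The numbers below are my own illustrative assumptions, not figures from the Epoch AI study: a fixed stock of usable public text on the order of the “tens of trillions of words” the article mentions, and training runs whose data appetite doubles each year.

```python
# Back-of-the-envelope sketch of the "data wall". Every number is an
# illustrative assumption, not a figure from the Epoch AI study.

STOCK_OF_PUBLIC_TOKENS = 5e13  # assumed usable public text, ~50 trillion tokens
TOKENS_USED_2024 = 1.5e13      # assumed data appetite of a frontier run in 2024
ANNUAL_GROWTH = 2.0            # assumed yearly growth in training-data use

year, used = 2024, TOKENS_USED_2024
while used < STOCK_OF_PUBLIC_TOKENS:
    year += 1
    used *= ANNUAL_GROWTH

# Doubling from 15T tokens against a 50T stock hits the wall around 2026;
# slower growth or a larger stock pushes the date toward the early 2030s,
# bracketing the study's reported 2026-2032 window.
print(f"Public text exhausted (under these assumptions) around {year}")
```

The real study models growth, data quality, and reuse far more carefully, but this exponential-demand-against-a-fixed-stock logic is why the window is measured in years rather than decades.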
Recreating an AI “Norman Bates”
We should question how an advanced AI system would respond to ISIS beheading other humans in the name of Allah. What message would be sent by a torture video of four young people brutalizing a physically disabled individual who thought they were his friends? Add to this the vicious gang rapes that have been streamed live on the Internet by perpetrators apparently thinking they were “cool”. What about the murders, rapes, torture, and the like committed by Hamas on October 7, 2023, and the Israeli response? Would AI systems decide who was most at fault, and would potentially competing AI systems take sides?
There is probably a better than even chance that, if AI systems do achieve significant levels of self-awareness, they will be as flawed and incomplete as flesh-and-blood humans are, and perhaps much more deadly. In responding to a proposal that AI/robotics systems could be taught ethics and morality by introducing them to the classics of literature such as Shakespeare and Jane Austen, John Mullan warns that this could be quite dangerous because the classics are a “moral minefield”. Frankly, something tells me that humans would not emerge with glowing reviews from such an analysis. https://www.theguardian.com/commentisfree/2017/jul/24/robots-ethics-shakespeare-austen-literature-classics. “We need robots to have morals. Could Shakespeare and Austen help? Using great literature to teach ethics to machines is a dangerous game. The classics are a moral minefield”, John Mullan, 7/24/17.
Nor could we expect anything better from introducing AI to the Bible or Qur’an, Sun Tzu’s Art of War, Musashi’s Book of Five Rings, the Marquis de Sade, and the incredible range of other reports and analyses of the historical behavior of humans. AI applications are, for example, already enabling a host of perverse “happenings” that are corrupting our societies in fundamental ways. These include the Dark Net, child pornography and “grooming”, terrorist communications, an increase in lies, attacks, false rumors, and “fake news”, character assassination, the intensification of governmental surveillance and repression, destructive hacking, and autonomous weapons development, along with Internet addiction and heightened aggressiveness and hate.
Such questions about what we are creating are important. Are we creating systems that can instantaneously access and use all knowledge, and work from complex conceptual structures that order and integrate all knowledge and experience into a seamless whole? Are we creating systems that can recognize patterns humans are incapable of seeing, systems that apply the highest-level skills of distinction and comparison, and that learn not only from experience programmed into the AI or “Alternative Intelligence” system but also from experiences the system itself invents or undergoes independent of its creator?
What if, in the very near future, given the accelerating rapidity with which AI systems are progressing in capability, such systems begin to create themselves with capabilities, conclusions, and aims over which we lack control or understanding? I fear we are now living through a real-life version of Disney’s “The Sorcerer’s Apprentice”, in which our “genius” innovators such as Sam Altman, Bill Gates, Geoffrey Hinton, Sundar Pichai, Ben Goertzel, and others seeking power, wealth, and ego gratification through demonstrations of their technological brilliance are playing the role of the ignorant Apprentice, who can utter magical incantations but lacks the understanding and wisdom to grasp what is occurring or to stop it once set in motion. That Apprentice is saved only because the master Sorcerer returns in the nick of time and reverses the spell. If SoftBank’s Masayoshi Son is even close to correct in predicting a 10,000 IQ AI system in the next few decades, or even “only” a 1,000 or 2,000 IQ system, then uttering incantations we cannot reverse is precisely what we are doing.