Do Artificial and "Alternative" Intelligence Systems Pose a Threat to Human Existence?
OCT 6, 2024
David Barnhizer
Answer: The only honest answer to the titular question of this analysis is that we don’t know, and neither does anyone else. The answer at this point in 2024 is: could be, maybe yes, maybe no, depends, and probably. But the issue of the potential consequences of “what we have wrought” is real. We are inventing systems that may well evolve far beyond us in capability, represent a new and divergent form of awareness and intelligence “other” than us, and surpass the limits of biological humanity on numerous fronts.
Is AI Humanity’s “Last Invention”?
Oxford’s Nick Bostrom has suggested we may lose control of AI systems sooner than we think. He asserts that our increasing inability to understand what such systems are doing, what they are learning, and how the “AI Mind” works as it develops could inadvertently cause our own destruction. As one analysis explains:
Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon. After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond that of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make.” … “That's how profoundly things could change. But we can't really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations — feelings, even — that we cannot fathom.” “How humans will lose control of artificial intelligence”, 4/2/17. http://theweek.com/articles/689359/how-humans-lose-control-artificial-intelligence.
The above comments were made in 2017. Elon Musk warned in April 2024 that development of artificial intelligence “smarter than the smartest human” would occur “probably by next year, or by 2026.” “If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years,” Musk said.
The dilemma we seem unable to fully comprehend is that we are creating entirely new forms of “alternate” intelligent awareness that many of those intimately involved with the emergence of Artificial Intelligence systems warn will operate according to their own unique “non-human” rules rather than the limits we try to set through algorithms that are inevitably subjective and open to interpretation by increasingly complex, nuanced and powerful AI systems. It is sheer “Pre-Copernican” arrogance on our part, akin to thinking that a mass of minor biological entities such as ourselves sits at the “center of a vast universe” we somehow believe was created just for us. The self-deluding fantasy that we humans are capable of fully understanding or controlling what is evolving is absurd and naive.
“Alternative” Intelligence
The situation is one where, although we are using the “artificial” label with the point of reference being the human mind and its capabilities, what is emerging are alternative forms of intelligence. These will incorporate some aspects of human thought and capability that we initially program into the systems, but will evolve their own unique forms of intellect, perception, goals, choice, and action. “Breaking Down Superintelligence”, a cogent analysis of Nick Bostrom’s book, Superintelligence: Paths, Dangers, Strategies. https://home.ohumanity.org/breaking-down-superintelligence-890e86c59564.
OpenAI, partnering with Microsoft, is a leader in the aggressive pursuit of what is being called “superintelligence”, a system that "would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.”
“OpenAI CEO Sam Altman has revealed he is seeking further financial support from top investor Microsoft as his company – the makers of the popular ChatGPT software – pushes forward with research into "how to build superintelligence," a report says. Microsoft announced earlier this year it will invest as much as $10 billion in OpenAI, extending collaborations between the two companies to include AI supercomputing and research, while enabling both to independently commercialize the resulting advanced AI technologies. Altman told The Financial Times … in an interview that he is focused on researching "how to build superintelligence" and acquiring the computing power necessary to do so. … Companies like IBM describe AGI as having "an intelligence equal to humans" that "would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future." Greg Norman, “ChatGPT company OpenAI aiming for 'superintelligence,' as it seeks more Microsoft funding: Microsoft is already investing as much as $10B into Open AI”, 11/13/23. https://www.foxbusiness.com/technology/chatgpt-company-openai-aiming-superintelligence-seeks-more-microsoft-funding.
2001 Revisited?
As indicated in the 2019 book, The Artificial Intelligence Contagion: Can Democracy Withstand the Imminent Transformation of Work, Wealth, and the Social Order? (Clarity Press) I coauthored with my son Daniel, there are significant challenges to humans “re-inventing” themselves. As suggested above, these include the unwitting invention of new technological species with the potential for developing capabilities far beyond our own.
How soon we forget HAL, the rogue computer in the classic film 2001: A Space Odyssey. In that vein, London’s Guardian reports:
“Dozens of scientists, entrepreneurs and investors involved in the field of artificial intelligence, including Stephen Hawking and Elon Musk, have signed an open letter warning that greater focus is needed on its safety and social benefits. [This comes] amid growing nervousness about the impact on jobs or even humanity’s long-term survival from machines whose intelligence and capabilities could exceed those of the people who created them.” https://www.theguardian.com/technology/2016/may/20/silicon-assassins-condemn-humans-life-useless-artificial-intelligence. The letter voices concerns about AI’s impact not only on jobs but on humanity’s long-term survival. The essential point concerns the dangers created by developing machines with intelligence and capabilities that exceed those of their creators. See “An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence”, The Future of Life Institute. https://futureoflife.org/ai-open-letter/.
The reality is that Artificial Intelligence systems are not simply “another tool” under our control. They are complexly systemic, penetrating deeply into every aspect of our social, political, economic and humanistic worlds, and much more. A serious problem is that, given that virtually all those engaged in AI/robotics and quantum computing research are focused on specific technological problems and opportunities, there is absolutely no reason to think they understand or care about the fuller implications of what they are creating beyond the specific technical, scientific, military, financial and economic dimensions with which they are engaged.
This is tragically akin to the invention of nuclear weapons in WWII, after which the primary researchers lamented what they had created. When Robert Oppenheimer watched the first detonation of a nuclear weapon in 1945, he uttered words from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” Some have argued that his intention was not fully understood and that he meant to express the idea that he was “putting his faith in the Divine” to guide human action. If so, I would argue it was a convenient rationalization of what had been done.
The challenge we face with AI is much worse than with a fixed technology such as nuclear weapons. One difference is that humans retain a significant ability to decide whether to use nuclear weapons, reinforced by the Mutually Assured Destruction of all competing antagonists. With AI, the keys are the complexity, subtlety, unpredictability, and potentially invisible internal awareness of the systems themselves. Various researchers admit we already have gaps and degrees of ignorance about what is actually going on inside AI systems, and algorithms are subject to interpretation as well as unintended interactions. This means we are not going to be in control, and that there is no single nuclear launch “button” or human decision-maker in complete charge of what occurs.
For the moment, AI systems provide a weapon to further the short-term agendas of our power-driven political, economic, and military leaders. They are much like Walt Disney’s “Sorcerer’s Apprentice”, who used his limited understanding of his Master’s magical powers without the required richness of awareness and skill. In doing so, the overconfident and only partially trained Apprentice set a potential disaster in motion. Fortunately, in Disney’s classic, the Master Sorcerer arrived home and exercised his powers to reverse the spell that had been cast. The problem we face with fully developed AI, or Alternative Intelligence, is that there is no “Master” we can expect to arrive in time to save the day.
While it may seem otherwise, I am an optimistic person. Sometimes my students would accuse me of being cynical. I would tell them there is a distinction between being cynical and being realistic, honest, and experienced to the point that you don’t lie to yourself in ways that blind you to what exists and what is unfolding before you. Yet we exist in a world in which our so-called “leaders” continually lie and play political games based on short-term motivations oriented primarily to power, financial benefits, ego, control, and status rather than what is best for the country and culture.
The Incredible Speed of AI Development
Experts are voicing amazement at how quickly what can be called “ordinary” AI is developing compared to the projections offered only three to five years ago. Even more sobering, the incredible AI capabilities we are already witnessing pale in comparison with the potential of what is being referred to as quantum computing. An analysis by Vivek Wadhwa suggests that quantum computing, with a potential estimated to be far beyond existing technologies, is likely to be an even greater threat to humanity than the best AI systems.
If scientists are successful in their quest for quantum computers, the implications are far, far beyond anything we can envision with current digital designs. Several companies, including Google, have at least demonstrated proof of concept in laboratory contexts on extremely limited versions using “Quantum Bits”, or Qubits, as the information handling element of the system. When fully developed, quantum computers will have data handling and processing capabilities far beyond those of current binary systems. When this occurs in a commercialized context, predictions about what will happen to humans and their societies are “off the board”.
Such quantum-mechanics-based systems, which “entangle” qubits in environments near absolute zero, use informational quantum bits that manifest probabilistic capabilities beyond the fixed one-and-zero identities of the digital or binary systems now in use. Researchers indicate that such entangled qubit-based systems will be capable of performing simultaneous computations, processing information billions of times faster and in far more complex ways than the best existing computer. Once this is achieved, or even developed at intermediate steps, those incredible informational capabilities will take us to dimensions we neither understand nor control.
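The standard textbook formalism makes the contrast with binary systems concrete; what follows is a minimal sketch of that notation, offered only as illustration rather than as part of this analysis. A classical bit is fixed at 0 or 1, while a single qubit holds a weighted superposition of both, and a register of n entangled qubits spans 2^n basis states at once:

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
\]

\[
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
\qquad \sum_{x} |c_x|^2 = 1
\]

On this accounting, a register of just 50 qubits already carries 2^50 (roughly 10^15) amplitudes simultaneously, which is the source of the “billions of times faster” claims, though extracting useful answers from those amplitudes remains the hard, unsolved engineering problem.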
“We’re like children playing with a bomb.”
Oxford’s Nick Bostrom warned: “We’re like children playing with a bomb.” “Artificial intelligence: ‘We’re like children playing with a bomb’: Sentient machines are a greater threat to humanity than climate change, according to Oxford philosopher Nick Bostrom”, Tim Adams, 6/12/16. https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine. In an effort to “defuse the AI bomb” against which Bostrom warns, several individuals contributed $20,000,000 to fund analyses of the potential impacts of AI/robotics on human societies and to figure out how to block the worst of the effects. “LinkedIn’s and eBay’s founders are donating $20 million to protect us from artificial intelligence: It’s part of a $27 million fund being managed by MIT and Harvard”, April Glaser, 1/10/17. https://www.recode.net/2017/1/10/14226564/linkedin-ebay-founders-donate-20-million-artificial-intelligence-ai-reid-hoffman-pierre-omidyar. See also “Elon Musk Donates $10M to Keep AI From Turning Evil”, Davey Alba, 1/15/15. https://www.wired.com/2015/01/elon-musk-ai-safety/.
The problem facing those who seek to control the development and use of AI is that breakthroughs in AI/robotics are coming so rapidly, the applications are so diverse, and the innovations are emerging from so many directions and orientations that, when we add the incompatible motivations of the nations and researchers developing the technologies, it will prove difficult, perhaps even impossible, to avoid the consequences.
In that regard, Israeli scholar Yuval Noah Harari’s book on the possibility of what he calls “Homo Deus” is well worth reading. Harari’s work, including Sapiens and Homo Deus, like Nick Bostrom’s, is rich and intriguing, offering a fascinating explanation of important political and philosophical ideas and some brilliant insights into what we are experiencing with Artificial Intelligence and what we face. A useful analysis notes that while evolution for a species like chimpanzees takes millions of years, Harari asks a vital question:
“[W]hat happens when we push down the accelerator and take command of our bodies and brains instead of leaving it to nature? What happens when biotechnology and artificial intelligence merge, allowing us to re-design our species to meet our whims and desires?” [Harari suggests that] “It is very likely, within a century or two, Homo sapiens, as we have known it for thousands of years, will disappear.” “Godlike ‘Homo Deus’ Could Replace Humans as Tech Evolves: What happens when the twin worlds of biotechnology and artificial intelligence merge, allowing us to re-design our species to meet our whims and desires?”, 5/31/17. https://www.nbcnews.com/mach/technology/godlike-homo-deus-could-replace-humans-tech-evolves-n757971.
One irony is that while we tend to speak of AI as if it were a unified phenomenon, it is almost guaranteed that there will be radically independent and differing versions. If a diverse set of AI systems is developed in different locations, for different purposes, within distinct cultures, and with cultural variations on human emotions and behaviors, we could end up with AI “superminds” at war with each other. A possible small example is found in the context of “warring bots” seeking to get their analyses onto Wikipedia: “Over time, the encyclopedia’s software robots can become locked in combat, undoing each other’s edits and changing links, say researchers.” “Study reveals bot-on-bot editing wars raging on Wikipedia's pages”, Ian Sample, 2/23/17. https://www.theguardian.com/technology/2017/feb/23/wikipedia-bot-editing-war-study.
There is probably a better than even chance that, if AI systems do achieve significant levels of self-awareness, they will be as flawed and incomplete as flesh-and-blood humans, and perhaps even more deadly. In responding to a proposal that AI/robotics systems could be taught ethics and morality by introducing them to the classics of literature such as Shakespeare and Jane Austen, John Mullan warns that this could be quite dangerous because the classics are a “moral minefield”. What conclusions and “AI Ethics” concerning appropriate behavior might the AI systems perceive or even adopt?
Nor could we expect anything better from introducing AI to the Bible or Qur’an, Sun Tzu’s Art of War, Musashi’s Book of Five Rings, the Marquis de Sade, and an incredible range of other reports and analyses of the historical behavior of humans. AI applications are, for example, already enabling a host of perverse “happenings” that are corrupting our societies in fundamental ways. These include the Dark Net, child pornography and “grooming”, terrorist communications, an increase in lies, attacks, false rumors and “fake news”, character assassinations, the intensification of governmental surveillance and repression, destructive hacking, and autonomous weapons development, as well as Internet addictions and heightened aggressiveness and hate.
Frankly, something tells me that humans would not emerge with glowing reviews from such an analysis. “We need robots to have morals. Could Shakespeare and Austen help? Using great literature to teach ethics to machines is a dangerous game. The classics are a moral minefield”, John Mullan, 7/24/17. https://www.theguardian.com/commentisfree/2017/jul/24/robots-ethics-shakespeare-austen-literature-classics.
Is AI an “Existential” Threat?
One possible “ultimate” consequence of AI/robotics is what intellectual leaders such as Stephen Hawking, Elon Musk, Bill Gates, Harari and Bostrom have described as the “existential threat” to the survival of the human race. This is something that has led to a “battle of the billionaires”. Musk has repeatedly warned about what he sees as the serious consequences arising from Artificial Intelligence, while Facebook’s Mark Zuckerberg argues that AI is a great thing and Musk is being irresponsibly negative. “Mark Zuckerberg says we’ll be plugged into ‘The Matrix’ within 50 years: Tech titan claims computers will soon be able to read our minds and beam our thoughts straight onto Facebook”, Jasper Hamill. https://www.thesun.co.uk/news/techandscience/1287163/mark-zuckerberg-says-well-be-plugged-into-the-matrix-within-50-years/. One promising consideration is that Zuckerberg’s efforts to create an immersive universe have largely fallen flat, and in several instances been abused.
Musk responded that Zuckerberg really doesn’t know very much about AI and doesn’t understand what he is talking about. Musk states: "I have exposure to the most cutting edge AI, and I think people should be really concerned by it. … AI is a fundamental risk to the existence of human civilization." For the interchange see “Elon Musk: Facebook CEO Mark Zuckerberg's knowledge of A.I.'s future is 'limited'”, Arjun Kharpal, 7/25/17. http://www.cnbc.com/2017/07/25/elon-musk-mark-zuckerberg-ai-knowledge-limited.html. See also “Facebook CEO Mark Zuckerberg: Elon Musk’s doomsday AI predictions are ‘pretty irresponsible’”, Catherine Clifford, 7/24/17. http://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html.
It is helpful to keep in mind that Musk purchased a significant share in the British company DeepMind, which is on the “cutting edge” of AI technology, indicating he did so due to fears about the consequences of Artificial Intelligence and a desire to understand the nature, speed and scale of its development. He has also created xAI as a direct challenge to OpenAI, the maker of ChatGPT, which he helped co-found but considers to have deviated from its original mission of being primarily focused on enriching the human experience.
Musk has also been taken to task by the head of Google’s AI development program. That individual’s concerns should be weighed against his role in advancing Google’s massive commitment to developing AI, and the fact that he speculates about what “should” occur if you assume AI/robotics is a solely benign development. The problem is that technological developments always end up being used for a wide variety of unintended purposes.