AI and THE DEATH OF WORK 2
“Imagination is the product of organic biochemistry. AI is not limited by that.” Yuval Noah Harari
David Barnhizer
Consider, just for a moment, the implications of Harari’s statement that AI systems are not limited by the intricacies of human brain cells. We have been thinking about and evaluating the existing, potential, and evolving capabilities of Artificial Intelligence from the perspective of what a silicon-based technology could possibly do. That, in essence, is the core of the Turing Test. The capabilities of such systems, and with ChatGPT and the like they are very significant, have nonetheless been assumed to be bounded by the limits of human intellectual capability, by what “WE” could do.
The problem with this Turing-style, human-capability mindset, however, is that, as Harari sets out, the limits of human biology are not the limiting factor we thought applied to the advances of AI. The implications of this are set out in the following exchange between Stephen Colbert and Yuval Noah Harari.
The Late Show host Stephen Colbert went far out of his “comfort zone” when he spoke to a brilliant Israeli historian about the future effects Artificial Intelligence will have on human civilization. Colbert oddly argued that he looks forward to a time when society is run by machines instead of people. In one sense I can understand the rhetoric, because it is clear that, independent of our pre-Copernican rhetoric in which we somehow believed that humans were the God-given center of the universe, our actual behavior leaves a great deal to be desired. So even though in a “real world” sense I find Colbert more than a bit “off the wall” and far out of his depth in challenging the Israeli futurist Yuval Noah Harari, author of Homo Deus and Sapiens, Harari’s responses offered great insight, as do his published works.
Are We Going Through a Period of Accelerating Change?
In initiating the interaction, Colbert asked, “Is it real that we are actually going through some sort of accelerating change?” Harari responded in a way that is critical for both educational and work systems. From the perspective of what educational systems should be offering our youth, the implication of Harari’s warning is that we don’t know, but that it will not simply be a repetition of prior educational strategies. The same holds true for our workplaces and for the systems that have been created to prepare workers to succeed.
The fact is that we are in an indeterminate state of limbo in which the shifts are so dramatic that we can’t afford to allow those who have been operating under traditional norms created by the systems of the past to control the processes that determine how we cope with and design the systems of the future. In responding to Colbert’s question about whether we are going through a period of accelerating change, Harari responded:
This is genuinely the case for the current generation, he explained, because while people of the past could not predict invasions or disasters, “the basic stuff of human life, like the basic skills,” remained consistent. For previous generations, “you’re going to need to teach your kids how to plant rice and wheat, how to ride a horse, how to shoot a bow, because it will still be relevant in 20 years.” The present is different, he argued, because “Today nobody has any idea what to teach young people that will still be relevant in 20 years.”
The New AI Systems Can Be Independent of “Us”
“It’s extremely dangerous,” Harari said, “to give power to something we don’t understand.” He added that AI is radically different even from previous world-changing technologies like nuclear weapons or the printing press.
"We made them, but now they become [sic] potentially independent of us," he said. "The one thing to know about AI, the most important thing to know about AI, it’s the first technology in history they can make decisions by itself and can create new ideas by itself. People compare it to the printing press, to the atom bomb. No, it’s completely different.”
Harari described the speed of the technological transformation we are facing:
"AI comes, and within a few years, plays [the intricate game of GO] like no human ever imagined that it was possible to play. This can happen in more and more fields, and we have never encountered anything like that before. Because every previous information technology, it simply copied and disseminated our ideas. The printing press just produced more books. Television just broadcast our thoughts," he said. "Here we have something that can create entirely new ideas which are not even bound by the limits of our imagination. Our imagination is the product of organic biochemistry. AI is not limited by that.” See, “Colbert clashes with guest about AI's future, says he's 'ready for the machines' to be in charge: Colbert declared he is ready for AI that has been 'programmed by fellows with compassion and vision' to steer society.” Alexander Hall, March 5, 2024https://www.foxnews.com/media/colbert-clashes-guest-ais-future-says-ready-machines-charge.
“Imagination is the product of organic biochemistry. AI is not limited by that”
Harari’s insight that human “imagination is the product of organic biochemistry” while “AI is not limited by that” is both simple and profound. For seventy years or so, we have framed the upside potential of AI in terms of the limits and nature of the human mind and thought processes, according to the test fashioned by Alan Turing. The assumption has been that AI would be our “tool,” to apply as we choose. The problem with this human-centered approach is that the systems we are creating are not human, they are not “us,” and they are not “tools” under the ultimate control of their creators.
Nor is it safe or intelligent to use the classic Turing Test of human intelligence levels as the standard of comparison. A number of individuals and researchers working on AI technologies have used the term “Superintelligence” in describing where AI is going. Masayoshi Son, the head of Japan’s SoftBank, predicts that AI systems will in the near future achieve levels of intelligence several orders of magnitude beyond those measured by the IQ tests used to evaluate human intellectual capabilities.
There should be no doubt at this point that we have created “something” more accurately called “Alternative Intelligence” than Artificial Intelligence, and the implications of this for humanity are scary. I’m not going to catalog the many researchers who are voicing fears of such catastrophic results, but if you do even preliminary research on the topic, the future becomes somewhat “unsettling.”
Poorly understood elements are lurking in the depths of AI systems. What some are suggesting about the accelerating evolution of Artificial Intelligence capabilities is already emerging. What is being witnessed by some highly experienced researchers and innovators in AI/Robotics is a situation in which AI systems not only begin to learn from their own experiences but write and rewrite their own algorithmic code. In essence they are creating their own rapidly and unpredictably evolving “machine genome,” going through a wide variety of transmutations at generational speeds much more akin to rapidly mutating viruses than to the far slower changes undergone by human biology.
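For readers who want to see what “generational mutation” looks like in software, here is a minimal sketch of a toy genetic algorithm; the numeric “genome,” fitness target, and mutation rate are my illustrative assumptions, not a description of any actual AI system discussed in this essay.

```python
import random

# A "genome" here is just a list of numbers; fitness rewards genomes whose
# values sum close to a target. Real systems evolve far richer structures.
TARGET, GENES, POP, GENERATIONS = 100.0, 10, 20, 50

def fitness(genome):
    return -abs(sum(genome) - TARGET)  # closer to the target = fitter

def mutate(genome, rate=0.3):
    # Each gene has a chance to shift slightly: the "viral" mutation step.
    return [g + random.uniform(-1, 1) if random.random() < rate else g
            for g in genome]

population = [[random.uniform(0, 20) for _ in range(GENES)] for _ in range(POP)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half, then refill the population with
    # mutated copies of the survivors (a new "generation").
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.3f}")
```

Real evolutionary and self-modifying systems operate on far richer structures than a list of numbers, but this select-mutate-repeat loop is the mechanism the “machine genome” analogy points to.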
What is being created is not sufficiently within our understanding, or even our ability to understand. Nor is it likely to be fully subject to our control. As this takes hold, we, the human race, are in trouble in numerous contexts, including work and the loss of its positive attributes in a healthy society. It is probably best at this point to think about the development of AI as a situation in which we are at a critical point in the process of creating a competitor, or competitors for that matter, that have no reason to think of us as benign, enlightened, or trustworthy.
The less than admirable track record of the human race is something the AI systems will have fully consumed, categorized, and internalized as they absorb vast amounts of information on levels and in forms far beyond human capability. This AI perspective will be gained from an amazing array of sources. Many of those sources will carry context and nuance of a kind the systems are unlikely to understand or appreciate, but the material will still be a part of their awareness.
Let me offer a “tiny” experience from my childhood. I grew up with a wonderful extended family of a kind I doubt still exists. I have described it elsewhere and will not repeat it here. See my Substack post, “All In the Family,” if interested.
Six Years Old and Surrounded by “Dead Guys” in Deep Pits
As that historical blip indicates, my father fought in WWII and remained in the Army in Germany, directing an officers’ club for Americans stationed in that arena of conflict. When I was six he left the Army and returned to the US, and I saw him for the first time. Shortly after he returned, fourteen or fifteen of us were still living at my incredibly wonderful grandparents’ home. One day, I made the “mistake” of looking into the piano bench where Mom would play, and first heard a song that still resonates within me, Frankie Laine’s “Ghost Riders in the Sky.”
The point, however, is deeper than that. One day, my totally independent six-year-old self was sifting through books that had been placed in the bench on which Mom sat to play. In the bench, I found Dad’s 101st Division Annual, with George Patton. It was filled with pictures of tanks, cannons, wonderful heroism, and more. What more could a six-year-old boy ask?
BUT as I leafed through the pages, I found picture after picture of the darkest aspects of the human spirit. There was photo after photo of skeletal men walking around in striped pajamas. There were photos of such men being uncovered in deep pits into which their remains had been discarded, alive or dead. When you looked at those scenes, what was revealed was despair, hopelessness, and, in far too many humans, a heart of darkness.
A six-year-old boy probably should not be exposed to the darkness that lies at the center of many human souls, or to the willingness to submit to that darkness. But, for me, that horrid vision of what humans are capable of is something for which I will be forever thankful. It has driven my life, and I am not going to apologize for any of it or deceive myself about what those who capture power are capable of doing beyond the supposedly rational limits of their humanity. That, in part, is why we can’t allow Harari’s words to be ignored.
“The world isn’t ready, and we aren’t ready”
As if Harari’s warning about what is taking place were not a sufficient alarm, Kevin Roose recently wrote of the concerns voiced by key researchers working for OpenAI. See “OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance,” 6/4/24, https://dnyuz.com/2024/06/04/openai-insiders-warn-of-a-reckless-race-for-dominance/; first published in the New York Times, https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html. Roose explains the alarms being raised by a number of OpenAI’s core employees intimately involved with its technology.
“A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created. The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous. The members say OpenAI … is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can. … In his previous job at an A.I. safety organization, [Daniel Kokotajlo] predicted that A.G.I. might arrive in 2050. … Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years. He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent. … “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote.”
Whether performed by biological cells and neurons, designed into intricate silicon chips, or achieved through quantum dynamics, the idea of self-learning systems that grow and evolve through their own experiences sounds much like the way we would describe the trial-and-error manner in which humans learn from experience and adapt to the stimuli of their environment. This includes creating conceptual and interpretive structures that integrate, assess, and utilize what is learned. The central issue, however, is what they are learning.
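To ground the trial-and-error analogy, here is a minimal sketch of tabular Q-learning, one of the standard reinforcement-learning techniques; the toy “corridor” world and every parameter in it are my illustrative assumptions, far simpler than the systems described in this essay.

```python
import random

# A toy "corridor" world: states 0..4; the agent starts at state 0 and is
# rewarded only for reaching state 4. Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action; reward 1.0 only when the goal is reached."""
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# The Q-table holds the agent's learned estimate of each action's long-term value.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(200):
    state, done = 0, False
    while not done:
        # Trial and error: usually exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = 1 if Q[state][1] >= Q[state][0] else 0
        next_state, reward, done = step(state, action)
        # Learn from the experience (the standard Q-learning update rule).
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # after training, "right" carries the higher value in every state
```

Nothing in this loop is specific to corridors; the same learn-from-experience principle, vastly scaled up, underlies the self-taught game-playing systems discussed below.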
We are talking, however, not about humans but about the rapid development of AI/robotics systems that possess this adaptive learning capability. A report in the MIT Technology Review indicates just how rapidly the efforts are progressing. Will Knight explained six years ago what he saw happening in the context of the DeepMind system:
An AI program trained to navigate through a virtual maze has unexpectedly developed an architecture that resembles the neural “GPS system” found inside a brain. The AI was then able to find its way around the maze with unprecedented skill. … The work, published in the journal Nature, hints at how artificial neural networks, which are themselves inspired by biology, might be used to explore aspects of the brain that remain mysterious. But this idea should be treated with some caution, since there is much we do not know about how the brain works, and since the functioning of artificial neural networks is also often hard to explain. Will Knight, “AI program gets really good at navigation by developing a brain-like GPS system: DeepMind’s neural networks mimic the grid cells found in human brains that help us know where we are,” MIT Technology Review, May 9, 2018.
Several years ago Jeff Hawkins and Donna Dubinsky offered perspectives about the categories into which AI research and machine learning fall. They described three basic approaches, “Classic AI”, “Simple Neural Networks” and “Biological Neural Networks”.
In our view, there are three major approaches to building smart machines. Let’s call these approaches Classic AI, Simple Neural Networks, and Biological Neural Networks. Our feeling is that the term “artificial intelligence” has been used in so many ways that it is now confusing. People use AI to refer to all three approaches… plus others…. The term “machine learning” is a more narrowly defined term for machines that learn from data, including simple neural models such as ANNs and Deep Learning. We use the term “machine intelligence” to refer to machines that learn but are aligned with the Biological Neural Network approach. Although there still is much work ahead of us, we believe the Biological Neural Network approach is the fastest and most direct path to truly intelligent machines. Jeff Hawkins & Donna Dubinsky, “What is Machine Intelligence vs. Machine Learning vs. Deep Learning vs. Artificial Intelligence (AI)?” 1/11/16, http://numenta.com/blog/machine-intelligence-machine-learning-deep-learning-artificial-intelligence.html.
An amazing amount of developmental work has been done in just the few years since Hawkins, Dubinsky, and Knight offered their insights. As Greg Norman writes about OpenAI’s technological mission, that company and numerous others are working on creating “Superintelligence.” See, e.g., Greg Norman, “ChatGPT company OpenAI aiming for 'superintelligence,' as it seeks more Microsoft funding: Microsoft is already investing as much as $10B into Open AI,” 11/13/23, https://www.foxbusiness.com/technology/chatgpt-company-openai-aiming-superintelligence-seeks-more-microsoft-funding.
Sam Altman, the CEO of OpenAI, stated that he is focused on “[h]ow to build superintelligence” and on acquiring the computing power necessary to do so. Companies like IBM describe AGI as having “an intelligence equal to humans” that “would have a self-aware consciousness that has the ability to solve problems, learn, and plan for the future.”
The point is that a host of brilliant and diverse human minds, with varying and even conflicting agendas, are working on the challenges of creating “alternative” intelligence systems that have the ability to process experience. Even in the early phases of that research, as Will Knight observed, those non-human systems were already demonstrating the ability to teach themselves and to use that new and expanding ability to improve and evolve. This ability is moving ahead with amazing rapidity.
As “deep learning” in AI systems improves, we should have no illusions about the ability of such systems to create conceptual structures that continuously improve and expand while treating characteristics we humans think vital as somewhat less than central. This involves the ability to internalize an enormous range of information far beyond a human’s ability to access, develop, process, interpret, and utilize, and to assess that data on criteria of meaning to the “AI Mind” rather than to its ostensible creator. I’m not going to belabor the definitions here, but machine learning, deep learning, and algorithms are explained in greater depth at the following source: https://deeplearning4j.org/neuralnet-overview.html.
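As a concrete, if radically simplified, illustration of what “deep learning” means, here is a minimal sketch of a tiny neural network teaching itself the XOR pattern from four examples; the data, layer sizes, and learning rate are my illustrative assumptions, not anything drawn from the sources cited here.

```python
import numpy as np

# XOR: a pattern no single-layer model can represent, but a small "deep"
# network (one hidden layer) learns it from the four examples alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: the network's current guesses for all four inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (gradient descent): nudge every weight in the
    # direction that reduces the prediction error.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]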
AI systems that engage in “machine learning” or “deep learning,” with the aim on the part of their human designers that they think like humans and teach themselves as they gain experience and data, are being worked on around the world and are evolving rapidly. See Harry Pettit, “The birth of intelligent machines? Scientists create an artificial brain connection that learns like the human mind,” 4/5/17, http://www.dailymail.co.uk/sciencetech/article-4382162/Scientists-create-AI-LEARNS-like-human-mind.html.
As to the ultimate evolutionary levels of AI systems, an early example may help us understand what we face. Only eight years ago, in 2016, a lifetime in terms of “AI years,” a Google DeepMind AI system (AlphaGo) played the complex strategy game Go against Lee Sedol, the world’s top expert, and easily defeated its human opponent. That was only the beginning. One year after defeating the world’s best human Go player, AlphaGo showed its evolution in 2017 by beating five champion players simultaneously. See “DeepMind’s AlphaGo to take on five human players at once: After its game-playing AI beat the best human, Google subsidiary plans to test evolution of technology with Go festival,” https://www.theguardian.com/technology/2017/apr/10/deepminds-alphago-to-take-on-five-human-players-at-once.
An innovator in the Artificial Intelligence field explains why AI is having such a sweeping impact on traditionally human-performed informational work. As Richard Waters reported in the Financial Times:
“We don’t describe what we’re doing as AI — we call it ‘automating human-intensive knowledge work.’” … “Probabilistic techniques are used to ‘train’ machines as they churn through the data, until they are able to see patterns and reach conclusions that were not programmed in at the outset.” Richard Waters, “Investor rush to artificial intelligence is real deal,” Financial Times, 1/4/15, http://www.ft.com/cms/s/2/019b3702-92a2-11e4-a1fd-00144feabdc0.html#ixzz3NxmiiO3Q. Waters made his comments nine years ago, and AI systems have changed amazingly since those “primitive” days.