The world is awash with AI these days. ChatGPT grabbed the headlines last year. With Microsoft’s one-billion-dollar investment in OpenAI, its Bing Copilot, and Google’s Gemini, you use AI every time you search the web. If you are on LinkedIn (a Microsoft company), every other post is about AI and its use in big business, and AI is embedded in the LinkedIn platform itself—for job recommendations, image credits, people you may know, profile building, and more. Meta’s Facebook and its Messenger, Instagram, and WhatsApp suite of services use AI to analyze every picture and video you look at and every “like” you click or post you make, all under the banner of improving your “experience.” But from a business point of view, for Meta and all social media companies, the data and the AI-driven analytics serve two ends: selling commercial advertisements, and mining and monetizing user data for additional revenue. Every social media app and platform uses AI.

Every major business uses AI for core business purposes and for running the business: choosing or changing the core business model; analyzing and charting strategy; operating and securing the enterprise; solving and accelerating the ever-changing needs of the IT infrastructure, both fixed and on-demand, using public/private and on-premise/off-premise cloud computing and edge computing; developing new products and services, including advancing the Internet of Things (IoT); managing the ever more complex global 24/7-365 suite of business and customer-interface applications; and analyzing, managing, and performing user support (inside the business and for its customers) to reduce errors and “enhance the experience.”

In the world of healthcare and Big Pharma, AI has become part of the core business of research and management: e.g., in diagnostics and robotics, and in the research and discovery of new drugs such as the COVID-19 mRNA vaccines. Generative AI is the fastest-growing tool in every pharmaceutical company in the world. And AI is sweeping the worlds of agriculture, logistics and supply chain management, law, national security, geopolitics, and warfare. It is simply everywhere, all the time. It is in every sphere of life: media, universities (especially in research programs and scholarly work), economic forecasting and central banks, and every government and defense department in the world. And every entity that uses AI for its own positive purposes—social, economic, political, defense, and so on—must protect itself from bad actors who use AI for nefarious purposes.
In the 1940s, mathematicians and pioneers of modern computing Alan Turing (the famed WWII codebreaker) and John von Neumann came to believe that computers and the human brain were analogous. Given this analogy, it seemed obvious to them that “human intelligence could be replicated in computer programs.” (Melanie Mitchell, Artificial Intelligence (2019))
In 1956, Dartmouth mathematics professor John McCarthy and some close friends and colleagues convened a two-month, ten-man workshop to explore creating a thinking machine. McCarthy said he needed a name for the purpose of the workshop, so he coined the term “Artificial Intelligence,” though he later admitted that his originally convening colleagues—Marvin Minsky, Claude Shannon (the inventor of information theory), Oliver Selfridge, Ray Solomonoff, and Peter Milner—didn’t like the term. The reason, according to McCarthy’s biographical memoir, was that the actual goal was to create a machine with genuine intelligence, not artificial intelligence. With that goal in mind, and ever since, the raison d’être of AI research has been to achieve machine intelligence equal to or exceeding human intelligence.
So, what exactly is AI? AI is a complex system of hardware and software technologies built to compile and analyze extremely large data sets very quickly, using complicated algorithms[1] to produce pattern-based predictive and “creative” guess outcomes, “insights,” and actions (as in robotic and automated physical machine systems designed to mimic human behavior). AI systems are built on how neuroscientists understand the brain to work. Here’s how: neurons receive a vast array of perceptions (sometimes unconscious) that trigger electrical and chemical inputs connected by synapses. The neuron weighs the inputs, and when they reach a “threshold,” the synapse fires. Synapses have different strengths that create stronger and weaker connections, so certain inputs get greater or lesser weight. “Neuroscientists believe that adjustments to the strength of connections between neurons is a key part of how learning takes place.” (Mitchell, 24) From early on, AI systems were built on this understanding with a similar but simplified structure using what were called “perceptrons”: in this model, a perceptron receives an array of inputs with different assigned (programmed) weights; when their weighted sum reaches an assigned (programmed) threshold, the output, yes or no (a 1 or a 0), renders the “decision.” Though some details have changed over time, this remains the basic conceptual model. Repeating this process over and over again using algorithms, the computer “learns.” This is what generative AI does with what are called large language models (LLMs) and neural network models (Deep Learning). This method is intended to imitate human thought, cognition, learning, reasoning, and creativity, based on prevailing theories of how the human brain, cognition, and imagination work as a complex network of physical and biological processes.
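As a minimal sketch of that perceptron model (the inputs, weights, and threshold below are invented for illustration, not values from any actual system), the decision step looks like this in code:

# A minimal perceptron: weighted inputs compared against a threshold.
# The inputs, weights, and threshold are invented for illustration.

def perceptron(inputs, weights, threshold):
    """Return 1 ("fire") if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three on/off input signals with different assigned "synaptic" weights.
inputs = [1, 0, 1]
weights = [0.6, 0.9, 0.4]
print(perceptron(inputs, weights, threshold=0.8))  # -> 1, since 0.6 + 0.4 = 1.0 >= 0.8

“Learning,” in this scheme, is the repeated adjustment of those weights until the outputs match the desired decisions: the programmatic analogue of strengthening and weakening synaptic connections.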
A lot has happened in the world of computing and AI since 1956. But suffice it to say, here is where we are today. IBM breaks it down this way. There are three kinds of AI: Artificial Narrow AI; General (or Strong) AI; and Super AI. Only the first, Artificial Narrow AI, exists today. The other two remain part of the dream McCarthy and company imagined all those years ago.
Within these three kinds, IBM identifies four functional types: Reactive; Limited Memory; Theory of Mind; and Self-Aware. Only Reactive and Limited Memory exist today.
IBM says, “Reactive machines are AI systems with no memory and are designed to perform a very specific task. Since they can’t recollect previous outcomes or decisions, they only work with presently available data. Reactive AI stems from statistical math and can analyze vast amounts of data to produce a seemingly intelligent output.” Examples of Reactive AI are IBM’s Deep Blue winning against chess grandmaster Garry Kasparov in the late 1990s and Netflix recommending movies based on what you have watched previously.
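A toy sketch of that kind of reactive, statistics-only recommendation (the titles and genre tags below are invented for illustration; this is not Netflix’s actual method):

# A "reactive" recommender in miniature: no stored memory of past runs,
# just a statistical comparison over the presently available data.
# Titles and genre tags are invented; this is not Netflix's actual method.

CATALOG = {
    "Space Saga":  {"sci-fi", "drama", "adventure"},
    "Robot Dawn":  {"sci-fi", "drama"},
    "Court Files": {"drama", "legal"},
}

def recommend(just_watched):
    """Pick the title whose genre tags overlap most with what was just watched."""
    tags = CATALOG[just_watched]
    overlap = {title: len(tags & genres)
               for title, genres in CATALOG.items() if title != just_watched}
    return max(overlap, key=overlap.get)

print(recommend("Robot Dawn"))  # -> "Space Saga" (two shared tags vs. one)

Nothing is remembered between calls; each recommendation is computed fresh from the data at hand, which is what makes the system “reactive.”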
Limited Memory AI can recall past events and outcomes for a short period in order to make in-the-moment decisions, but it cannot store memories in a library for recall and use later. It can, however, be trained; it can learn (this is what is called machine learning and Deep Learning). Generative AI tools such as ChatGPT, Google’s Gemini, and DeepAI “rely on limited memory AI capabilities to predict the next word, phrase or visual element within the content it’s generating.” Virtual assistants like Siri, Alexa, and Google Assistant, as well as self-driving cars, are examples of Limited Memory AI. Reactive and Limited Memory AI exist today in varying forms.
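To make “predict the next word” concrete, here is the idea reduced to simple word-pair counts; real generative AI uses vastly larger neural networks, and the training sentence below is invented for illustration:

# Toy next-word prediction from word-pair (bigram) counts.
# Real LLMs use deep neural networks; this only illustrates the core idea
# of choosing the most likely continuation from observed patterns.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the rug"  # invented training text
words = text.split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "cat" (follows "the" twice; "mat" and "rug" once each)

An LLM does the same thing in spirit, but with billions of learned weights instead of raw counts, and with contexts far longer than a single word.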
Theory of Mind AI and Self-Aware AI do not exist. They fall under General AI and Super AI. Theory of Mind “functionality would understand the thoughts and emotions of other entities. This understanding can affect how the AI interacts with those around them.” Self-Aware AI “would possess super AI capabilities. Like theory of mind AI, Self-Aware AI is strictly theoretical. If ever achieved, it would have the ability to understand its own internal conditions and traits along with human emotions and thoughts. It would also have its own set of emotions, needs and beliefs.”
Reactive and Limited Memory AI are everywhere today and expanding fast. The 2023 Stanford Emerging Technology Review says Generative AI alone “is estimated to raise global GDP by $7 trillion and lift productivity growth by 1.5 percent over a ten-year period, if adopted widely.” As mentioned above, the use of AI is in every industry and every sector of the economy, in governments, law, geopolitics, and national defense and warfare. The take-up of these two types of AI is so fast it is impossible to keep track.
It is indubitable that Reactive and Limited Memory AI offer huge efficiencies and technical capability advantages—e.g., in medical diagnostics and research, and in data access and analysis for a host of beneficial uses. But even in these, questions of good and evil lie buried—e.g., are efficiency and the ability to crunch more data faster moral goods as such? Is the technologization of everything a societal good? Others argue that it is not merely the AI technology itself that is of concern, but what it takes to create AI: the ecological impact of mining and extracting core materials, the abusive sweatshops required to build the devices, and the electrical power needed to run the data centers full of servers. (See Kate Crawford, Atlas of AI (2021)) Nevertheless, the genie is out of the bottle. Whether these two currently realized dimensions of AI are a good thing or a bad thing, they already affect, and will continue to affect, every sphere of life. We must think wisely about them.
But none of the concerns arising from these realized dimensions of AI is even the penultimate concern. To see this, one need only turn back to Alan Turing and John von Neumann’s view that “human intelligence could be replicated in computer programs,” and to the Dartmouth workshop’s vision of achieving genuine intelligence in machines, not artificial intelligence. That is, the penultimate concern is Theory of Mind and Self-Aware AI. This concern is what recently prompted many AI pioneers and current leaders to sign an open letter and testify before Congress urging caution. Here lies the very legitimate transhuman concern about AI.
The ultimate concern, which is never talked about but which clouds good judgment on the AI pursuit in general is the materialist metaphysic and the computational theory of mind that are embedded in it. This metaphysic is an inheritance from the Enlightenment and the computational theory of mind is the inheritance of the host of new philosophies and sciences that grew up in the late nineteenth and first half of the twentieth century—e.g., neuroscience, neuro-philosophy, neurobiology, cognitive science, linguistic science, and evolutionary psychology, etc.
AI is the “embodiment” of the modernist naturalistic metaphysic that material is all there is. It declares there is no activating “form” (soul), no activating potentiality into actuality, as Aristotle said. In the materialist (physicalist) theory of mind, everything is merely physical. There is no difference at all between the human person or the human mind and a sophisticated computing machine. There is only some not yet fully understood biophysical animating event which makes, for example, a mere body a living conscious organism. Consciousness, in this metaphysic, is only an epiphenomenal conjunction of matter. In this metaphysic, the “self-aware” and “super” AI become the equivalent of a conscious human being. The term “consciousness” loses its meaning. The living being loses its soul.
Technologies are built on point-in-time referents to extant thought and discoveries. As such, technologies are the instantiation or the embodiment of particular ideas and scientific discoveries. Technologies are unwitting “carriers” of these ideas. The technologies we use are not inert. They are purveyors of the ideas and discoveries instantiated in them. You might say, AI theory has at its core a kind of intellectual malware: a materialist computational theory of mind.
The ultimate question for AI is the meaning of the human person and human consciousness. The current scientific and AI theories cannot account for the soul (the psūchê in Greek, the nephesh in Hebrew). These theories have no soul (or form), the animating substance which makes a mere body a living conscious organism. Neuro-biophysical processes alone cannot account for this. Many theories of mind have been proffered, but Aristotle’s (and Aquinas’s) integrative hylomorphic matter-soul theory is still the most compelling explanation. The great confusion created by such a reductionist materialist metaphysic is to assume that if one understands how something works, one understands what the phenomenon is. As Socrates put it to his friend Cebes in the Phaedo, “There is surely a strange confusion of causes and conditions in all this.” Cebes doesn’t understand why Socrates doesn’t try to escape his imprisonment. Is it his body that is the cause of his condition? Socrates explains that it is not his body that is the cause—his state is not determined by his body. “The Athenians have thought fit to condemn me,” he explains, “and accordingly I have thought it better and more right to remain here and undergo my sentence.” The true cause is Socrates’s will that keeps him in prison, his desire to live virtuously. To be only his body is not to be his whole self. This is the same confusion philosopher Roger Scruton identifies in his book The Soul of the World (2014, 37–38) when he speaks of the difference between what the acoustician and the musician hear in sound: Beethoven hears life as a symphony in musical space; the acoustician hears mere pitched sound.
[1] What are algorithms? An algorithm is “a ‘recipe’ of steps a computer takes to solve a particular problem.” (Mitchell, 28) “An algorithm is a set of defined steps designed to perform a specific objective. This can be a simple process, such as a recipe to bake a cake, or a complex series of operations used in machine learning to analyze large datasets and make predictions. In the context of machine learning, algorithms are vital as they facilitate the learning process for machines, helping them to identify patterns and make decisions based on data.” (What is an Algorithm? Definition, Types, Implementation | DataCamp) The process works like this: data comes in (the input); the core processing steps (the logical and arithmetical calculations spelled out in the written algorithm) are performed and repeated in a loop until the problem is solved; and a result comes out (the output). Think of how the thermostat in your house works: the temperature is received by the sensor (the input); the thermostat processes (calculates) that input according to the algorithm; if the temperature is lower or higher than the setting, the thermostat triggers the heater or air conditioner to turn on (the output). Once the problem is solved (the set temperature is reached), it stops. This is the final step.
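A minimal sketch of that thermostat algorithm in code (the temperatures, setpoint, and tolerance are invented for illustration):

# A thermostat as a simple input-process-output algorithm.
# The temperatures, setpoint, and tolerance are invented for illustration.

def thermostat(current_temp, setpoint, tolerance=1.0):
    """Compare the input temperature to the setting and decide the output action."""
    if current_temp < setpoint - tolerance:
        return "heater on"
    if current_temp > setpoint + tolerance:
        return "air conditioner on"
    return "off"  # problem solved: the temperature matches the setting

# Each sensor reading is an input; each decision is an output.
for reading in [64.0, 68.0, 75.5]:
    print(reading, "->", thermostat(reading, setpoint=68.0))
# 64.0 -> heater on; 68.0 -> off; 75.5 -> air conditioner on

The loop of input, calculation, and output is the whole algorithm; when the condition is satisfied, the process stops.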