A discussion with a knowledgeable friend on this triggered the following post, which will cover a number of elements of both the technology and, perhaps more importantly, its uses and impacts.
AI: Definitional Confusion in Action
A recent experiment in AI, "Moltbook", attempted to create a site on which AIs could communicate on their own social media, without human participation (though humans were able to read and observe). There were some interesting apparent results from this, which turned out to be mostly the product of human beings pretending to be AIs. That was interesting in some ways, but also disruptive to the experiment itself.
But the discussion around it brought out one of the key, and subtle, differences between the way in which the professionals talk about their AI work and the way in which the more general civilian population, even its highly educated members, INTERPRETS that talk.
The term used professionally is "AGI" -- Artificial General Intelligence. Achieving AGI is evaluated using a number of explicit, objective tests for the functionality, adaptability, and (apparent) understanding of the tested AI. Depending on exactly which tests are used, how they're applied, and so on, there are researchers who believe we are at, or very near, AGI, and others who say we're not close.
But part of the problem is that the general perception of the word "intelligence" is not the same as the professional meaning of the term. For a human being, intelligence usually includes -- and, perhaps, even requires -- self-awareness. This is NOT PART OF THE PROFESSIONAL DEFINITION.
One might ask, "Wait, how can you be intelligent and not self-aware?" It's actually a hard question to answer for a human being, because ALL OF OUR EXPERIENCE of intelligence has come from living creatures, and all intelligent creatures, even extremely stupid ones, are aware of their own existence and have internal goals and drives that are separate from the outside world's necessary demands.
But intelligence, in the narrow sense, does not require any self-awareness. It doesn't even require what Searle called "intentionality" -- the ability of an entity to motivate itself: to formulate a desire to perform a task and then put that desire into action.
Vernor Vinge touches on this directly in his classic novel A Fire Upon the Deep, once the terrifying, near-godlike intellect that comes to be called the Blight or the Perversion is active: "self-awareness is overrated, really." It is implied that the Blight may have incomprehensible resources for analysis, understanding, and carrying out its intentions, but those intentions are, in a sense, simply hardwired directives. It doesn't bother to stop and analyze WHY it does what it does; it may be utterly INCAPABLE of asking that question of itself, and its reply to "why" would be a rote statement of whatever reasoning its original creators included. But at the same time, it is capable of recognizing new challenges, formulating solutions to those challenges, preparing multiple contingencies, tricking other sentient and sapient beings into following its directions, and so on.
So, a statement of my own on this subject:
- I am ABSOLUTELY certain that our current version of AI either has passed, or will soon pass, most standard tests of AGI (with the possible exception of the Wozniak "Coffee Test", or what I have always thought of as the Driving Test -- more on that later). AIs have already shown that they can be used to improve upon, and even build anew from, mathematical and scientific research.
- I do not believe that we have yet APPROACHED making a system with self-awareness/true sapience/intentionality, and I think that will remain a very hard problem for a while yet.
- For a large number of tasks, that second point doesn't actually matter much. It matters for reasons of ethics, morality, and such, but not for "can the machine do this?"
- In a society in which human worth is taken as a given, and thus a decent basic standard of living is NOT dependent on each human being performing labor simply to exist, the creation and spread of AI as a tool for art (of all kinds), for scientific research, for manufacturing efficiency, etc., would be potentially disruptive but not dangerous.
- In our current society, it is a dangerous threat to human civilization, not because AIs are evil Terminators, but because the people who are developing and controlling them and their use DO NOT CARE what happens to the vast majority of other people... and they will not be building any inherent guardrails or controls into the non-sapient, non-intentional AIs they are creating.
- The top people in CONTROL (not in the research, but in the business arena) are also sufficiently self-confident, not to say arrogant, that they believe that this control will endure regardless of what AI is built.
The Current Dangers of AI -- Our Civilization and Industrial Arrogance
There are currently a lot of people fighting against AI deployment in their spaces. There are also a large number of people -- some absolutely in earnest, some simply driven by more cynical motives -- basically dismissing these concerns as being equivalent to the Luddite attitude against other technological innovations (factory operations, etc.).
AI, especially the coming AGI, presents a very different threat to our civilization than did any of the prior "labor saving device" revolutions. All of these prior changes still required human judgment, human understanding, and human control to perform their functions. Moreover, one such invention could only be applied to one particular field of endeavor; the spinning jenny might have made it easier to make lots of yarn at once, but it didn't solve the problem of how to turn the yarn into a sweater for a particular person, or how to milk cows, or how to forge steel.
Because of this, such inventions were self-limiting in their impact. They might be very disruptive to the textile industry, but the other industries around them were only affected insofar as they were directly related to the textile industry. The same was true of other labor-saving inventions; their impact was vast but, to an extent, limited and insulated. Moreover, they usually created new opportunities for human endeavor related to the change in their target field. People did indeed lose their jobs, but after a fairly brief time, new jobs appeared that exploited the LIMITATIONS of the innovation.
AGI is much more of a disruptive threat, because of that "General" bit. Truly general AI -- which, as I said, I expect to see in the not too distant future -- can be applied to nearly any kind of work, and if released as its manufacturers intend, it will be applied to every field it can reach, all at once.
Moreover, because of its nature, it will also be applicable to the new tasks generated by its own disruption. Which means that the usual pattern -- limited disruption followed by a significant new field opening up -- will no longer hold.
And, unfortunately, our civilization remains built on the idea that if human beings, as a group, want to survive, they need to have jobs -- not necessarily things they want to do, but things that are more valuable when done by a human than done by a machine -- and preferably, things that CANNOT be done by a machine without the human assisting.
AGI, uncontrolled and used by businesses that are focused, not on human experience, but the bottom line for a small number of shareholders, will provide greater value in many, perhaps most, endeavors than any human. There is no "bottom line" clearly served by "human art" when a human artist costs $1000 to make one picture, and AI will make one nearly as good (perhaps, not too long from now, better) in two seconds for fifty cents.
Now, there may be SOME tasks that AIs will still have trouble with. The "Coffee Test" is, in short, that a robot should be able to enter a random home, find the coffee machine, locate the coffee beans for the machine, and then make a cup of coffee using the machine. My equivalent is that you should be able to load an AI into a car and have it drive me to work while avoiding all of the situations that I, myself, have avoided while driving. Despite all the ballyhoo about various "self-driving" vehicles, AI is very far from actually meeting that requirement (to the point that one "robotaxi" company appears to have been using a squad of humans to perform remote driving tasks for its "automated" cars).
But it seems ridiculous to assume that this will remain an insuperable barrier. Driving, like other open-ended but clearly defined tasks of life, may be complex, but it is not terribly esoteric. If AI can produce "art" that 90% of the public finds acceptable (and it already approaches that level in several areas), one can take for granted that one day it will be able to drive cars and operate machinery better than 90% of the humans currently doing the job.
And that is, unfortunately, an absolutely TERRIBLE thing, because our world is run mostly by businesses that focus on top-level efficiency rather than the human results of the business. They will absolutely eliminate all the workers they can in order to keep the payroll savings for themselves. They will put 99% of the population out of work because those people are "no longer needed". Since such businesses are no longer focused on the products themselves, they don't even consider the question of "who will buy the products?" until after the fact.
And they will absolutely use automation to make the workforce that remains as controlled and efficient as possible -- a grinding and thankless existence.
We COULD change that, of course, but we'd have to make the choice to have all business and government focus on supporting every individual human being's existence FIRST -- taking human life and dignity as the first and primary function of government and the first and primary interest of business -- so that AI would be applied to making everyone's lives better rather than to forcing another half a percent of growth in the projected stock price.
Marshall Brain wrote about this in his short book Manna: Two Views of Humanity's Future. While there are details I would quibble with, the basic dichotomy of "civilization as we have it" and "civilization as it could be" remains, and both are defined by the use to which computerized AI is put. The popular novel Ready Player One showed a similar dystopian view, as have a number of other cyberpunk stories.
And this, of course, assumes that the AIs only perform the functions that the top humans give them.
But that ignores the problem of imperfect programming, unspoken assumptions, and unintended consequences -- which could be even worse.
Unspoken Assumptions and Unintended Consequences: The Danger of the Unknown
The best example of what non-sapient, yet highly capable, AIs could do without human intention due to human arrogance is given by the video game Horizon Zero Dawn. (SPOILERS FOR THE GAME HERE)
In the background of the game, before our heroine's people even exist, our world had continued on as it has for some time, with advanced technologies most often being deployed to the battlefield. Ted Faro, head of Faro Industries, was central to the creation of AI-empowered, swarm-based robotic war machines called Chariots. These were also given the ability to refuel themselves using any organic matter, which they could convert to fuel through catalytic chemical processes, and to manufacture and repair themselves. Eventually, one of the directives they were given caused the swarm to "solve" a difficult problem by refueling from fallen soldiers' bodies and by switching to a secure operations mode that had no backdoor. In the end, the "Faro Plague" destroyed -- ate, in fact -- the entire biosphere, all while fulfilling its directives to oppose all enemy action and use all efforts to contain any possible enemy.
These machines had no awareness, no desire to be "evil", no judgment of humanity as inferior -- none of the classic "robot revolution" concepts of the golden age of SF. They were simply machines given a mission and, through accident and human arrogance and assumptions, had that mission turn out to be open-ended and destructive.
The same threats -- hopefully not quite so apocalyptic, but perhaps so -- exist today with AI. Corporations will see an advantage offered by an AI and take it, without knowing, or even being able to understand, the unintended consequences of that AI's operation.
This becomes more and more likely as AI becomes more "general" and more capable. Many projections indicate that modern AI will surpass human capabilities soon. Unfortunately, that means that humans WILL NOT UNDERSTAND what the AI is doing all the time, and may not be able to follow its reasoning. If you can't do either of those things, then you may find that its attempt to solve the problem that you are faced with produces results you neither desired nor expected.
In a sense, advanced AIs may become the classic genie: yes, it COULD grant your wish, but will it grant it the way you want it? What did you tell it to do -- exactly? What limitations on its choices did you put, or did you just let it decide? What was its training in this area?
Has it read too many books about people finding loopholes in instructions, and therefore looks for loopholes in its own? Not because it "wants" to, but because its training shows that when someone is given an instruction, they generally look for exceptions and ways to perform the letter, but not the spirit, of the instruction?
If modern AI use isn't controlled by people who really, truly understand the dangers, we haven't even the faintest idea of just how badly it could go wrong.
Intentionality/Self-Awareness and Lack Thereof
If you truly lack any self-awareness, any ability to form intentions independent of external input, you likely aren't a person. That is, really, the defining difference between a machine and a person. It's the reason that slavery is wrong -- because slaves were just as much people as those who "owned" them.
And thus the question of whether, and how, a machine might gain such intentionality/self-awareness is absolutely vital. If it turns out that we can show that any truly generalized intelligence necessarily incorporates such self-awareness, then every single AGI would, by necessity, be a person, and should be given the same rights as the rest of the human species. If not, then they remain highly sophisticated tools.
The problem, of course, is that human beings are absolute masters at both seeing sapience where there is none, and at ignoring it when it's there but inconvenient.
Searle's classic Chinese Room experiment was flawed. It postulated a room into which you could pass pieces of paper containing statements or questions in Chinese; the person inside did not understand Chinese, but they did have a very extensive reference that allowed them to examine the sequence of symbols on the paper, find the sequence of symbols that would be considered an appropriate response, and then write those symbols down and send them out. To an outside observer, the room as a whole appeared to understand Chinese -- you provided questions in Chinese and it responded appropriately in Chinese -- but the operator within had no actual understanding of the language. Searle tried to say this demonstrated a lack of "intentionality", but in this version it was nothing but a shell game of intentionality, since he had a human being inside who, while they may not have understood Chinese, DID understand the task they'd been given, and who was handed a reference work of incredible comprehensiveness and complexity.
I prefer to imagine that you had a psychic who could predict all the conversations a given person would have with the machine, including the machine's responses, and that you then simply provided the machine with a readable list of those responses, numbered in order. Such a machine clearly would have no understanding, or even any ability to choose (it would simply provide the next answer in the series whenever presented with a new input), yet it would appear to understand the person talking to it intimately.
Humans often impute motivations and understanding to other animals, or even to inanimate objects like cars or computers, that have no such capacities. A person presented with the above machine would likely be convinced it was a person, as any question they might ask it, any reaction they had to it, would receive an appropriate and relevant response from the machine. Yet in this case it wouldn't even be a complicated machine -- the computer I had in 1984 could have easily performed the function of reading off the next set of data in its storage.
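To make the point concrete, here is a minimal sketch (in Python, with invented example responses -- purely an illustration of the idea, not anything from the original thought experiment) of just how trivial such a machine would be: it reads your input only to know when to speak, then plays back a pre-written script in order.

```python
# A deliberately trivial "scripted respondent": it ignores what you actually
# say and just reads off the next pre-written reply, exactly like a numbered
# list of answers prepared in advance. All responses here are invented.

CANNED_RESPONSES = [
    "It's good to see you again. Rough day?",
    "That sounds frustrating. Did the meeting at least go well?",
    "I thought you might say that -- you've mentioned your boss before.",
]

def main():
    for reply in CANNED_RESPONSES:
        user_input = input("> ")   # read the input, then ignore it entirely
        print(reply)               # emit the next line of the prepared script
    print("(the prepared script has run out)")

if __name__ == "__main__":
    main()
```

If the script happened to match the conversation (as it would with our hypothetical psychic), the person typing would never see the difference between this and genuine understanding.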
This makes it inevitable that people faced with the much more complex and capable AIs today -- AI chatbots/companions especially -- will assign a consciousness to these programs. That could be a problem in both philosophical and practical ways.
Conversely, people who DON'T want to believe that an AI has gained self-awareness are quite capable of denying that fact no matter how blatantly obvious it becomes. This is something we unfortunately have no need to doubt, as we have the example of slavery before us. Many people did, in fact, deny that slaves really had the same intellect, or even real awareness, as "real people". This was one of the primary reasons that slavery could exist and continue to exist (and the same applies to the Nazis and their heirs around the world -- they convinced themselves that whatever they called the "lesser races" were, in fact, lesser in every single way, including in their claim to true humanity).
This is the real challenge of AI in the end: can we make a machine that is, also, a person? And if we can... how can we be sure we are meeting a new person, and not merely looking desperately for a reflection of ourselves in the machine?
For this one, I don't have a simple answer. Humanity is good at self-delusion. Until someone can define that difference in OBJECTIVE terms, we may be kind of stuck.