The Mirage of AI: Why We Need to Stop Calling Everything “Intelligent”
The tech industry has a knack for selling us on the future, filling our minds with visions of a metaverse brimming with possibilities, a web3 economy governed by blockchain, and a world powered by artificial intelligence. While all three hold a certain allure, their reality often falls far short of the lofty promises. Artificial intelligence, in particular, is a term that has captured our imaginations despite the fact that no machine can truly think. The concept itself may be one of the most successful marketing terms ever invented.
The recent announcement of GPT-4, an upgrade to the technology powering ChatGPT, has again fueled the notion of thinking machines. GPT-4 is presented as more human-like than its predecessor, further reinforcing the idea that we are on the cusp of truly intelligent machines. However, GPT-4 and other large language models merely mirror vast databases of text (close to a trillion words in the case of the previous model) whose sheer scale is difficult to comprehend. These models, assisted by a legion of human programmers who correct their errors, essentially combine words based on probability. That is a far cry from genuine intelligence.
These systems are adept at producing text that sounds plausible, yet they’re being marketed as new oracles of knowledge, ready to be plugged into search engines. Such claims are reckless, given that GPT-4 continues to make mistakes. Just recently, Microsoft and Alphabet’s Google both gave embarrassing demonstrations of their new search engines that exposed the systems’ vulnerability to factual errors.
Terms like “neural networks” and “deep learning” only contribute to the misconception that these programs are human-like. While neural networks are loosely inspired by the workings of the human brain, they are not replicas. Scientists have tried for years to replicate the human brain, with its roughly 85 billion neurons, and every attempt has fallen short. The closest they have come is emulating the brain of a worm, which has a mere 302 neurons.
We urgently need a new lexicon to avoid perpetuating magical thinking about computer systems and to hold designers accountable for their creations. The term “machine learning systems” has been floated for a while, but it lacks the catchiness of “AI”.
Stefano Quintarelli, a former Italian politician and technologist, suggested SALAMI, short for “Systemic Approaches to Learning Algorithms and Machine Inferences”. This tongue-in-cheek alternative highlights the absurdity of asking questions like: “Is SALAMI sentient?” or “Will SALAMI eventually rule over humanity?”.
Perhaps the most accurate, though ultimately hopeless, attempt at replacing “AI” is simply the word “software”.
“But”, you might argue, “isn’t it harmless to use a little metaphorical shorthand for technology that seems so magical?”
The answer is a resounding no. Attributing intelligence to machines unfairly grants them independence from human control and absolves their creators of responsibility for their impact. Treating ChatGPT as “intelligent” makes us less inclined to hold OpenAI, its San Francisco-based creator, accountable for its inaccuracies and biases. It also breeds fatalistic compliance among those harmed by the technology’s consequences. Remember, it’s not “AI” that will take your job or plagiarize your artistic creations; it’s other humans.
This issue is becoming increasingly pressing as companies like Meta Platforms, Snap, and Morgan Stanley race to integrate chatbots and text and image generators into their systems. Fueled by its rivalry with Google, Microsoft is integrating OpenAI’s still largely untested language-model technology into its most popular business applications, including Word, Outlook, and Excel. Microsoft boasts that “Copilot will fundamentally change how people work with AI and how AI works with people.”
But for customers, the promise of working with intelligent machines is misleading. Steven Poole, author of “Unspeak”, a book on the dangerous power of words and labels, says that “AI is one of those labels that expresses a kind of utopian hope rather than present reality, somewhat as the rise of the phrase ‘smart weapons’ during the first Gulf War implied a bloodless vision of totally precise targeting that still isn’t possible.”
Margaret Mitchell, a computer scientist who was fired by Google after publishing a paper criticizing the biases in large language models, has reluctantly used "AI" to describe her work in recent years. Mitchell acknowledged, “Before… people like me said we worked on ‘machine learning.’ That’s a great way to get people’s eyes to glaze over."
Her former Google colleague and founder of the Distributed Artificial Intelligence Research Institute, Timnit Gebru, also admitted that she began using "AI" around 2013, stating "It became the thing to say.”
“It’s terrible but I’m doing this too,” added Mitchell. “I’m calling everything that I touch ‘AI’ because then people will listen to what I’m saying.”
Unfortunately, “AI” is so ingrained in our language that it’s nearly impossible to shake. At the very least, we should be mindful of how much these systems rely on human management and hold those managers accountable for the unintended consequences.
Poole prefers to call chatbots like ChatGPT and image generators like Midjourney “giant plagiarism machines”, given that they primarily recombine prose and images originally created by human artists. "I’m not confident it will catch on," he admitted.
In more ways than one, we are stuck with “AI”, but that doesn’t mean we have to perpetuate the mirage of intelligent machines. By refusing to use the term "AI" lightly, we can begin to shed the misleading image it projects. This will ultimately lead to a more accurate understanding and a greater sense of responsibility for the technology we create and use.