June 9, 2023

ChatGPT, an Artificial Intelligence (AI) program, was launched in November 2022. As a Christmas gift, my techie nephew asked ChatGPT to summarize one of my blog essays, then to rebut it. The results expressed some conventional viewpoints, but with astonishingly glib fluency.

That AI capability has captured the world’s attention. A German magazine’s purported celebrity interview proved to be an AI creation. An AI has passed the bar exam (with flying colors). If it can perform such brainwork, where will that leave humans? Some fear AI could take over the world (a TV drama, Mrs. Davis, explores that scenario) and even eliminate us. AI cognoscenti, when surveyed, rate the odds pretty low — yet uncomfortably above zero. A passel of them have called for a pause in AI development. Good luck.

The metaphor is Goethe’s The Sorcerer’s Apprentice, in which a trainee magician enchants a broom to fetch water without knowing how to stop it. Similarly, philosopher Nick Bostrom conjures an AI told to maximize paper clip production, resulting in a world full of paper clips and no humans. Another cautionary tale was the rogue computer HAL in 2001: A Space Odyssey.

So how scared should we be?

The Economist magazine recently provided a helpful look at how programs like ChatGPT actually work. They’re “Large Language Models,” trained through exposure to vast amounts of verbiage from the internet. They take a chunk of language, guess what word comes next, then check their answer against the actual text. Do this a zillion times and your guesses get pretty good.
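For the curious, here is that guess-and-check loop sketched in Python, in toy form: simple word-pair counts stand in for the giant neural network, and the tiny “corpus” is my own invention, purely for illustration.

    from collections import Counter, defaultdict

    # A toy stand-in for the vast verbiage of the internet.
    corpus = "the cat sat on the mat the cat saw the rat".split()

    # Training: for each word, guess what comes next, check the guess
    # against the actual text, then update the counts accordingly.
    follower_counts = defaultdict(Counter)
    correct = 0
    for i in range(len(corpus) - 1):
        word, actual_next = corpus[i], corpus[i + 1]
        counts = follower_counts[word]
        guess = counts.most_common(1)[0][0] if counts else None
        if guess == actual_next:
            correct += 1              # the guess matched the real text
        counts[actual_next] += 1      # learn from what the text really said

    print(f"{correct} of {len(corpus) - 1} guesses correct")
    print(dict(follower_counts["the"]))  # {'cat': 2, 'mat': 1, 'rat': 1}

A real model replaces these raw counts with billions of tuned numerical weights, but the guess-and-check training idea is the same.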

But note that this entails no understanding of the words themselves, let alone of what a sentence or paragraph means. Words are converted into numbers, and the guesses are then guided by how often a given number appears in proximity to certain others. This seemingly sterile modus operandi, with enough repetitions, does enable an AI to perform remarkable linguistic feats, like composing a respectable rhyming Shakespearean sonnet on any subject you name (or rebutting a blog essay).
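Here is that numeric view in the same toy form, again my simplified stand-in (real systems use sub-word tokens and far subtler statistics than these raw counts):

    from collections import Counter, defaultdict

    text = "the cat sat on the mat the cat saw the rat".split()

    # Words become numbers: each distinct word gets an integer ID.
    vocab = {word: i for i, word in enumerate(dict.fromkeys(text))}
    ids = [vocab[word] for word in text]
    print(ids)  # [0, 1, 2, 3, 0, 4, 0, 1, 5, 0, 6]

    # Count how often each number appears right after each other number.
    nearby = defaultdict(Counter)
    for a, b in zip(ids, ids[1:]):
        nearby[a][b] += 1

    # "Writing": start from a word and repeatedly emit whichever number
    # most often follows the previous one. No meaning is consulted anywhere.
    id_to_word = {i: w for w, i in vocab.items()}
    current = vocab["the"]
    output = ["the"]
    for _ in range(4):
        current = nearby[current].most_common(1)[0][0]
        output.append(id_to_word[current])
    print(" ".join(output))  # the cat sat on the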

But in no sense is it “figuring out” what to say. Instead it just plonks one word after another, effectively mimicking what others have said in other relevant contexts. And that, importantly, is not creativity; not thinking.

Thinking, as humans do it (including understanding words, a very complex matter), involves thinking about our thoughts, in the context of a representational model of the world that our minds create. This requires consciousness, a sense of self. How those arise and operate remains a profoundly vexing scientific and philosophical conundrum. But they do entail self-regarding goals and desires quite different in character from any imparted to a computer program (“make paper clips”). Thus “Artificial Intelligence” may actually be a misnomer: the output simulates the products of intelligence, but the method doesn’t resemble our understanding of that word.

The movie “Her” portrayed a computer program that does possess a human-like self. We call that “Artificial General Intelligence.” But it’s miles distant, and ChatGPT has taken us maybe an inch toward it.

So an AI program getting it into its head to “take over the world” simply doesn’t compute for me. That would require something radically different from the merely mechanistic symbol manipulation of a program like ChatGPT, which doesn’t even comprehend the words it uses.

It’s true that AIs are “black boxes”: even an AI’s programmers cannot know what steps it actually takes to produce its output. That does make some people fear an AI going rogue. But even if one did somehow become a “Her,” with human-like intentionality, or a HAL, we’d still ultimately remain in control. Remember that HAL was simply unplugged.

In sum, I don’t believe we should fear such programs themselves. But their potential for misuse by malign humans is another matter entirely.
