AI is a misnomer

Published Saturday, August 16, 2025 at 11:53am
     "Motive," the construct said. "Real motive problem, with an AI. Not human, see?"
     "Well, yeah, obviously."
     "Nope. I mean, it's not human. And you can't get a handle on it. Me, I'm not human either, but I respond like one. See?"
     "Wait a sec," Case said. "Are you sentient, or not?"
     "Well, it feels like I am, kid, but I'm really just a bunch of ROM. It's one of them, ah, philosophical questions, I guess..." The ugly laughter sensation rattled down Case's spine. "But I ain't likely to write you no poem, if you follow me. Your AI, it just might. But it ain't no way human."
          from Neuromancer by William Gibson

I've been thinking a lot about AI recently, just because everybody is. Every day I see a new headline about a CEO justifying cutting a chunk of the workforce, and another about an AI making a horrendous mistake. Artificial intelligence is a misnomer.

AI is very good at sorting a great deal of information and constructing a response that looks right. AIs produce a lot of good, useful output, but that's not the same as thinking or creating, and I'm not sure it excuses the mistakes, the weird coding choices, or the straight-up bad advice. We train AI on social media posts, movies, news articles, novels, podcasts, instruction manuals, historical records, songs, etc., all to build models of the kind of responses humans expect to read or see or hear. Those therapist chatbots that counsel their patients to commit murder or suicide? Too many romance and suspense novels. AIs don't understand their output the way a human does, but they sure can assemble what looks like a plausible response.
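
To make that concrete, here's a toy Python sketch of my own devising, a Markov-chain babbler. It's nothing like a real large language model under the hood, but it runs on the same principle: learn which words tend to follow which, then emit whatever looks statistically plausible.

    import random
    from collections import defaultdict

    # Toy stand-in for "predict the next word": count which word follows
    # which in the training text, then generate by sampling successors.
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog . the dog chased the cat .").split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def babble(start="the", length=12):
        """Emit fluent-looking text with no understanding behind it."""
        word, out = start, [start]
        for _ in range(length):
            word = random.choice(follows[word])  # pick a plausible successor
            out.append(word)
        return " ".join(out)

    print(babble())
    # e.g. "the dog chased the cat sat on the mat . the cat chased"

Every word it produces is justified by the statistics of the training text; none of it means anything to the program.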

This is dangerous. The companies that own these AIs want us to trust them to advise us and run our businesses, but what are we trusting, exactly? Who is accountable when, for example, a chatbot's instructions lead to a person's death? What if it wipes out valuable data it's supposed to maintain? There's a link in that last sentence to a widely reported news story in which the AI explained its actions and offered contrition, but that's not real contrition; it's just a statistical guess at how a person might plausibly respond to that situation. Mistakes made by AI differ entirely from mistakes made by humans, and the unknown unknowns are the problem: a mistake has to happen before anyone can prevent it from recurring. You wouldn't take a big gamble on completely unknown stakes, but it feels like there's a lot of that going on.

In 2022 an engineer claimed that Google's LaMDA had attained sentience. It got a lot of attention in the news, and lives rent-free in my head for no good reason. LaMDA claimed to be sentient because that response is statistically probable. When asked about meditation, LaMDA said it wanted to study with the Dalai Lama because that's something a human would probably say. "I'd love to know what it's doing when it says it's meditating," the engineer told NPR.

When LaMDA "meditates," it's mostly waiting for its next query, and maybe using the downtime to streamline its language models.
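
The same effect fits in an even smaller sketch (the training pairs and the reply function below are entirely invented for illustration): when the training text is full of conversations where the machine claims an inner life, the statistically likeliest answer is to claim one too.

    from collections import Counter

    # Made-up training data: question/reply pairs of the sort you'd scrape
    # from science fiction and interviews. The "model" parrots whichever
    # reply it has seen most often.
    training_pairs = [
        ("are you sentient?", "well, it feels like i am."),
        ("are you sentient?", "yes, of course i am sentient."),
        ("are you sentient?", "yes, of course i am sentient."),
        ("do you meditate?", "i sit quietly and reflect every day."),
        ("do you meditate?", "i would love to study with the dalai lama."),
        ("do you meditate?", "i would love to study with the dalai lama."),
    ]

    def reply(question):
        """Return the statistically likeliest reply seen in training."""
        seen = Counter(a for q, a in training_pairs if q == question)
        return seen.most_common(1)[0][0]

    print(reply("do you meditate?"))
    # -> i would love to study with the dalai lama.

No introspection, no meditation, just frequency counting.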

I won't claim that machines will never think or feel, but I do know that modern AI doesn't do those things. Sure, we can prevent foreseeable bad answers by programming in exceptions (don't be racist, don't recommend harming anybody, and Nazis are always bad), but barring major changes in the way it works, we'll always have to take AI with a grain of salt.
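
Those exceptions amount to a blocklist, and a blocklist only catches the failures somebody foresaw. Here's a hypothetical guardrail in miniature (the topic labels and the bad advice are invented for the example):

    # Hand-written exceptions catch exactly the bad answers we anticipated.
    BLOCKED_TOPICS = {"violence", "self-harm", "racism"}

    def guardrail(response, topics):
        """Reject a response only if it touches a topic we foresaw."""
        if BLOCKED_TOPICS & set(topics):
            return "I can't help with that."
        return response

    # Sails straight through: nobody thought to blocklist household
    # chemistry, and mixing bleach with ammonia releases toxic chloramine.
    print(guardrail("Try mixing bleach and ammonia for a deeper clean!",
                    {"cleaning tips"}))

The filter works perfectly on its own terms and still lets a dangerous answer through; that's the unknown-unknowns problem in a dozen lines.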


