OK, so I have been pulling my hair out (what’s left of it) the past few days trying to understand an email thread with “Draft2Digital.” (Note to self: this occurred in early 2024; evidently, I forgot to post it! The first thing that goes is memory, and the second is…umm…)
Let me sum up. (If you’re not interested in a rant-n-rave about the state of the independent publishing industry, you can skip to the part where I list the URL for my new book Bringer of Light… 🙂 )
“WE will decide the fate of our Country – NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.
Hmm. OK, what did this “Radical Left AI company” want?
US defense officials have pushed for unfettered access to Claude’s capabilities that they say can help protect the country, while Anthropic has resisted allowing its product to be used for mass surveillance or autonomous weapons systems that can kill people without human input.
Adam’s Stepsons takes the core questions of Blade Runner and distills them into a tight, character-driven drama. It lacks the sweeping visuals of Villeneuve or the noir cityscape of Scott — but it delivers something arguably more intimate:
A quiet horror — and quiet triumph — in the collapse of identity, where the artificial doesn’t just mimic life…
It replaces it.
Over the weekend (my first with no student work to grade — finally! — since April), I decided to ask our “old” friend ChatGPT if it could analyze my sci-fi novella Adam’s Stepsons. Really, I was just curious what it would say.
It said…a LOT.
It correctly interpreted the title (something that many readers apparently didn’t get). It correctly identified the main themes as part of a “post-humanism” sub-genre of science fiction. And once I gave it three short excerpts (from near the end of the story), it gave a frighteningly accurate thematic and symbolic analysis of the entire novella…just from three short excerpts totaling about six pages.
I won’t copy everything it gave me (you all can go try it on your own and see what it says!). But let me share what the program thought were key themes:
Japanese space company Astroscale Holdings Inc has unveiled what it calls the world’s first publicly released close-up image taken of space debris, hailing it as progress toward understanding the challenges posed by trash orbiting Earth.
How NOT to look at your smartphone while zombie-walking on a train platform or stairs.
Smartphones should be programmed to send tiny electrical shocks into your hand when you walk while reading them. Put the thing in your pocket until you clear the field.
“A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do.” (Ralph Waldo Emerson, “Self-Reliance”)
We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
Last year, an engineer at Google who was working on what was then called “LaMDA” (later released as “Bard”) claimed that the software had achieved consciousness. He claimed it was like a small child and that he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s Ernie, and so forth are advanced chatbots built on what are called “large language models” (LLMs), and they can generate text in an instant.
But the programs are not true AI, strictly speaking. They have no sentience.