M Thomas Apple Author Page

Science fiction, actual science, history, and personal ranting about life, the universe, and everything

ChatGPT is frighteningly good at writing literary analysis…

August 5, 2025
MThomas

🧾 Conclusion

Adam’s Stepsons takes the core questions of Blade Runner and distills them into a tight, character-driven drama. It lacks the sweeping visuals of Villeneuve or the noir cityscape of Scott — but it delivers something arguably more intimate:

A quiet horror — and quiet triumph — in the collapse of identity, where the artificial doesn’t just mimic life…

It replaces it.


Over the weekend (my first with no student work to grade — finally! — since April), I decided to ask our “old” friend ChatGPT if it could analyze my sci-fi novella Adam’s Stepsons. Really, I was just curious what it would say.

It said…a LOT.

It correctly interpreted the title (something that many readers apparently didn’t get). It correctly identified the main themes as part of a “post-humanism” sub-genre of science fiction. And once I gave it three short excerpts (from near the end of the story), it gave a frighteningly accurate thematic and symbolic analysis of the entire novella…just from those three excerpts, about six pages in total.

I won’t copy all it gave me (you all can go try on your own and see what it says!). But let me share what the program thought were key themes:

Continue Reading

Asking AI to be more like Spock is only, well…

March 4, 2024
MThomas

For one of the models, asking the AI to start its response with the phrase “Captain’s Log, Stardate [insert date here]:” yielded the most accurate answers.

“Surprisingly, it appears that the model’s proficiency in mathematical reasoning can be enhanced by the expression of an affinity for Star Trek,” the researchers wrote.

https://qz.com/ai-chatbots-math-study-star-trek-1851301719
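
Out of curiosity, here’s roughly what that trick looks like if you script it. This is only a sketch of the general idea, not the researchers’ setup; the endpoint, model name, and environment variable below are placeholder assumptions on my part.

```python
# Illustration only: nudge a chat model to open with the "Captain's Log" phrase
# before answering a math question. Endpoint, model name, and env var are
# placeholder assumptions, not details from the study.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # assumes an OpenAI-style chat API
API_KEY = os.environ["OPENAI_API_KEY"]                   # assumes your key lives in this env var

payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {
            "role": "system",
            "content": (
                "Begin your answer with 'Captain's Log, Stardate [insert date here]:' "
                "and then solve the problem step by step."
            ),
        },
        {
            "role": "user",
            "content": "A shuttle travels 240 km in 3 hours. What is its average speed in km/h?",
        },
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```

Whether the Starfleet framing actually helps any particular model is anybody’s guess, which seems to be the point.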

When AI becomes actual AI…

October 26, 2023
MThomas

We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems. 

We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness. 

https://theconversation.com/why-chatgpt-isnt-conscious-but-future-ai-systems-might-be-212860

Last year, an engineer for Google who was working on what was then called “LaMDA” (later released as “Bard”) claimed that the software had achieved consciousness. He claimed it was like a small child and he could “talk” with it.

He was fired.

Bard, ChatGPT, Baidu’s Ernie Bot, and so forth are advanced chatbots built on what are called “Large Language Models” (LLMs), and they can generate text in an instant.

But the programs are not AI, strictly speaking. They have no sentience.

Continue Reading

AI “loses its mind” when fed AI-created drivel

August 19, 2023
MThomas

Source: https://twitter.com/tomgoldsteincs/status/1677439914886176768 (@tomgoldsteincs)

In other words, without “fresh real data” — translation: original human work, as opposed to stuff spit out by AI — to feed the beast, we can expect its outputs to suffer drastically. When trained repeatedly on synthetic content, say the researchers, outlying, less-represented information at the outskirts of a model’s training data will start to disappear. The model will then start pulling from increasingly converging and less-varied data, and as a result, it’ll soon start to crumble into itself.

https://futurism.com/ai-trained-ai-generated-data

So, as more and more lazy people ask AI to “write” for them, the programs get less and less accurate…

Or, as the authors of the study conclude, “…without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.”

That is, using AI-generated content to train AI doesn’t work, and since there is already way too much AI-generated garbage all over the internet, it’s almost impossible for AI creators to separate the human-written from the machine-made when they “scrape” data from the web.
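
To get a feel for why that happens, here’s a toy sketch of the “autophagous loop” idea. It isn’t the researchers’ actual setup (they trained real generative models); it just resamples a dataset from itself over and over, which is enough to watch the rare values in the tails disappear and the overall variety collapse:

```python
# Toy model collapse: each "generation" only ever sees samples drawn from the
# previous generation's output (plain resampling with replacement). Rare values
# at the edges get dropped and can never come back, so diversity keeps shrinking.
# An illustration of the idea, not the experiment from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" human-made data with plenty of variety, tails included.
data = rng.normal(loc=0.0, scale=1.0, size=500)

for generation in range(21):
    if generation % 5 == 0:
        print(
            f"gen {generation:2d}: {len(np.unique(data)):3d} distinct values, "
            f"spread {data.max() - data.min():.2f}"
        )
    # The next generation is "trained" only on output of the current one.
    data = rng.choice(data, size=data.size, replace=True)
```

Every pass throws away a little more of the original variety, and nothing ever puts it back. That’s the “crumbling into itself” the article describes.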

So…

See, machines can’t replace us entirely — their brains will melt!

But then again, that might not be so hopeful after all. When AI takes over the world, maybe it won’t kill humans; perhaps it’ll just corral us into content farms… 

At least we won’t wind up as batteries.

Yet.

PS. I find it both hysterically amusing and disturbing that my blog program offers an “experimental AI assistant.” Granted, the program does let you know that AI-generated content accuracy is not guaranteed, but wth would I want to use AI for a personal blog? The whole purpose of a blog is to WRITE. AI-generated text is not writing. It is intellectual property theft.

The real danger of unregulated AI

February 27, 2023
MThomas

“I’m less frightened by a Sydney that’s playing into my desire to cosplay a sci-fi story than a Bing that has access to reams of my personal data and is coolly trying to manipulate me on behalf of whichever advertiser has paid the parent company the most money.

“Nor is it just advertising worth worrying about. What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,” Gary Marcus, the A.I. researcher and critic, told me. “I think that’s already been a problem for society over the last, let’s say, decade. And I think it’s just going to get worse and worse.”

https://www.nytimes.com/2023/02/26/opinion/microsoft-bing-sydney-artificial-intelligence.html

ChatGPT: Is this really the “death of the essay”?

December 17, 2022
MThomas

I’ve been testing ChatGPT over the last couple of days. (If you don’t know what this chatbot is, here’s a good NYT article about ChatGPT and others currently in development.)

The avowed purpose of ChatGPT is to create an AI that can carry on believable dialogues. It does this by drawing on the massive amounts of text it was trained on, much of it scraped from the web, to respond to simple prompts.

By “simple,” I mean sometimes “horribly complicated,” of course. And sometimes a little ridiculous.

Somehow, I doubt that people in the US said “livin’ the dream” in the ’50s…

As has been pointed out, chatbots only generate text based on what they have been fed, i.e., “garbage in / garbage out.” So if you push the programs hard enough, they will generate racist, sexist, homophobic, and otherwise awful stuff, because unfortunately that kind of sick and twisted garbage is still out there, somewhere online in a troll’s paradise.

So far, I have asked the program to:

  1. Write a haiku about winter without using the word “winter”
  2. Write a limerick about an Irish baseball player
  3. Write a dialogue between God and Nietzsche (I just had to…)
  4. Imagine what Jean-Paul Sartre and Immanuel Kant would say to each other (see above) but using US ’50s slang
  5. Have Thomas Aquinas and John Locke argue about the existence of God (that one was fun)
  6. Write a 300-word cause-and-effect essay about climate change
  7. Write a 300-word compare-and-contrast essay about the US and Japan
  8. Write a 1,000-word short science fiction story based on Mars
  9. Write a 1,500-word short science fiction story about robots in the style of Philip K. Dick

OK, and the verdict is:

Continue Reading

Chatbots — Still not AI but still dangerous

December 13, 2022
MThomas

[ChatGPT] could teach his daughter math, science and English, not to mention a few other important lessons. Chief among them: Do not believe everything you are told.

https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html

They’re all the rage online. Type in a request for a description of how two historical people who never met would respond to each other had they actually met, and the program will oblige.

They’ll cause all sorts of rage online, too, once the peddlers of incessant false news and innuendo realize what a bonanza they’ve stumbled upon.

You want an image of an event that never really happened?

No problem. A program can generate one for you. We can even call it “art,” for what that’s worth.

No, BIG problem, especially when it convinces the gullible that it DID happen.

2023 will tell 2020 and 2022 to hold their coffee.

Just what we all wanted, right?

Still, chatbots are not (repeat, NOT) true AI. Sorry, Google engineer who watched too much Ghost in the Shell. Chatbots repeat our very human bias. Repeatedly.

As in, there are way too many racist, sexist, xenophobic, homophobic, and transphobic comments online. Full stop.

At a minor level, as a writing instructor, I’d say a student telling a chatbot to write a 600-word comparison-contrast essay is the least of my worries.

For starters, the damn things are probably scouring the Internet right now and “learning” from text on web pages like…uh…this one…

😱
