We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
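The "tally the indicators" idea can be sketched as a toy scoring function. The property names below are illustrative placeholders loosely based on the theories the report surveys, not the report's actual checklist, and the equal weighting is my own simplifying assumption:

```python
# Toy sketch of the "indicator properties" idea: the more indicators a
# system plausibly satisfies, the more seriously we should take claims
# of consciousness. Property names and equal weights are illustrative
# assumptions, not the report's actual list.

INDICATORS = {
    "recurrent_processing",
    "global_workspace",
    "higher_order_representation",
    "predictive_processing",
    "agency",
    "embodiment",
}

def indicator_score(system_properties: set[str]) -> float:
    """Fraction of indicator properties the system plausibly satisfies."""
    satisfied = system_properties & INDICATORS
    return len(satisfied) / len(INDICATORS)

# A chatbot that only does something prediction-like scores low.
chatbot = {"predictive_processing"}
print(indicator_score(chatbot))  # 1 of 6 indicators satisfied
```

A low score doesn't prove anything either way; the point is just that the framework gives a graded answer instead of a yes/no verdict.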
https://theconversation.com/why-chatgpt-isnt-conscious-but-future-ai-systems-might-be-212860
Last year, a Google engineer working on the system then called “LaMDA” (the model later behind “Bard”) claimed that the software had achieved consciousness. He said it was like a small child and that he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s Ernie, and so forth are advanced chatbots built on what are called large language models (LLMs), and they can generate text in an instant.
But the programs are not AI, strictly speaking. They have no sentience.
The classic way of probing machine intelligence (Alan Turing’s so-called “Imitation Game,” which was a thought experiment rather than an actual machine; I’ve written about this before) no longer works. Chatbots have already pushed the “imitate a human conversation” game well past what Turing thought possible in the 1940s and 1950s.
So how to determine true AI?
The researchers compare several computational theories, each too complicated for me to adequately summarize in a single blog post, including:
Computational higher-order theories
Predictive processing (also called “predictive coding”)
ChatGPT is pretty impressive at something that superficially resembles predictive processing: it generates text by predicting, over and over, the most likely next token.
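That next-token idea can be illustrated with a toy bigram model. This is an enormous simplification of an LLM (real models use neural networks over vast corpora), but the core task is the same: given what came before, predict what comes next.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a
# training text, then predict the most frequent follower. A drastic
# simplification of an LLM, used only to illustrate next-token
# prediction.

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    followers = model.get(word.lower())
    if not followers:
        return None  # word never seen as a prefix
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Nothing in that loop chooses, wants, or understands anything; it just counts and predicts, which is the author’s point in what follows.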
But it utterly fails at agency. It has no ability to choose. It has no goals. It is not sentient.
It is not AI.
Sorry.
It’s also inherently racist, reproducing the biases in its training data, and arguably illegal: it was trained by scraping copyrighted text, images, and videos from people who never gave permission and were never paid, while the companies doing the scraping were handed millions and millions of dollars to keep using that online data.
But it is not AI.
But is AI coming?
I really, honestly, hope not. People obviously don’t use enough of their brains as it is.