We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
https://theconversation.com/why-chatgpt-isnt-conscious-but-future-ai-systems-might-be-212860
Last year, a Google engineer working on what was then called “LaMDA” (later released as “Bard”) claimed that the software had achieved consciousness. He said it was like a small child and that he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s Ernie Bot, and similar advanced chatbots are built on large language models (LLMs) and can generate fluent text almost instantly.
But the programs are not conscious, strictly speaking. They have no sentience.