“It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion,” he added, presumably referring to the singularity.
We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
Last year, an engineer at Google who was working on the conversational model then called “LaMDA” (which later underpinned “Bard”) claimed that the software had achieved consciousness. He likened it to a small child and said he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s Ernie Bot, and similar advanced chatbots are built on what are called “large language models” (LLMs) and can generate fluent text almost instantly.
But strictly speaking, the programs are not intelligent in any meaningful sense. They have no sentience.
Using blobs of skin cells from frog embryos, scientists have grown creatures unlike anything else on Earth, a new study reports. These microscopic “living machines” can swim, sweep up debris, and heal themselves after a gash.