We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators a system displays, the more seriously we should take claims of AI consciousness.
Last year, an engineer for Google who was working on what was then called “LaMDA” (later released as “Bard”) claimed that the software had achieved consciousness. He claimed it was like a small child and he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s Ernie, and so forth are advanced chatbots built on what are called “large language models” (LLMs), and they can generate text in an instant.
But the programs are not true AI, strictly speaking. They have no sentience.
Both Russian and Ukrainian forces are integrating traditional weapons with AI, satellite imaging and communications, as well as smart and loitering munitions, according to a May report from the Special Competitive Studies Project, a non-partisan U.S. panel of experts. The battlefield is now a patchwork of deep trenches and bunkers where troops have been “forced to go underground or huddle in cellars to survive,” the report said.
I found it interesting that many people online were commenting about Iain M. Banks’s take on AI (for an in-depth analysis of his Culture series, check out this piece on Blood Knife) and how he “predicted” all this.
Uh. You know, I’m not sure whether Banks wrote much about integrating traditional weapons with AI (I haven’t read his series). But I do know that Philip K. Dick wrote a short story, “Second Variety,” about trench warfare and AI robots making more versions of themselves and taking over the world.
Currently, OSIRIS-REx is about 7 million km from our planet. On September 24, it will drop a capsule containing samples of asteroid material; the capsule will then enter Earth’s atmosphere and land at the Utah Test and Training Range.
The spacecraft launched back in 2016 and reached the asteroid Bennu in 2018; it departed with its sample in 2021.
One main reason for this mission is to find out what Bennu is made of. After the asteroid spewed out tiny “micromoons,” OSIRIS-REx successfully collected a small soil sample. By “small,” I mean somewhere around 50 to 60 grams. And it couldn’t actually land, since the asteroid is too small to have enough gravity to hold the spacecraft.
Now we have less than two weeks to find out what’s in the soil — assuming the capsule is retrieved without incident. And then OSIRIS-REx will head back out to visit yet another asteroid (Apophis) in 2029.
Yes, that famous “planet-killer” the media screamed about a few years ago as “the most dangerous asteroid in the world.” (Uh. “In the world”?) It will “only” approach within about 38,000 km in April 2029; the once-feared 2036 impact has since been ruled out.
In other words, without “fresh real data” — translation: original human work, as opposed to stuff spit out by AI — to feed the beast, we can expect its outputs to suffer drastically. When trained repeatedly on synthetic content, say the researchers, outlying, less-represented information at the outskirts of a model’s training data will start to disappear. The model will then start pulling from increasingly converging and less-varied data, and as a result, it’ll soon start to crumble into itself.
So, as more and more lazy people ask AI to “write” for them, the programs get less and less accurate…
Or, as the authors of the study conclude, “…without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.”
In other words, training AI on AI-generated content doesn’t work, and since there is already way too much AI-generated garbage all over the internet, it’s nearly impossible for the AI-makers to sort out which is which when they “scrape” data from the web.
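The “autophagous loop” the researchers describe is easy to demonstrate with a toy simulation (my own sketch, not the study’s code): fit a simple model to some data, generate “synthetic” data from the fit, refit on that output, and repeat. With a finite sample each generation, the rare outlying values get lost first and the distribution steadily narrows until the model has collapsed into itself.

```python
import random
import statistics

def model_collapse_demo(generations=500, sample_size=20, seed=42):
    """Toy autophagous loop: fit a Gaussian to data, sample new
    'synthetic' data from that fit, refit, repeat. Track the spread."""
    random.seed(seed)
    # Generation 0: "fresh real data" drawn from a standard normal.
    data = [random.gauss(0.0, 1.0) for _ in range(sample_size)]
    spreads = [statistics.stdev(data)]
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        # The next generation trains only on the previous model's output.
        data = [random.gauss(mu, sigma) for _ in range(sample_size)]
        spreads.append(statistics.stdev(data))
    return spreads

spreads = model_collapse_demo()
print(f"gen 0 spread: {spreads[0]:.3f}")
print(f"gen 500 spread: {spreads[-1]:.2e}")
```

Run it and the spread shrinks by orders of magnitude: the “diversity (recall)” loss the study’s authors describe, in miniature. Real LLM training is vastly more complicated, of course, but the statistical trap is the same.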
So…
See, machines can’t replace us entirely — their brains will melt!
But then again, that might not be so hopeful after all. When AI takes over the world, maybe it won’t kill humans; perhaps it’ll just corral us into content farms…
At least we won’t wind up as batteries.
Yet.
PS. I find it both hysterically amusing and disturbing that my blog program offers an “experimental AI assistant.” Granted, the program does let you know that AI-generated content accuracy is not guaranteed, but wth would I want to use AI for a personal blog? The whole purpose of a blog is to WRITE. AI-generated text is not writing. It is intellectual property theft.
A NASA mission has observed a supermassive black hole pointing its highly energetic jet straight toward Earth. Don’t panic just yet, though. As fearsome as this cosmic event is, it’s located at a very safe distance of about 400 million light-years away.
Or, how I learned to stop worrying and love the bomb…
“During the adaptation, Stanley ran into a wall: it was impossible to make a successful film about the end of mankind since nobody, himself included, would want to see it. The answer was satire…”
Dr. Strangelove was itself an adaptation of a novel (Peter George’s Red Alert). So I wonder how adapting an adaptation to the stage will work.
How many new lines will they allow? Peter Sellers basically ad-libbed everything, and Kubrick rewrote the script to match the ad-libs.
(Btw I hadn’t realized the film won a Hugo Award…along with many other awards…which goes to show how little we actually need Hollywood studios to get great stories in the end…)
This is the first time that has been achieved using human material, although the embryos are not truly “synthetic,” since the starting material was cells cultured from a traditional embryo in the laboratory.
Great, but…
She has already developed synthetic mouse embryos with evidence of a developing brain and beating heart.
Come on, BBC. I think you can see where this is going…
Meanwhile, scientists in China have implanted synthetic monkey embryos into female monkeys, although all the pregnancies failed.
Yep. Straight to the monkey house.
Seriously, did scientists actually think this was not going to cause a whole lot of people to get upset all over again?
Natural embryo (top), synthetic embryo (bottom). They look pretty similar…
This may indeed be a good way to study infertility causes and how embryos develop, but even the possibility of creating an embryo from a stem cell should have set off warning bells. 14-day limit or not, somebody’s going to get really tempted to do something else with them…
I’m thinking up all sorts of SciFi stories from this…
In a few billion years, our aging Sun will run out of hydrogen fuel in its core and begin to swell, eventually engulfing Mercury, Venus, and probably Earth itself. Known as the red giant phase, this is a normal step in a mid-sized star’s life cycle, when it swells to hundreds of times its usual size. There are plenty of red giants in the night sky, but astronomers have never caught one in the act of swallowing its planets — until now.
Of all the asteroids they modeled, the one with the largest risk of impact was a kilometer-wide asteroid known as 1994 PC1. Over the next thousand years, the probability that 1994 PC1 will cross within the orbit of the Moon is a paltry 0.00151%, hardly worth worrying about.
Thanks to Glen Hill over at Engagin’ Science (formerly Scientia, which apparently was far too Latin- and science-esque for search engines to handle) for bringing this (not-so Earth-shattering) info to my attention.
Sorry, folks. Hollywood was once again wrong (sigh).