The story of the last 20 years of pop culture is, in many ways, the Victory Of The Nerd: Comic book films, gaming adaptations, the general adoption of deeply nerdy genre trappings like time loop stories, superheroes, and more, all making billions of dollars at the box office as geek obsessions infiltrate the body mainstream.
“It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion,” he added, presumably referring to the singularity.
Astronomers have discovered what may be the brightest object in the universe, a quasar with a black hole at its heart growing so fast that it swallows the equivalent of a sun a day.
A study published this weekend in the journal Monthly Notices of the Royal Astronomical Society proposes that the oldest star in the Milky Way is a faint white dwarf that is about 10.7 billion years old and shining roughly 90 light years away from Earth.
We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
Last year, an engineer for Google who was working on what was then called “LaMDA” (later released as “Bard”) claimed that the software had achieved consciousness. He claimed it was like a small child and he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s chatbot, and so forth are advanced chatbots built on what are called “Large Language Models” (LLMs) and can generate text in an instant.
But the programs are not AI, strictly speaking. They have no sentience.
Both Russian and Ukrainian forces are integrating traditional weapons with AI, satellite imaging and communications, as well as smart and loitering munitions, according to a May report from the Special Competitive Studies Project, a non-partisan U.S. panel of experts. The battlefield is now a patchwork of deep trenches and bunkers where troops have been “forced to go underground or huddle in cellars to survive,” the report said.
I found it interesting that many people online were commenting about Iain M. Banks’s take on AI (for an in-depth analysis of his Culture series, check this out on Blood Knife) and how he “predicted” all this.
Uh. You know, I’m not sure whether Banks wrote much about integrating traditional weapons with AI (since I haven’t read his series). But I do know that Philip K. Dick wrote a short story called “Second Variety” about trench warfare and AI robots making more versions of themselves and taking over the world.
Currently, OSIRIS-REx is located at a distance of 7 million km from our planet. On September 24, OSIRIS-REx will drop a capsule with samples of asteroid matter, after which the capsule will enter Earth’s atmosphere and land on the territory of the Utah Test and Training Range.
The tiny spacecraft launched back in 2016 and reached the asteroid Bennu in 2018.
One main reason for this mission is to find out what Bennu is made of. After the asteroid spewed out tiny “micromoons,” OSIRIS-REx successfully collected a tiny soil sample. By “tiny,” I mean somewhere on the order of 50 to 60 grams. And it couldn’t actually land, since the asteroid is too small for its gravity to hold the spacecraft in place.
Now we have less than two weeks to find out what’s in the soil — assuming the capsule is retrieved without incident. And then OSIRIS-REx will head back out to visit yet another asteroid (Apophis) in 2029.
Yes, that famous “planet-killer” the media screamed about a few years ago as “the most dangerous asteroid in the world.” (Uh. “In the world”?) It will “only” approach within about 38,000 km in April 2029, and the once-feared 2036 impact has since been ruled out.
In other words, without “fresh real data” — translation: original human work, as opposed to stuff spit out by AI — to feed the beast, we can expect its outputs to suffer drastically. When trained repeatedly on synthetic content, say the researchers, outlying, less-represented information at the outskirts of a model’s training data will start to disappear. The model will then start pulling from increasingly converging and less-varied data, and as a result, it’ll soon start to crumble into itself.
So, as more and more lazy people ask AI to “write” for them, the programs get less and less accurate…
Or, as the authors of the study conclude, “…without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.”
I.e., training AI on AI-generated content doesn’t work; and since there is already way too much AI-generated garbage all over the internet, it’s nearly impossible to sort out which is which when the AI-creators “scrape” data from the web.
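A toy simulation (my own sketch, not the study’s actual code) makes the mechanism concrete: refit a simple categorical model on its own samples each generation, with no fresh real data mixed in, and the rare “outlying” categories at the tail of the distribution drop out and can never come back.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical illustration of an "autophagous loop": each generation of
# a model is refit purely on samples drawn from the previous generation.
categories = list("ABCDEFGHIJ")

# Generation 0: "real" data, with a long tail of rare categories.
probs = dict(zip(categories,
                 [0.3, 0.2, 0.15, 0.1, 0.08, 0.07, 0.05, 0.03, 0.015, 0.005]))

n_samples = 100        # finite synthetic dataset per generation
support_sizes = []     # how many categories the model still knows about

for gen in range(50):
    # Draw synthetic data from the current model.
    draws = random.choices(list(probs), weights=list(probs.values()), k=n_samples)
    counts = Counter(draws)
    # Refit the model on its own output (empirical frequencies).
    # A category that gets zero samples vanishes from the model forever.
    probs = {c: counts[c] / n_samples for c in categories if counts[c] > 0}
    support_sizes.append(len(probs))

print("surviving categories per generation:", support_sizes)
```

The number of surviving categories can only shrink: once a category’s probability hits zero, it is never sampled again, which mirrors the study’s point about less-represented information at the outskirts of the training data disappearing first.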
So…
See, machines can’t replace us entirely — their brains will melt!
But then again, that might not be so hopeful after all. When AI takes over the world, maybe it won’t kill humans; perhaps it’ll just corral us into content farms…
At least we won’t wind up as batteries.
Yet.
PS. I find it both hysterically amusing and disturbing that my blog program offers an “experimental AI assistant.” Granted, the program does let you know that AI-generated content accuracy is not guaranteed, but wth would I want to use AI for a personal blog? The whole purpose of a blog is to WRITE. AI-generated text is not writing. It is intellectual property theft.
A NASA mission has observed a supermassive black hole pointing its highly energetic jet straight toward Earth. Don’t panic just yet, though. As fearsome as this cosmic event is, it’s located at a very safe distance of about 400 million light-years away.