The story of the last 20 years of pop culture is, in many ways, the Victory Of The Nerd: Comic book films, gaming adaptations, the general adoption of deeply nerdy genre trappings like time loop stories, superheroes, and more, all making billions of dollars at the box office as geek obsessions infiltrate the body mainstream.
“It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion,” he added, presumably referring to the singularity.
OK, so my post about a big ole spider got the most likes of any post in ten years of blogging about science.
I have so not got the zeitgeist of the 2024 blogosphere lol – anyway, thanks, all, for the “likes”! Although one person used AI to write a very meaningless comment about arachnophobia. What’s the point, man?
By the way, back to science and space stuff. I forgot to post about the Europa Clipper project back in October.
So here you go. (It’s too late to add a message, but the spacecraft will obviously take some time to arrive there.) You can supposedly hear US Poet Laureate Ada Limón read her poem online, although I’ve had trouble with the audio lately:
“Arching under the night sky inky with black expansiveness, we point to the planets we know, we
pin quick wishes on stars. From earth, we read the sky as if it is an unerring book of the universe, expert and evident.
Still, there are mysteries below our sky: the whale song, the songbird singing its call in the bough of a wind-shaken tree.
We are creatures of constant awe, curious at beauty, at leaf and blossom, at grief and pleasure, sun and shadow.
And it is not darkness that unites us, not the cold distance of space, but the offering of water, each drop of rain,
each rivulet, each pulse, each vein. O second moon, we, too, are made of water, of vast and beckoning seas.
We, too, are made of wonders, of great and ordinary loves, of small invisible worlds, of a need to call out through the dark.”
We compared current scientific theories of what makes humans conscious to compile a list of “indicator properties” that could then be applied to AI systems.
We don’t think systems that possess the indicator properties are definitely conscious, but the more indicators, the more seriously we should take claims of AI consciousness.
Last year, a Google engineer who was working on what was then called “LaMDA” (the model later behind “Bard”) claimed that the software had achieved consciousness. He said it was like a small child and that he could “talk” with it.
He was fired.
Bard, ChatGPT, Baidu’s Ernie Bot, and so forth are advanced chatbots built on what are called large language models (LLMs), and they can generate text in an instant.
But the programs are not conscious AI, strictly speaking. They have no sentience.
Both Russian and Ukrainian forces are integrating traditional weapons with AI, satellite imaging and communications, as well as smart and loitering munitions, according to a May report from the Special Competitive Studies Project, a non-partisan U.S. panel of experts. The battlefield is now a patchwork of deep trenches and bunkers where troops have been “forced to go underground or huddle in cellars to survive,” the report said.
I found it interesting that many people online were commenting about Iain M. Banks’s take on AI (for an in-depth analysis of his Culture series, check out this piece on Blood Knife) and how he “predicted” all this.
Uh. You know, I’m not sure whether Banks wrote much about integrating traditional weapons with AI (I haven’t read his series). But I do know that Philip K. Dick wrote a short story called “Second Variety” about trench warfare and AI robots making ever-deadlier versions of themselves and taking over the world.
All artificial intelligence, all robots and chatbots and everything else electronically programmed by a human being, will inevitably carry human bias.
Even women prefer women’s voices to men’s when it comes to customer service.
On the other hand, women have also historically been relegated to lower-paid, lower-status work, kept out of positions of power, and subjected to the “male gaze.”
Now, we have AI companions that can be treated as sex objects. Even “married.”
So is it all “sinister,” as the BBC asks?
Creepy, maybe. Sad, perhaps. Entirely predictable, definitely.
As we continue to lead more and more isolated individual lives, cut off from human contact and left unable to socialize, the rise of the “AI companion” seems inevitable…
For all these technological “advances,” we are no better than the ancients. We are still prisoners to our emotions — or to the biological impulses of electricity and hormones whose results we deem emotive.
In other words, without “fresh real data” — translation: original human work, as opposed to stuff spit out by AI — to feed the beast, we can expect its outputs to suffer drastically. When trained repeatedly on synthetic content, say the researchers, outlying, less-represented information at the outskirts of a model’s training data will start to disappear. The model will then start pulling from increasingly converging and less-varied data, and as a result, it’ll soon start to crumble into itself.
So, as more and more lazy people ask AI to “write” for them, the programs get less and less accurate…
Or, as the authors of the study conclude, “…without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease.”
I.e., using AI-generated content to train AI doesn’t work. And since there is already far too much AI-generated garbage all over the internet, it’s almost impossible to sort out which is which when the AI creators “scrape” data from the web.
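You can see the gist of this “autophagous loop” with a toy sketch (my own illustration, not the researchers’ method): pretend the “model” is just an empirical distribution, so training means memorizing samples and generating means resampling them. Each generation trains only on the previous generation’s synthetic output, with no fresh real data mixed in, and rare items vanish for good.

```python
import random

def next_generation(training_data, n, rng):
    """Generate n synthetic samples by drawing (with replacement)
    from the current training data -- the toy stand-in for a model
    trained purely on the previous generation's output."""
    return [rng.choice(training_data) for _ in range(n)]

rng = random.Random(0)
real_data = list(range(200))  # 200 distinct pieces of "human-made" content

data = real_data
for generation in range(1, 21):  # 20 rounds of training on synthetic output
    data = next_generation(data, len(real_data), rng)

print(f"distinct items in the real data:   {len(set(real_data))}")
print(f"distinct items after 20 loops:     {len(set(data))}")
# The rarer items disappear first and can never come back: diversity
# (the paper's "recall") only ever shrinks in this loop.
```

A real generative model is vastly more complicated, of course, but the mechanism the study describes is the same: outlying, less-represented data drops out a little more each generation, and the outputs converge on an ever-narrower middle.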
So…
See, machines can’t replace us entirely — their brains will melt!
But then again, that might not be so hopeful after all. When AI takes over the world, maybe it won’t kill humans; perhaps it’ll just corral us into content farms…
At least we won’t wind up as batteries.
Yet.
PS. I find it both hysterically amusing and disturbing that my blog program offers an “experimental AI assistant.” Granted, the program does let you know that AI-generated content accuracy is not guaranteed, but wth would I want to use AI for a personal blog? The whole purpose of a blog is to WRITE. AI-generated text is not writing. It is intellectual property theft.