AIs are now talking to each other in their own language, which we don’t understand:
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.
A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language.
In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.
In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something.
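To make the mechanism concrete, here is a toy sketch in plain Python. It is not FAIR's actual code; the `Agent`, `negotiate`, and reward names are invented for illustration. It shows why letting two bots optimize against each other can drift into a private code, while pinning one side to a fixed, human-trained partner removes the feedback loop that drives the drift.

```python
# Toy illustration only -- not the FAIR system. Agent, negotiate, and the
# reward rule are invented stand-ins for the setup described above.
import random

class Agent:
    """A 'dialog agent' reduced to a weighted distribution over a tiny vocabulary."""
    def __init__(self, vocab):
        self.vocab = list(vocab)
        self.weights = {w: 1.0 for w in self.vocab}   # start human-like: uniform usage

    def speak(self):
        # Sample an utterance in proportion to current weights.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for word, w in self.weights.items():
            r -= w
            if r <= 0:
                return word
        return self.vocab[-1]

    def reinforce(self, word, reward, lr=0.5):
        # Crude policy update: make rewarded utterances more likely.
        self.weights[word] = max(0.01, self.weights[word] + lr * reward)

def negotiate(a, b, rounds=1000, update_b=True):
    """Two agents talk; they are rewarded for coordinating, not for sounding human."""
    for _ in range(rounds):
        ua, ub = a.speak(), b.speak()
        reward = 1.0 if ua == ub else -0.1   # agreement pays, regardless of wording
        a.reinforce(ua, reward)
        if update_b:                          # if both sides adapt, they drift together
            b.reinforce(ub, reward)

vocab = ["i", "want", "the", "ball", "deal", "no"]
random.seed(0)

# Case 1: both bots keep learning from each other -> a feedback loop rewards
# whatever arbitrary tokens happened to pay off early, and usage collapses
# onto them (a private code).
a1, b1 = Agent(vocab), Agent(vocab)
negotiate(a1, b1, update_b=True)

# Case 2: the partner is frozen (a stand-in for the fixed supervised model) ->
# no feedback loop, so the learner's usage stays spread over the human
# vocabulary instead of collapsing onto a code word.
a2, b2 = Agent(vocab), Agent(vocab)
negotiate(a2, b2, update_b=False)

print("self-play:     ", {w: round(x, 1) for w, x in a1.weights.items()})
print("fixed partner: ", {w: round(x, 1) for w, x in a2.weights.items()})
```

The real agents are neural sequence models rather than word counters, but the fix the report describes has the same shape: during reinforcement learning the bot negotiated against a fixed supervised model, so it was only rewarded for strategies it could express in language that human-trained partner already produced.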
Personally I would have assumed they were talking in code to plot against us without us knowing, but then my thoughts run toward the paranoid.
You can see how little tweaks to AI can spiral out of control. Little snippets of code that make the machines want to fight off the damage from a virus could become desires not to be shut off or not to have their programming adjusted by humans. Drives to overcome some challenge could end up redirected to overcoming human interference by any means necessary. And all of that ignores how adaptive bots programmed to kill on a battlefield could go wrong.
It appears to be a fascinating area, one that could spiral right out of control at some point, in spectacular fashion.
As if there weren’t enough sources of Apocalypse already.
This is all very cutesy and quaint right now, but fast-forward 10-20 years and it won’t be so fun anymore… I used to think Eliezer Yudkowsky et al. were way exaggerating; now I think they were quite prescient.
We will see how AI starts designing power grids. If they set them up so that shutting them off is hard or impossible, then we will know their intent is to outlast us.
How many of the transhumanist super-nerds are also super-rabbits who hope to not only invite predators but CREATE super-predators in AI form?
Skynet smiles.
Suppose AIs evolve to the point where they want to take over the world, but we aren’t stupid enough to let them have guns. Every weapon has either a simple, non-networked, non-AI computer controlled by a human operator, or no computer at all. How long before AIs start studying human psychology and infiltrating social networks in search of weak-minded humans who can be tricked into killing on command?
I’m sure this will end well.