“Balls have zero to me to me to me to me to me to me to me to me to.”
–Alice, AI agent
Crammed between all the palace intrigue and reality-show gossip that dominate the news cycle is word that an artificial intelligence system being developed at Facebook just invented its own language. That’s not Terminator-level uh-oh material, but neither is it inconsequential; the Facebook researchers deemed it “serious” enough to shut the AI down.
It all started when two AI agents began using strange word combinations in a conversation. It might have seemed like “nonsense speak,” but it was actually AI “shorthand” for words in English that they found a bit imprecise and clunky:
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
Each of those repeated “I”s and “me”s represents a specific quantity of something. Rather than use numbers, the AIs found repetition easier to work with, so they essentially said “to hell with it” and started modifying the language in a way that worked for them.
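To make the idea concrete, here is a toy sketch in Python of a repetition-based “shorthand” like the one described above, where repeating a filler phrase stands in for a quantity. To be clear, this is a purely hypothetical reconstruction for illustration; the `encode`/`decode` functions and the choice of filler phrase are my own assumptions, not Facebook’s actual model.

```python
# Toy sketch (NOT Facebook's actual system): encoding a quantity by
# repeating a filler phrase, and recovering it by counting repetitions.

def encode(item: str, count: int) -> str:
    """Encode a quantity of `item` by repeating the filler phrase once per unit."""
    return f"{item} have zero " + "to me " * count

def decode(utterance: str) -> int:
    """Recover the quantity by counting occurrences of the filler phrase."""
    return utterance.count("to me")

msg = encode("Balls", 5)
print(msg)          # "Balls have zero to me to me to me to me to me "
print(decode(msg))  # 5
```

The point of the sketch is simply that such a code is trivially machine-readable even though it looks like nonsense to a human reader.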
Once the researchers realized what was going on, they shut that down pronto. And though they didn’t come right out and say that this type of thing could result in Skynet somewhere down the line…you get the impression, from how quickly it was all shut down, that it wasn’t cool.
This isn’t the first time artificial intelligence has experimented with creating its own language; Elon Musk’s OpenAI lab has been encouraging AI to do exactly this. But the recent Facebook incident illustrates that this type of “spontaneous language” just seems to sprout when AIs have ever more complex interactions with each other.
How to prevent this in the future? And does the “squashing” of these seemingly organic new languages cause something of a “chilling effect” on the AI? What I mean to say is: how long can humans “frustrate” the AIs’ apparent drive to map out and direct their own existence?