

Facebook Chatbots Shut Down After Inventing Their Own Language

In a scenario that could easily have come from a science-fiction movie, two Facebook chatbots powered by Artificial Intelligence have been shut down after they started communicating in their own language, one that the researchers working on them could not translate.

Researchers at the Facebook AI Research Lab (FAIR) paired two of their chatbots, named Alice and Bob, with the aim of helping them learn to improve their communication skills. But the experiment took an unexpected and slightly surreal turn when the chatbots seemingly began inventing their own language to communicate with each other.

Nothing to panic about

Facebook defended both the experiment and the subsequent decision to shut down the interaction between Bob and Alice, saying the work marked a valuable step towards “creating chatbots that can reason, converse and negotiate, all key steps in building a personalized digital assistant.”

It’s thought that, because the chatbots received no reward for continuing to converse in English, they quickly developed a shorthand language to complete their task: bartering and trading a number of objects.
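To illustrate that point in miniature, here is a hypothetical sketch, not FAIR's actual code: the item names, values, function names and the language-fidelity term are all invented for the example. The first function scores an agent purely on the value of the deal it strikes, so nothing in the reward depends on the utterances staying close to English; the second shows one common way such drift can be discouraged, by mixing in a term that favours English-like output.

```python
# Hypothetical illustration only -- not FAIR's implementation.

ITEM_VALUES = {"book": 3, "hat": 2, "ball": 1}  # this agent's private valuations

def deal_reward(items_won):
    """Score a negotiation purely by the value of the items secured.

    Nothing here rewards or penalises the words used to reach the deal,
    so an agent optimising this signal has no incentive to keep speaking
    recognisable English.
    """
    return sum(ITEM_VALUES[item] * count for item, count in items_won.items())

def shaped_reward(items_won, english_log_prob, alpha=0.1):
    """One assumed fix: add a term favouring utterances that a fixed English
    language model finds likely, anchoring the agents to human language.
    `english_log_prob` and `alpha` are illustrative quantities.
    """
    return deal_reward(items_won) + alpha * english_log_prob

# Example: the agent won two books and a ball in the negotiated split.
print(deal_reward({"book": 2, "ball": 1}))           # -> 7
print(shaped_reward({"book": 2, "ball": 1}, -10.0))  # -> 6.0
```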

Staff at the FAIR lab said the conversations between the chatbots led to “divergence from human language as the agents developed their own language for negotiating.” Whilst this in itself is not really cause for concern, it is the potential consequences of incidents such as this that concern some researchers in the field.

But caution is necessary

Others have previously warned Facebook founder Mark Zuckerberg of the potential dangers of Artificial Intelligence. Elon Musk, of Tesla, warned that “AI is a rare case where we need to be proactive in regulation instead of reactive because if we’re reactive in AI regulation it’s too late.”

Musk also said: “Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry and then after many years the regulatory agencies are set up to regulate that industry. That in the past has been bad but not something that represented a fundamental risk to the existence of civilisation. AI is a fundamental risk to the existence of civilisation in a way that car accidents, aeroplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals but they were not harmful to society as a whole.”

Whilst Musk has been accused by some of scaremongering for his vocal warnings about the potential dangers of AI, other scientists share his concerns, including Stephen Hawking.

Hawking, speaking at the opening of The Leverhulme Centre for the Future of Intelligence, suggested that whilst AI is sure to bring great benefits, it “will also bring dangers, like powerful autonomous weapons or new ways for the few to oppress the many”. He went on to say: “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which.”

Top image: Facebook engineers pull the plug on AI bots (goodworklabs)