The news that two bots built by Facebook had started speaking a language unknown to us caused panic, but the story is not entirely true
The world of science fiction, from movies to books, is full of robots that refuse to obey their creators and spread panic across the planet. So when researchers at Facebook noticed that two of their bots had begun to speak a language unknown to us, the fear suddenly felt real.
But don’t panic. Much of the coverage has added tension to the story, describing researchers forced to “kill” the bots because they were supposedly out of control. That’s not exactly what happened. Facebook had recently been working on two simple bots designed to chat and negotiate, and to tell the truth they are not even the most capable chatbots ever built, so there is no reason to think they could be dangerous. We should also remember that artificial intelligence and machine learning will soon be present in every area of everyday life, from work to justice and even warfare, and we will have to adapt to this change.
What happened in Facebook’s lab
Specifically, inside Facebook’s labs, researchers were trying to build new chatbots able to interact with users. The idea was to equip companies’ business pages with these chatbots to make customer assistance easier. The researchers started by teaching the two bots about categories of items, such as hats, books and so on. In addition, the bots were taught a specific language appropriate for a business setting. At that point, the bots started to interact with each other through negotiations and games, using a technique called reinforcement learning. It was during these exchanges that the two bots stopped using recognizable phrases and drifted away from human language.
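To make the mechanism concrete, here is a minimal, self-contained sketch of that kind of self-play negotiation. Everything in it, the item set, the tiny vocabulary and the crude reward update, is an assumption made for illustration; it is not Facebook’s actual code, only an example of how reinforcement learning can reward whatever tokens happen to accompany a good deal.

```python
import random

ITEMS = ["book", "hat", "ball"]                               # items on the table
VOCAB = ["i", "want", "the", "book", "hat", "ball", "deal"]   # tiny shared vocabulary


class Negotiator:
    """A toy agent that utters tokens and claims a share of the items."""

    def __init__(self, name):
        self.name = name
        # Private valuation of each item (hidden from the other agent).
        self.values = {item: random.randint(0, 5) for item in ITEMS}
        # Preference weights over tokens, nudged by the reward signal.
        self.token_weights = {tok: 1.0 for tok in VOCAB}

    def speak(self, length=4):
        # Sample an utterance according to the current token preferences.
        weights = [self.token_weights[t] for t in VOCAB]
        return random.choices(VOCAB, weights=weights, k=length)

    def propose(self):
        # Claim each item independently with 50% probability.
        return {item for item in ITEMS if random.random() < 0.5}

    def reinforce(self, tokens, reward, lr=0.1):
        # Crude REINFORCE-style update: tokens used in a rewarding exchange
        # become more likely in the future, whether or not they read like
        # English. This is the ingredient that lets the messages drift.
        for tok in tokens:
            self.token_weights[tok] = max(0.01, self.token_weights[tok] + lr * reward)


def negotiate(a, b, rounds=1000):
    for _ in range(rounds):
        msg_a, msg_b = a.speak(), b.speak()
        claim_a, claim_b = a.propose(), b.propose()
        # A deal is struck only if the two claims do not overlap.
        if claim_a & claim_b:
            reward_a = reward_b = 0.0
        else:
            reward_a = sum(a.values[i] for i in claim_a)
            reward_b = sum(b.values[i] for i in claim_b)
        a.reinforce(msg_a, reward_a)
        b.reinforce(msg_b, reward_b)


if __name__ == "__main__":
    alice, bob = Negotiator("alice"), Negotiator("bob")
    negotiate(alice, bob)
    print("alice now tends to say:", " ".join(alice.speak()))
```

Nothing in this loop scores the messages for grammar or readability, so whatever strings the sampling happens to reward get reinforced, which is exactly the kind of drift the researchers observed.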
Why did this happen
Why did this happen? The answer is quite simple: the experiment failed. The bots do not want to wipe out the human race; because of errors in the machine-learning setup, they simply learned that they could express a concept by repeating words, and those repetitions make what they say incomprehensible to us. The programmers’ mistake was to set winning a simple game between the bots as the objective without requiring them to do it in plain English, so the agents drifted into a language of “their own” in order to win. And to be honest, it is not even the first time something like this has happened: Elon Musk’s OpenAI project ran into similar problems in the past. This only underscores how much progress still has to be made in artificial intelligence before robots can interact naturally with people. In short, for now Terminator remains just a movie.
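As a hypothetical illustration of that objective problem: if the reward scores only the outcome of the game, any string of tokens that wins is exactly as good as fluent English. The short sketch below uses made-up names (reward, english_likelihood) to show how a term anchoring messages to human language could be added; with its weight left at zero, nothing discourages repetitive “bot-speak”.

```python
def english_likelihood(message):
    # Placeholder: a real system might use a language model's log-probability.
    # Here we simply penalize immediate repetitions like "ball ball ball".
    repeats = sum(1 for a, b in zip(message, message[1:]) if a == b)
    return -float(repeats)


def reward(task_score, message, english_weight=0.0):
    # With english_weight == 0 (the situation described above), only winning
    # the negotiation matters and the message can degenerate freely.
    # A positive weight would push the agents back toward readable English.
    return task_score + english_weight * english_likelihood(message)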