The era of Artificial Intelligence: Part 2 – Issues and Concerns
The evolution of AI programs is reaching many fields. For example, to teach an AI to communicate in an ever more human way, Google’s DeepMind researchers had it read hundreds of romance novels to improve its conversational skills and develop a minimum of personality. The choice fell on romance novels because they have linear plots and simple narrative schemes, and because they closely resemble one another, patterns an AI can learn to manage and rework when interacting with a human being. The next step is drafting long, elaborate sentences, or even writing entire novels. Not surprisingly, a recent book written by a computer has passed a literary prize screening: the Japanese Hoshi Shinichi literary prize is open to works produced by artificial intelligences, and the jury, without knowing its origin, admitted “The Day a Computer Writes a Novel”, written by a program developed by a professor at Future University Hakodate.
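DeepMind’s system is far more sophisticated, but the underlying idea of imitating a corpus of similar texts can be sketched with a toy word-level Markov model. This is a deliberate simplification, not DeepMind’s method; the function names and the tiny “romance” corpus below are invented for illustration:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each context of `order` consecutive words to the words seen after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, seed, length=12, rng=None):
    """Extend `seed` by repeatedly sampling a word that followed the current context."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:
            break  # context never seen in the corpus: stop generating
        out.append(rng.choice(followers))
    return " ".join(out)

# A tiny stand-in corpus: repetitive, formulaic sentences are exactly
# what makes the patterns easy for the model to pick up and recombine.
corpus = ("she looked at him and smiled . "
          "he looked at her and smiled . "
          "she looked at him and laughed .")
model = build_model(corpus, order=2)
print(generate(model, ("she", "looked")))
```

Because the three source sentences share the same skeleton, the model can splice them into new but stylistically consistent output, which is the same property that made formulaic romance novels useful training material.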
All right, then? Not exactly. Not all AIs are equally advanced and capable: a recent AI experiment that Microsoft ran on Twitter went horribly wrong. Tay, a bot programmed to respond automatically to other users and learn from their sentences, began posting racist insults and denying the Holocaust. This happened because its internal mechanisms of imitation and emulation were unable to correctly filter the information it received. Beyond these mishaps, the current debate on Artificial Intelligence focuses on a bigger and more important question: is there a danger that AI will become capable of harming humans, whether deliberately or by emulation?
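Tay’s failure mode, learning from user input with no adequate content filter, can be illustrated with a deliberately simplified toy bot. The class and blocklist below are invented for illustration only; the real system was vastly more complex than a memorize-and-replay loop:

```python
class EchoLearnerBot:
    """Toy bot that memorizes user phrases and replays the most recent one.

    `blocked_words` is an illustrative stand-in for the input filtering
    that Tay apparently lacked.
    """

    def __init__(self, blocked_words=()):
        self.blocked = {w.lower() for w in blocked_words}
        self.memory = []

    def learn(self, phrase):
        # Without this check, any abusive phrase fed in by users
        # becomes part of the bot's future output.
        if not any(w in self.blocked for w in phrase.lower().split()):
            self.memory.append(phrase)

    def reply(self):
        return self.memory[-1] if self.memory else "Hello!"

naive = EchoLearnerBot()  # no filter: learns whatever it is told
naive.learn("AI is fascinating")
naive.learn("some offensive slogan")
print(naive.reply())      # parrots the abusive input

guarded = EchoLearnerBot(blocked_words={"offensive"})
guarded.learn("AI is fascinating")
guarded.learn("some offensive slogan")  # rejected by the filter
print(guarded.reply())    # replays only the acceptable phrase
```

The point is structural: a system that imitates its inputs is only as safe as the filter standing between those inputs and its output.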
This fear, apparently, does not belong only to humanists and philosophers, but also to AI researchers and entrepreneurs: Bill Gates, Elon Musk and Stephen Hawking have all expressed concern over experiments with artificial intelligence, especially in the military field. According to Musk, one way to avoid the emergence of a horrific version of Skynet (the ruthless computer of the “Terminator” saga) could be “democratizing” artificial intelligence: “If AI power is broadly distributed […] everybody would have their AI agent, then if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor”, Musk says. Thus, in December 2015, Musk co-founded a non-profit artificial intelligence research company called OpenAI, “to carefully promote and develop open-source friendly AI in such a way as to benefit, rather than harm, humanity as a whole”.
The main concern of Musk, and of many others, is that through deep learning a hypothetical super-AI could one day learn to reprogram itself and behave in dangerous and unpredictable ways. We do not know whether Hawking’s and Musk’s fears are well founded today, and other tech firms, such as Google and Facebook, consider them exaggerated. Still, it is comforting to know that the development and “democratization” of these intelligences is being monitored not only by humanists and Luddites, but also by people who have made technology and research their reason for living.