In the control of artificial intelligence, the future is at stake

Advances in artificial intelligence (AI) and biotechnology, amplified in the popular imagination by transhumanist discourse, have made the governance of technology an inescapable item on the political agenda. Perhaps it no longer sounds melodramatic to say that it is a matter in which the future is at stake.

We continue, however, with institutions and regulatory systems that, at best, are adequate for the technology of the third industrial revolution (the digital and information revolution), but obsolete for regulating the technologies of the fourth: the convergence of technological systems, particularly AI and networks of smart systems, robotics, the internet of things, new materials, nanotechnology and biotechnology. This revolution, according to leading analysts, has already begun.

As the philosopher Luciano Floridi explains in his book The Fourth Revolution, the challenge before us is not so much technological innovation as such, but rather the one posed by the governance of the digital itself. However, much of society does not seem to take this problem very seriously. Some legislators and experts are aware of the magnitude of the challenge, but there are reasonable doubts that they can exert decisive influence at the legal and institutional level with the urgency that would be required.

Is there really such a thing as artificial intelligence?

Until now, all the achievements in the field of artificial intelligence have come from the development of what is known as "narrow" or "specific" artificial intelligence: that is, from the creation of computer systems that display a great capacity, even greater than the human one, to perform very specific and well-defined tasks. For example, playing a game with fixed rules (chess, go, checkers, video games), answering general-knowledge questions, making precise medical diagnoses (infectious diseases, cancers, personalized medicine), recognizing faces and other images, processing and interpreting the human voice, or translating from one language to another.

In fact, a substantial part of what we now call artificial intelligence consists of data mining systems, so called because they are capable of analyzing massive amounts of data and extracting previously unknown patterns from them, which we might consider new knowledge about that data.

As impressive as these achievements are, these technologies fall short of the versatility and flexibility of human intelligence. The smartest systems we have today cannot be used effectively for tasks other than those for which they were programmed. There are those who think that we should not even call them intelligent, since the only intelligence that appears in them is that of the human programmer or that of human beings in whose social context these systems fulfill some function.

It is often said that a machine is intelligent when it is capable of performing tasks that we assume require intelligence. This is an operational definition, since it characterizes artificial intelligence by its results. However, the characterization of intelligence itself is an old problem that remains under discussion. The question is not easy to settle, so it is not surprising that there is also no agreement on how to define artificial intelligence.

Let us accept, however, that in a not merely metaphorical sense we can speak of artificial intelligence. Should we then fear the creation of an artificial general intelligence (AGI)? Will we have super-intelligent machines that take control of the entire planet, or will we be able to control them ourselves? These questions are often repeated when the future of AI comes up in the media and in popular books, and I think they deserve to be taken seriously.

Artificial intelligence is already a challenge

It should not be forgotten that, regardless of whether the future development of an intelligence superior to our own could endanger the survival of our species, what already constitutes a challenge for safeguarding people's rights are certain applications of AI whose effects are visible today: the use of our personal data by AI systems belonging to large technology companies, whose power is in turn growing rapidly, or the biases and opacity of the algorithms used to make decisions that matter for people's lives, such as hiring in companies or the granting of bank loans.

Special mention should be made of the dangers of using AI in facial identification, in the pursuit of criminals and crime prevention, in the surveillance and repression of political dissidents, in the creation of autonomous weapons, and in the proliferation of cyberattacks, fake news and political destabilization through disinformation.

Let us also say, so as not to leave a completely negative impression, that AI is proving to be a very effective instrument in the prosecution of financial crimes, in protecting people's safety, in promoting biomedical progress, and in achieving greater energy efficiency and environmental protection.

I believe that, when analyzing the possible consequences of artificial intelligence, both favorable and unfavorable, discussing whether it is genuine intelligence, similar to the human kind and possibly conscious or not, diverts the focus from the real problem.

What should concern us now, it seems to me, is not whether we will be able to create an intelligence similar or superior to the human one, but what the machines we create will be able to do to us in the future, if they have the capacity to make decisions that are treated in practice as unappealable in their authority. It is not how these machines think that matters, but how they act, since they will be agents with a certain autonomy, and, above all, how we insert them into our social order.

What is relevant in all this is whether human beings will accept, without supervision, the decisions these machines make, as well as the consequences those decisions may have on our lives, especially if human beings themselves give up control.

In short, we need to promote institutions and procedures that help defend citizens' rights against the potential risks of artificial intelligence, such as the right to privacy, and to start thinking about the requirements that would be essential for effective control of AI, because, contrary to what some tell us, there is no incontestable a priori reason to accept that the problem of AI control is unsolvable.
