Should we let artificial intelligence optimize society?

The great innovation of the 21st century has been the advent of big data and deep machine learning, the technologies behind modern artificial intelligence (AI). Could anyone have imagined in 1990 that a computer would defeat any human at chess or Go? The book The Emperor’s New Mind, published in 1989 by mathematician and Nobel laureate Roger Penrose, argued vigorously against this possibility. Today, anyone can translate between languages from their phone, on the go.

This revolution has two components. First, the availability of huge amounts of data on all aspects of our society: our location, the frequency of our banking transactions, our preferences when we watch movies … Second, the ability of artificial intelligence to make sense of this data.

To understand the power of AI, consider first what a model gives us: if a model of a phenomenon is accurate, we can use it to make predictions. Take Einstein’s theory of general relativity. To build a working GPS network, engineers must apply this theory: without the relativistic clock corrections it predicts, satellite timing would drift and position errors would quickly accumulate.

On the other hand, consider phenomena for which we have no effective model, such as social interactions and language. Until the 21st century it was believed that without a model nothing could be predicted. Neural networks overturn this assumption by finding and exploiting patterns and correlations in data. We have no clear theory of what language is, yet we have near-perfect algorithms for translating between languages. We have no theory of people’s tastes in movies, yet streaming services recommend films we are likely to enjoy.
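The idea of prediction without a theory can be made concrete. The following is a minimal sketch, not any real streaming service’s method: all names, ratings, and the crude dot-product similarity are invented for illustration. It recommends a movie purely from correlations between users’ ratings, with no model of taste at all.

```python
# Minimal sketch: recommending movies from correlations alone,
# with no theory of taste. The ratings matrix is invented.

ratings = {
    "ana":   {"drama_1": 5, "drama_2": 4, "action_1": 1},
    "bob":   {"drama_1": 4, "drama_2": 5, "action_1": 2, "comedy_1": 5},
    "carla": {"action_1": 5, "action_2": 4, "drama_1": 1},
}

def similarity(u, v):
    # Dot product over commonly rated movies: a crude correlation.
    common = set(ratings[u]) & set(ratings[v])
    return sum(ratings[u][m] * ratings[v][m] for m in common)

def recommend(user):
    # Suggest the unseen movie rated highest by the most similar user.
    others = [v for v in ratings if v != user]
    nearest = max(others, key=lambda v: similarity(user, v))
    unseen = {m: r for m, r in ratings[nearest].items()
              if m not in ratings[user]}
    return max(unseen, key=unseen.get)

print(recommend("ana"))  # prints "comedy_1"
```

Ana’s ratings correlate most strongly with Bob’s, so she is recommended the movie Bob liked that she has not seen. Nothing in the code knows what a drama or a comedy is; the pattern in the data does all the work.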

Given AI’s ability to manage and make sense of any amount of data, why don’t we let AI optimize society? Does that sound like heresy? Consider a concrete case: governments’ management of the COVID-19 pandemic worldwide has been far from optimal.

While almost everyone recognizes the paramount importance of scientific data, and some governments have tried to use it efficiently, others have simply been unable to digest what the data said. Even Israel, the country most successful at vaccinating its citizens, suffered serious mismanagement of the pandemic. Other countries ran erratic or populist vaccination campaigns; none managed to optimize its strategy. We are playing with human lives. Could AI have managed it better?

AI learns patterns and exploits correlations found in massive amounts of data by minimizing some quantities and maximizing others; in technical language, by optimizing a cost function. In the case of a bank, for example, AI can compute how to maximize return on investment.
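What “optimizing a cost function” means can be shown in a few lines. This is a toy sketch, not a real banking model: the profit formula and all numbers are assumptions made for illustration. A bank chooses an interest rate; higher rates earn more per loan but drive borrowers away, and gradient descent finds the balance by minimizing the negative of profit.

```python
# Minimal sketch: optimizing a cost function by gradient descent.
# Hypothetical example: a bank picks an interest rate r to maximize
# profit. The profit model below is invented for illustration.

def cost(r):
    # Toy model: revenue grows with r, loan demand falls as r rises.
    demand = max(0.0, 1.0 - 2.0 * r)   # fraction of customers who borrow
    profit = r * demand
    return -profit                      # minimizing cost = maximizing profit

def gradient(f, x, h=1e-6):
    # Numerical derivative, useful when no closed form is available.
    return (f(x + h) - f(x - h)) / (2 * h)

r = 0.10                                # initial interest rate (10%)
for _ in range(1000):
    r -= 0.01 * gradient(cost, r)       # one gradient-descent step

print(round(r, 3))                      # prints 0.25, the optimal rate
```

Real systems optimize cost functions with millions of parameters rather than one, but the principle is the same: define what counts as “good,” then let the algorithm climb toward it. Everything hinges on that definition.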

No country has tried to use AI to decide who should get vaccinated and when. Of course, using an algorithm does not by itself imply better decision-making: a poorly designed algorithm will give poor results. The vaccine-allocation algorithm tested at Stanford Medical Center (USA) left out the doctors facing the pandemic on the front line while placing some staff who worked from home ahead of them.
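How such a failure can arise is easy to illustrate. The sketch below is entirely hypothetical, not the Stanford algorithm: the staff records, fields, and weights are invented. It shows how a scoring rule that rewards age and seniority but ignores exposure ranks a work-from-home administrator ahead of a COVID-ward resident, and how weighting patient contact reverses the result.

```python
# Hypothetical sketch of a vaccine-prioritization score, showing how
# a poorly chosen cost function can exclude front-line workers.
# All fields and weights are invented for illustration.

def naive_score(person):
    # Flawed: rewards age and seniority, ignores actual exposure.
    return 2.0 * person["age"] + 5.0 * person["years_at_hospital"]

def exposure_aware_score(person):
    # Better: weight direct patient contact heavily.
    return naive_score(person) + 500.0 * person["covid_patient_contact"]

staff = [
    {"name": "resident on COVID ward", "age": 29,
     "years_at_hospital": 2, "covid_patient_contact": 1},
    {"name": "senior admin working from home", "age": 61,
     "years_at_hospital": 30, "covid_patient_contact": 0},
]

for score in (naive_score, exposure_aware_score):
    first = max(staff, key=score)
    print(score.__name__, "->", first["name"])
```

The algorithm is not “wrong” in either case; it faithfully optimizes whatever score it is given. The choice of what goes into the score is where the human judgment, and the human error, lives.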

Can algorithms overcome our biases?

The next step would be to argue that AI should optimize not only society’s responses to pandemics but other aspects of our lives as well. Why not? After all, in most cases politicians are also trying to optimize: how many roads to build? How much to spend on the national health or pension system? How to make the economy greener?

It could be argued that these are political decisions in which quantitative data and concrete facts play no role. In reality, these decisions are also made by maximizing a cost function, one inevitably skewed by human factors and interests, such as the enrichment of politicians and their constituents. A classic example is pork-barrel politics: the public money that members of the United States Congress have at their disposal to finance projects of local interest, frequently used to win votes.

Ideally, in a democracy (even a robotized one), the cost function should optimize the welfare of society and all its members. One advantage of algorithms is that they can remove human biases from the decision-making process. So why not give algorithms the task of managing and digesting the data and proposing optimal solutions from an objective point of view?

With one important reservation: biases can be embedded in the algorithm itself. Hiding behind AI does not guarantee ethical and egalitarian decision-making; ethics has to be built into the algorithm.

Could the whole process of optimizing society be carried out, far more efficiently, by an algorithm? Is it possible to build an ethical AI, and how? Who controls the algorithms and the cost function? In what cases and under what conditions could humans leave decision-making entirely to algorithms? And when should humans intervene?

These are key questions facing contemporary society, and we should answer them as soon as possible, while we still control the robots.
