
Is a Robot just another "Animal"?


05.14.2019, by Jean-François Bonnefon
Artificial intelligence algorithms are often seen as "black boxes" whose rules remain inaccessible. We must therefore create a new scientific discipline to understand the behaviour of the machines that rely on them, just as we did for the study of animal behaviour. This is the perspective of Jean-François Bonnefon, who, along with 22 other scientists, has just signed an editorial in the journal Nature.

Image taken from the animated film WALL-E (Andrew Stanton, 2008).

Our social, cultural, economic, and political interactions are increasingly mediated by a new type of actor: machines equipped with artificial intelligence. These machines filter the information that we receive, guide us in the search for a partner, and converse with our children. They trade stocks on financial markets and make recommendations to judges and police officers. They will soon drive our cars and wage our wars. If we want to keep these machines under control, and draw the greatest benefit from them while minimizing potential harm, we must understand their behaviour.

Understanding the behaviour of intelligent machines is a broader objective than understanding how they are programmed. Sometimes a machine's programming is not accessible, for example when its code is a trade secret. In that case, it is important to understand the machine from the outside, by observing its actions and measuring their consequences. At other times, it is not possible to fully predict a machine's behaviour from its code, because this behaviour changes in complex ways as the machine adapts to its environment through a learning process that is guided but ultimately opaque. In that case, we need to observe this behaviour continually and simulate its potential evolution. Finally, even when we can predict a machine's behaviour from its code, it is difficult to predict how the machine's actions will affect the behaviour of humans (who are not programmable), and how human actions will in turn change the machine's behaviour. In that case, it is important to conduct experiments in order to anticipate the cultural coevolution of humans and machines.

A new science for observing machines

A new scientific discipline dedicated to machine behaviour is needed to meet these challenges, just as we created the scientific discipline of animal behaviour. We cannot understand animal behaviour solely on the basis of genetics, organic chemistry, and brain anatomy; observational and experimental methods are also necessary, such as studying the animal in its environment or in the laboratory.
 
Similarly, we cannot understand the behaviour of intelligent machines solely on the basis of computer science or robotics; we also need behaviour specialists trained in the experimental methods of psychology, economics, political science, and anthropology.

A scientific discipline is never created from scratch. The behaviour of animals had been studied by many scientists well before the study of animal behaviour was formally established as a structured and independent discipline. Likewise, many scientists will recognize themselves in the discipline of machine behaviour once it is structured and identified. What matters most, however, is that they be able to recognize one another far more easily than they can today.

By bringing together what is currently dispersed, we will enable researchers in machine behaviour to identify one another and cooperate across disciplinary boundaries. We will also make it easier for public authorities and regulatory agencies to rely on a scientific corpus that is scattered and difficult to access today, and for citizens to more clearly position themselves in a world disrupted by the emergence of intelligent machines.

We cannot perfectly predict the behaviour of robots that are continuously learning from their interactions with their environment. Jean-François Bonnefon believes we must create a science of machine behaviour in order to observe them experimentally.

That is the objective behind an appeal to researchers, public decision-makers, and manufacturers of intelligent machines that I recently published in the journal Nature with 22 European and American co-authors: computer scientists, sociologists, biologists, economists, engineers, political scientists, anthropologists, and psychologists, working as researchers in public research organizations and universities, or for artificial intelligence giants such as Microsoft, Facebook, and Google. We examine the broad questions that ground the field of machine behaviour, inspired by the questions that grounded the field of animal behaviour.

How is behaviour shaped, and how does it evolve?

One major question involves the social and economic incentives that shaped the behaviour initially expected from a machine. For example, what metric did an information filtering algorithm on social media initially attempt to maximize, and what are the unexpected psychosocial effects of this initial objective?

Other major questions include the following: what mechanisms were used to acquire and modify behaviour? For instance, on what type of data was a predictive policing algorithm initially trained and tested? If those data were biased against a particular social group, is the algorithm capable of amplifying this bias through its own decisions, thereby becoming part of a spiral of injustice?
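To make such a spiral concrete, here is a hypothetical toy simulation (written in Python for illustration; it is not taken from the Nature editorial). Two districts have identical true crime rates, but the historical records over-represent one of them. An algorithm that sends patrols wherever the most arrests were previously recorded then widens that gap year after year, because crime can only be recorded where patrols are actually sent.

# Hypothetical toy model of a bias-amplifying feedback loop (illustration only).
import random

random.seed(42)

TRUE_CRIME_RATE = 0.10                             # identical in both districts
arrests = {"district_A": 60, "district_B": 40}     # biased historical records

def allocate_patrols(records):
    """Send most patrols to the district with the most recorded arrests."""
    top = max(records, key=records.get)
    return {district: (90 if district == top else 10) for district in records}

for year in range(10):
    for district, patrols in allocate_patrols(arrests).items():
        # Crime is only observed and recorded where patrols are sent.
        arrests[district] += sum(random.random() < TRUE_CRIME_RATE
                                 for _ in range(patrols))

print(arrests)
# The initial 60-vs-40 gap grows to roughly 150-vs-50: the algorithm's own
# output decides where new data are collected, so the bias feeds on itself.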
 
Identifying the environments in which a behaviour can be maintained or spread, and those in which it is destined to disappear, is also one of the broad questions we explore. For example, could an open archive of algorithms for autonomous cars allow the programming of one car model to spread rapidly to all other models, before regulators can detect any problem?

All of these explorations must be carried out at three levels: the isolated machine, machines interacting with other machines, and the hybrid collectives formed by humans and machines. They are all essential, yet today they are studied in dispersed fashion by communities that struggle to recognize one another. Bringing these communities together under the umbrella of the new science of machine behaviour will be a decisive step towards meeting the challenges of a world pervaded by artificial intelligence.

The analysis, views and opinions expressed in this section are those of the authors and do not necessarily reflect the position or policies of the CNRS.
