Let’s Talk about Robotics and AI

10.02.2019, by Jean-Paul Laumond
Can robots replace us today or in the future? For what tasks? Will artificial intelligence surpass human intelligence? Below are some answers provided by the roboticist Jean-Paul Laumond.

We have become accustomed to using software programs that don’t work. They’re installed on our mobile phones and provide us with services, but depending on the context they also amuse or irritate us with their limitations. Face-recognition software can find every photo of aunt Adele in our archives; it’s not such a big deal if a picture of an ostrich also slips in among them. Most of the time the Siri virtual assistant can correctly send a text containing a simple, orally dictated message, yet we’re amused by its limited capacities when we engage it in a surrealistic conversation. And when driving, who hasn’t experienced a pointless detour, apart from those who put blind faith in their itinerary-planning software, no questions asked?

While humans can make do with software programs that don’t work very well, a humanoid robot doesn’t have that freedom, for the least error will surely cause it to fall.

Information to be confirmed

These software programs are dedicated to specific applications. Each one uses advances in artificial vision, speech and natural language processing, or communication networks that provide real-time road traffic conditions. They are all based on increasingly powerful algorithms that take advantage of processors’ enhanced ability to store information and compute at high speed. They all transform data into information, whether it’s pixels into aunt Adele, variations in sound frequency into text messages, or traffic jams into itineraries; this level of performance should be acknowledged.

Siri can definitely send a text message that we dictate, but being able to converse with us is another matter...

We are aware that these software programs are not entirely reliable, and we make do because in the end they work fairly well and their errors have no consequences. However, when specialists use these same software programs, the programs serve as decision-support systems. Whether in medical diagnostics or jurisprudence, their ability to process large amounts of data is an invaluable help to the doctor or the judge. What remains is for the information produced by the software program to be confirmed by the specialist. Confusing an ostrich with aunt Adele is not as serious as confusing microcalcifications with nascent tumours in mammograms. It’s ultimately up to the doctor to establish the diagnosis and assume responsibility for it.
 
We actually make do with the limitations of these software-based technologies. We’re not looking for software programs that work without fail; they just have to be right most of the time. Information, whether correct or incorrect, is not dangerous in and of itself, but can become so as a result of the interpretation and use we make of it. In the examples used up to now, this “we” refers to humans: the doctor is the only one responsible for his or her diagnosis.

But what if this “we” referred to a physical machine? The least error committed by a software program can cause a humanoid robot to fall. What matters is not simply using a software program that works fairly well most of the time, but taking the physical laws of balance into account as precisely as possible. Algorithms, software programs, and the information they generate must be confirmed. This is the great challenge of robotics: taking the physical world into account under this imperative.
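
To make the idea of “taking the physical laws of balance into account” concrete, a classic and deliberately simplified criterion can be stated: in a static posture, a humanoid stays upright only if the vertical projection of its centre of mass falls inside the support polygon, the convex hull of its contact points with the ground. The following sketch is not taken from the article and uses hypothetical names and values; it merely illustrates that test in Python.

```python
# Minimal, illustrative sketch (not from the article) of the kind of balance
# test a humanoid controller must satisfy at all times: the ground projection
# of the centre of mass (CoM) must stay inside the support polygon, i.e. the
# convex hull of the foot contact points. All names and numbers are hypothetical.

from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) coordinates on the ground plane


def convex_hull(points: List[Point]) -> List[Point]:
    """Andrew's monotone-chain convex hull, returned in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o: Point, a: Point, b: Point) -> float:
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]


def com_is_supported(com_xy: Point, contact_points: List[Point]) -> bool:
    """True if the ground projection of the CoM lies inside the support polygon."""
    hull = convex_hull(contact_points)
    n = len(hull)
    if n < 3:
        return False  # a point or a line cannot statically support the robot
    for i in range(n):
        ax, ay = hull[i]
        bx, by = hull[(i + 1) % n]
        # For a counter-clockwise hull, the CoM must lie on or to the left of every edge.
        if (bx - ax) * (com_xy[1] - ay) - (by - ay) * (com_xy[0] - ax) < 0:
            return False
    return True


if __name__ == "__main__":
    # Hypothetical foot corners of a humanoid standing on both feet.
    feet = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.25), (0.1, 0.25),
            (0.3, 0.0), (0.4, 0.0), (0.3, 0.25), (0.4, 0.25)]
    print(com_is_supported((0.2, 0.12), feet))   # True: CoM between the feet
    print(com_is_supported((0.6, 0.12), feet))   # False: the robot would tip over
```

Real controllers for robots such as Atlas or HRP2 rely on far richer dynamic criteria, but the principle is the same: at every instant, the software’s output is constrained by the physics of balance.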

Clear and precise norms

In order to meet safety requirements, bringing a robot to market must comply with strict norms defined by machinery directives that govern its conditions of use. These norms are imposed right from the design stage and define the environment in which the machine will operate. For example, the industrial robots introduced in the automobile industry in the 1960s were confined to spaces that humans could not enter. Humanoid robots like Nao or Pepper are not allowed to gather items in a supermarket aisle. These robots are simply new communication machines that enhance Siri with their expressive movements; they do not interact with the physical world beyond maintaining their balance. They will surely be able to do so one day, but in a future that is much more distant than the media or the prophets of new technologies suggest.

The humanoid robot Pepper is no more than a communication machine enhanced with expressive movements. It doesn’t interact with the physical world any more than Siri does.

Progress in robotics is slower than it may seem. Two essential factors explain this gap between technological reality and the fears and hopes it sparks. First, roboticists must meet the constraints required for certification (how do you prove that the machine does what we expect of it, and nothing more?). It’s a difficult task, based on a methodology that combines mathematical modeling and software engineering.

Second, there is the difficulty of explaining all this to a public that wants to know what is really happening. Can robots replace us, today or in the future? If so, for what tasks? Will artificial intelligence surpass human intelligence? These are legitimate questions that call for dialogue.

Understanding the nuances of meaning

Roboticists need representations and words to explain their discoveries and popularize them, to discuss and debate them. Like many other scientific domains, robotics borrows its words from other fields, especially those of cognition and human intelligence. While action verbs are unambiguous (the robot takes the object, it walks, it paints, it welds, etc.), verbs describing aptitude (to be autonomous, to decide, etc.) carry the risk of multiple meanings, as demonstrated by this real-life exchange between a roboticist and a person attending a popular-science conference on humanoid robotics.

“Thank you for your attention. I can now answer any questions you may have.”
“The videos you played showing Atlas doing a backflip, or HRP2 slipping through a gap in a wall, are impressive, and show genuine technological achievement. But what’ll happen when these machines are equipped with artificial intelligence?”
“(Surprised) Hmm, I didn’t use the expression ‘artificial intelligence’ in my presentation, but artificial intelligence is essentially what I was talking about. Robots that are capable of such feats do indeed have a form of intelligence—bodily intelligence—don’t you think? What exactly do you mean by artificial intelligence?”
“I don’t know. Maybe a machine that’s autonomous, that makes decisions.”
“Atlas and HRP2 can maintain their balance in extreme conditions thanks to software programs whose principles I explained. Can we consider them to be autonomous? When HRP2 picks a ball up off the ground or a table, can we say that it’s deciding on the movement it should make? Wouldn’t you agree?”
“Yes, I see.”
“So, taking up your own words (autonomy, decision), we could conclude that these robots are truly intelligent. And I totally agree with you!”
“I see, but are they conscious?”
“Hmm, I suggest we continue this discussion at the buffet.”
 
Grasping the meaning of this dialogue represents an area of research in its own right. The recently published book Wording Robotics gathers, for the first time, the contributions made by roboticists, linguists, anthropologists, philosophers, and neurophysiologists regarding these questions. This interdisciplinary research is a precondition to any ethical consideration of robotics development.

The points of view, opinions, and analyses published in this column are solely those of the author. They do not represent any position whatsoever taken by the CNRS.

References
> J.P. Laumond, E. Danblon, and C. Pieters (eds.), Wording Robotics, Springer Tracts in Advanced Robotics 130, Springer, 2019.
> J.P. Laumond and D. Vidal, Robots, Éditions de la Cité des sciences et de l’industrie, 2019.
> J.P. Laumond, Poincaré et la robotique : les géométries de l’imaginaire, Éditions Le Bord de l’eau, 2018.

 
