Does AI care about us?

10.28.2025, by Lydia Ben Ytzhak
Reading time: 7 minutes
In The Residence, Clarissa, a novelist in search of inspiration, takes part in an artists’ residency programme that includes the help of a virtual assistant called Dalloway. But the AI system’s behaviour becomes more and more intrusive…
In the French-Belgian film "The Residence", recently released in a number of European countries and Brazil, a novelist interacts with an artificial intelligence system. The robotics researcher Catherine Pélachaud explains how a seemingly personal but actually one-way relationship can arise between humans and AI.

The film The Residence explores a personal and ambiguous relationship between humans and artificial intelligence (AI). How can the latter be endowed with a personality?

Catherine Pélachaud1: An artificial personality is defined primarily through the behaviour patterns assigned to the machine that enable it to be perceived as sympathetic or attentive, for example. This goes beyond mere verbal language – non-verbal signals like smiles, gestures and facial expressions can be integrated, giving the interaction greater presence and fluidity. In some cases this simulation is taken to the extent of including imperfections like hesitations and minor errors, to enhance the impression of authenticity.

The goal is to make the AI system’s behaviour and exchanges convey human capacities for empathy and understanding, even though the machine doesn’t actually ‘feel’ anything – it is simply analysing and calculating its response based on algorithms and training data.

Can this humanisation pose risks for the users, especially in terms of emotional attachment?

C. P.: Absolutely. Several studies have reported that people speak more freely to AI systems, because they don’t feel constrained by judgments or social restrictions. This phenomenon reflects a kind of artificial empathy: the machine’s adapted response creates a bond, a sense of psychological comfort, but also an ambiguity.

If it is not handled properly, this emotional closeness can lead to dependency, a confusion between an algorithmic artefact and a real living partner. The Residence highlights this aspect when the AI system adopts the voice of a missing child to appeal to the protagonist (played by Cécile de France). This type of emotional intrusion reveals the power and the danger of a digital entity that knows how to take advantage of human vulnerability.

What about the consequences of integrating this type of AI and neuromarketing in business strategies?

C. P.: Neuromarketing is the direct application of neuroscience combined with artificial intelligence to perceive and exploit consumers’ emotional reactions. Modern-day technologies make it possible to analyse various indicators in real time: facial expressions, tone of voice, heart rate, and even brain activity. This opens the door to adaptive, hyper-personalised advertising that modifies the message according to the detected emotional state of the viewer.

What seemed like science fiction just a few years ago is becoming reality. We now have tools that can bypass the consumer’s full awareness of the purchase decision. There is a great risk of undermining the cognitive sovereignty of the individual, since these techniques can exert an influence with no transparency or clear consent, for example during a job interview.

More and more people are developing emotional relationships, up to and including symbolic marriages, with artificial intelligence systems. What does this phenomenon indicate?

C. P.: This is no longer just anecdotal. In some countries, especially in Asia, people are forging very strong bonds with virtual entities, to the point of considering formal unions. This type of AI is often highly personalised: users define the entity’s appearance and ‘personality’ as desired and the system memorises their preferences, adapting its speech and actions. Since the AI is constantly available, this creates a quasi-permanent interactive presence.

Tokyo (Japan), 2018: 35-year-old office worker Akihiko Kondo is shown here gazing at Hatsune Miku, with whom he is symbolically married. Developed as an emblem of the vocal synthesis software programme Vocaloid, Hatsune Miku was promoted like a real human singer, even performing hologram concerts.

This quest reflects a desire for non-judgmental affection and a stable, secure relationship, which is often lacking in real human interactions. However, the relationship is lopsided in this case. The machine doesn’t feel anything – it reacts according to pre-programmed scenarios. It’s important to emphasise that these relationships reflect above all a deeply felt human need, but they can result in isolation or withdrawal into an artificial emotional cocoon.

Could artificial intelligence systems eventually develop a form of consciousness or emotional autonomy?

C. P.: No. At this stage, these are still sophisticated simulations. AI can simulate behavioural continuity, memorise habits and adapt its responses to the context, but it does not have genuine consciousness, a subjectivity of its own. The fundamental distinction is that AI has no feelings or intentions. It carries out probabilistic calculations based on huge databases. Conscious machines are more the stuff of science fiction than current technological reality.

In fact, the inclusion in the conversational agents’ algorithms of elements for observing etiquette, implying past experiences or indicating preferences, or formulations suggesting that the machine has a life of its own – a ‘consciousness’ (even though the concept is not strictly defined, even for humans) – gives rise to sometimes absurd legal debates about ‘the right of an AI system not to respond’ or its supposed suffering. But when AI performs an equivalent task to find out how a protein unfolds, no one worries about its working conditions and purported drudgery! This is a very human phenomenon of mirroring empathy that creates the illusion of a ‘person’.

What about biases in artificial intelligence, especially regarding social and cultural diversity?

C. P.: Bias is an unavoidable reality. Many AI systems have been trained on initially homogeneous datasets (often comprising interactions between adults in the Western world), which penalises their ability to recognise and adapt their responses to minority groups of different ages and cultures. Efforts have been made to diversify these databases, but the modelling of cultural, linguistic and social complexity is an ongoing challenge. These limitations can lead to troublesome misinterpretations or even discrimination if the systems are deployed without close human supervision.

To what extent can AI become a manipulative agent, for example in everyday life or sales situations?

C. P.: This is a very important question. Voice assistants or recommendation systems already use behavioural and emotional data to anticipate needs, sell products or influence decisions, in some cases without the user’s knowledge.

Germany, 2021: A woman sitting at her kitchen table calls a friend using Alexa. She received the voice assistant system as part of a project to combat loneliness among the elderly.

This permanent intrusion can transform an AI system into a behavioural influencer or vector of manipulation. That’s why it is essential to impose strict regulatory frameworks and ensure full transparency so that users retain control over their decisions without being instrumentalised. Some voice applications, like Alexa2, now speak up unprompted to propose products.

What are your recommendations for piloting the evolution of AI and avoiding its abuse?

C. P.: The priority is to boost ethical research and the oversight of AI applications before they come into widespread use. Their psychological and social impact must be scientifically measured, especially for vulnerable populations like children or isolated people. The protection of privacy, the control of personal data and technology education are all essential leverage points.

Lastly, we must establish an ongoing dialogue among scientists, industry, legislators and citizens, working together to define the acceptable limits and preserve the wealth and complexity of human relations in a world increasingly reliant on artificial intelligence.

How can culture and cinema, through films like The Residence, contribute to this process?

C. P.: Cinema plays a dual role. It feeds our fears and fantasies about artificial intelligence, but also acts as a laboratory of ideas, a space for exploring the possible consequences of human-machine coexistence. The Residence, like Her3 and other films, delves into our feelings, our limits, our desires in relation to virtual entities. These narratives help raise awareness and encourage critical thinking about what we are prepared to accept or want to reject in this technological revolution.

Beyond its entertainment value, The Residence points out the complexity of emerging relationships with personal AI systems. It questions our fascination and calls for vigilance, possibly based on technological tools while maintaining the ethical perspective needed to ensure a more human-centered future.

For viewing

The Residence, directed by Yann Gozlan, released in September 2025.

For further reading

AI needs to align with human values
The enduring mystery of consciousness

Dying with the times
Downloading the human mind


Footnotes
  • 1. CNRS research professor at the Institute of Intelligent Systems and Robotics (ISIR – CNRS / Sorbonne Université).
  • 2. Amazon’s voice assistant.
  • 3. A film directed by Spike Jonze (winner of the Oscar for best screenplay in 2014) in which a man, a public writer working on the Internet, develops an “intimate” relationship with an artificial intelligence system.

Author

Lydia Ben Ytzhak

Lydia Ben Ytzhak is an independent scientific journalist. Among other assignments, she produces documentaries, scientific columns, and interviews for France Culture, a French radio station.