Optimisation and ethics

What we refer to as “optimisation” is actually a mathematical decision-support tool. Optimisation is therefore central to algorithms that recommend solutions in a more or less automated fashion, for example when asking a GPS for the best route. While the question may seem simple, the answer certainly isn’t: it depends on what the person asking is looking for, as well as on a number of criteria that are more or less obvious.
“Looking for ‘the best route’ with a GPS entails choosing between the fastest, the shortest, or the least expensive one, and so on. The same is true when describing any problem mathematically: one must select the dimension(s) to measure and compare in order to settle on a preferred solution,” explains Odile Bellenguez, a professor at the Nantes-based IMT Atlantique and a member of the LS2N laboratory of digital sciences of Nantes (both in northwestern France)1. “Developing optimisation tools involves automating the recommendations that will later be produced in many different situations. It is therefore important to determine how these recommendations will be provided, to offer training and information about their limits, and to allow the people receiving them to question their relevance and meaning in their own context.”
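To make this concrete, here is a minimal sketch in Python, in which the road network, its attributes, and the candidate criteria are all invented for illustration: the very same shortest-path search returns a different “best” route depending on which dimension is chosen as the weight.

```python
import heapq

# Hypothetical road network. Every segment carries several measurable
# dimensions; which one counts as "best" is a modelling choice.
EDGES = {
    ("A", "B"): {"minutes": 10, "km": 12.0, "toll": 0.0},
    ("B", "D"): {"minutes": 15, "km": 9.0, "toll": 4.5},
    ("A", "C"): {"minutes": 25, "km": 8.0, "toll": 0.0},
    ("C", "D"): {"minutes": 20, "km": 7.5, "toll": 0.0},
}

def best_route(start, goal, criterion):
    """Dijkstra's shortest-path search; `criterion` picks the edge weight."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for (u, v), attrs in EDGES.items():
            if u == node and v not in visited:
                heapq.heappush(queue, (cost + attrs[criterion], v, path + [v]))
    return None

# One network, three different "best" routes:
print(best_route("A", "D", "minutes"))  # fastest:  (25.0, ['A', 'B', 'D'])
print(best_route("A", "D", "km"))       # shortest: (15.5, ['A', 'C', 'D'])
print(best_route("A", "D", "toll"))     # cheapest: (0.0, ['A', 'C', 'D'])
```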
The crucial issue of the question
Optimising depends on the context. The ethical dimension – asking oneself whether a particular solution is indeed the best for everyone – is therefore central. “Looking for the ‘best route’ entails choosing what we consider to be legitimate and fair, both for ourselves and everyone else, over the long term. For example, the algorithm can prevent us from driving near schools and hospitals, even though this would help save time, in the name of higher interests and ethical principles.”
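Encoded in software, such a principle becomes a hard constraint rather than a mere cost. Continuing the hypothetical sketch above, it amounts to removing the disallowed segments from the network before the search runs (the proximity flag is, again, invented):

```python
# Hypothetical flags marking segments that pass near a school or hospital.
NEAR_SCHOOL_OR_HOSPITAL = {("A", "B")}

# Deleting the flagged segments makes the rule absolute: the chosen route
# may become slower, but it can never violate the principle.
for edge in NEAR_SCHOOL_OR_HOSPITAL:
    EDGES.pop(edge, None)

print(best_route("A", "D", "minutes"))  # now (45.0, ['A', 'C', 'D'])
```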
Providing models, methods, and tools to improve decision-making has an impact that reaches beyond the specific question at hand: it can involve other stakeholders, or raise issues that went unnoticed before the tools were embedded in applications. This research therefore depends crucially on how the question is formulated. However, even when the decision-making challenges are clearly identified, surprises cannot be ruled out.
“A well-known example of optimisation with morally harmful consequences is that of a stapler company whose prices were much higher in disadvantaged neighbourhoods than in wealthier ones. This result came as a surprise to the firm, which used an algorithm to set its prices. As it turned out, the prices were correlated with the competing products present in the same stores: since this competition was non-existent in less affluent areas, the algorithm automatically applied the maximum price,” explains Alexis Tsoukias, a CNRS research professor at the LAMSADE laboratory for the analysis and modelling of decision-support systems2, which has long focused on decision support in the public sector. “This is a typical case of involuntary bias: the algorithm responds to parameters that did not account for unexpected and ethically questionable consequences.”
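The mechanism is easy to reproduce. In the deliberately simplified sketch below, where the price ceiling, the undercutting rule, and the store data are all invented, the algorithm only “sees” in-store competition, and therefore silently converts the absence of competitors in poorer areas into the maximum price.

```python
MAX_PRICE = 19.99  # hypothetical price ceiling set by the company

def set_price(competitor_prices):
    """Undercut the cheapest competing stapler in the same store;
    with no competitor in sight, charge the ceiling."""
    if competitor_prices:
        return round(min(competitor_prices) - 0.50, 2)
    return MAX_PRICE  # no competition, so the maximum price applies

# Invented store data: wealthier areas happen to stock competing brands.
stores = {
    "city-centre store": [12.99, 14.49],
    "suburban mall store": [13.99],
    "low-income district store": [],  # no competing products on the shelf
}

for store, rivals in stores.items():
    print(store, "->", set_price(rivals))
# city-centre store -> 12.49
# suburban mall store -> 13.49
# low-income district store -> 19.99  (highest price, poorest customers)
```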
Simultaneously “beneficial and harmful” effects
This leads to a paradox: some optimisation cases produce “effects that are beneficial and harmful at the same time. For example, optimising how work is organised can make sense for relieving teams of unnecessary, time-consuming tasks, but it can also wipe out the time that allowed professionals to breathe, coordinate, and share knowledge”, Bellenguez observes.
Other involuntary consequences emerge when optimisation is used outside its original context. For instance, transposing optimised routes for transporting merchandise onto individual journeys has led to congestion in certain cities.
“The notion of ethics also depends on an appreciation of situations and expectations, a subtlety that no algorithm can fully grasp. What relates to ethics is difficult to formalise and to translate into a decision-making model.”
What is more, the legal framing of ethics differs from one country to another: the notion of discrimination against minorities, for instance, can be assessed at the level of a community, as it is in the United States but not in France.
Optimising: a political question
“It is important to note that in most situations the problem is not purely technical, but also political,” Tsoukias asserts. “No tool is neutral. Imagine having to distribute new students according to their preferred courses of study and the number of places available in each of them. Favouring one selection criterion over another to determine the result produces a bias that can be described as political. This decision does not depend on the machine, but on the person programming it.”
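A toy version of that assignment problem shows how little the machine itself decides (the names, scores, and capacity below are invented): the procedure is identical in each run, and only the ranking criterion, a human choice, determines who gets a place.

```python
# Invented applicants: (name, grade average, distance to campus in km)
applicants = [
    ("Alice", 15.2, 40),
    ("Brahim", 14.8, 3),
    ("Chloe", 12.1, 5),
    ("David", 16.0, 55),
]
CAPACITY = 2  # places available in the requested course

def admit(key, reverse):
    """Greedy assignment: rank all applicants, then fill seats in order."""
    ranked = sorted(applicants, key=key, reverse=reverse)
    return [name for name, _, _ in ranked[:CAPACITY]]

# The same machinery, two different "political" choices of criterion:
print(admit(key=lambda a: a[1], reverse=True))   # by grades: David, Alice
print(admit(key=lambda a: a[2], reverse=False))  # by proximity: Brahim, Chloe
```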

“The algorithmic contribution is subject to what we collectively and politically choose to do with it,” Bellenguez confirms. “For example, Parcoursup, the French national platform for admission to higher education, was configured to maintain a certain proportion of scholarship students in study programmes, which is often well received, but it also relies on other aspects that are not unanimously accepted, such as academic field or location. The results are criticised not just for their algorithmic component, but also for what drives the algorithm. Yet this does not depend on the tool’s mathematical properties, but on what was fed into it.”
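Such a quota can be pictured as one extra constraint bolted onto the same kind of assignment code. In this hypothetical sketch (Parcoursup’s actual rules are far more elaborate, and every figure below is invented), a share of the seats is reserved before the remaining places are filled by rank:

```python
# Invented applicants: (name, score, holds a scholarship)
applicants = [
    ("Emma", 16.5, False),
    ("Farid", 15.9, False),
    ("Ines", 14.0, False),
    ("Gina", 12.2, True),
]
CAPACITY = 3
MIN_SCHOLARSHIP_RATE = 0.34  # hypothetical quota set by policy, not by maths

def admit_with_quota():
    reserved = round(CAPACITY * MIN_SCHOLARSHIP_RATE)  # seats set aside
    ranked = sorted(applicants, key=lambda a: a[1], reverse=True)
    scholars = [a for a in ranked if a[2]][:reserved]  # fill the quota first
    others = [a for a in ranked if a not in scholars]
    admitted = scholars + others[:CAPACITY - len(scholars)]
    return [name for name, _, _ in admitted]

# Gina displaces the higher-scoring Ines: a deliberate, political choice.
print(admit_with_quota())  # ['Gina', 'Emma', 'Farid']
```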
Ethical optimisation has a cost
There are also particular cases and exceptional circumstances in which a tool designed to automate recommendations cannot solve the problem, and abuses and errors can occur in these borderline situations. For example, an administrative service optimised to process the records of permanent employees quickly, in order to save time and money, will prove problematic for people in precarious situations. The side effects of such modelling choices cannot be kept in check unless the decision is made to devote time and money to exceptional cases, which may reduce overall optimisation but will prove more ethical.
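A back-of-the-envelope model makes the trade-off explicit (every figure is invented): the fully automated path is the cheapest on average, but only the slower human review handles the precarious cases correctly.

```python
# Invented cost model for an administrative service.
AUTO_COST = 1.0    # euros to process one file automatically
HUMAN_COST = 12.0  # euros for a caseworker to review one file by hand
files = ["standard"] * 950 + ["precarious"] * 50

def total_cost(route_exceptions_to_humans):
    cost, mishandled = 0.0, 0
    for kind in files:
        if kind == "precarious" and route_exceptions_to_humans:
            cost += HUMAN_COST  # slower and dearer, but handled correctly
        else:
            cost += AUTO_COST
            if kind == "precarious":
                mishandled += 1  # the optimised path fails this case
    return cost, mishandled

print(total_cost(False))  # (1000.0, 50): cheapest, 50 files mishandled
print(total_cost(True))   # (1550.0, 0): 55% dearer, but more ethical
```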

Optimising a process for one particular purpose sometimes comes at the expense of others, such as nature. Society must arbitrate between major challenges relating to the environment, energy sufficiency, and social justice, which are sometimes complex to model. For example, the price of air transportation has often been optimised with a view to minimising costs for users, but this has had a rebound effect, with travellers increasingly favouring this mode of transport over more environmentally friendly alternatives. The algorithm was not designed to ask itself this question.
Indeed, the decision-making process is not just about identifying a problem but also devising an answer. We need to know what we want, and even more importantly, what we do not want. ♦
Further reading (in French):
Intelligent networks for better distribution of renewable energy
Algorithms: artificial injustice?
Jean-Bernard Lasserre in search of the optimum
Ethics and markets: compatible or irreconcilable?
Author
Lydia Ben Ytzhak is an independent scientific journalist. Among other assignments, she produces documentaries, scientific columns, and interviews for France Culture, a French radio station.