Benjamin Jan.

When algorithms take center stage in the media, attention usually falls on the consequences of their use by the private sector, such as the companies that run social networks. Yet among the algorithms affecting our lives, the spread of decision-support algorithms in the public sector too often goes unnoticed: the fight against tax fraud, the allocation of social benefits, predictive policing, university admissions, medical diagnosis. Wherever databases exist, these systems can in theory be used. Admittedly, public administrations are less glamorous than the giants of Silicon Valley, but the political and legal consequences for citizens are no smaller, quite the contrary.

Intelligent decision support systems

Among algorithms, a distinction is made between "simple" algorithms and self-learning algorithms (also called "machine learning"). The role of self-learning algorithms is to produce, from the past decisions they have been trained on, a statistical model applicable to new situations. The result they provide is a prediction based on statistical calculations, not to be confused with a certainty.
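To make this concrete, here is a minimal sketch of such a system, assuming the Python library scikit-learn; the case data, feature names and figures are invented purely for illustration.

# A minimal sketch, assuming scikit-learn; the dataset below is hypothetical.
from sklearn.linear_model import LogisticRegression

# Past cases an official already decided: [income in thousands of euros,
# years on file], with the decision taken at the time
# (1 = benefit granted, 0 = refused).
past_cases = [[18, 2], [52, 10], [21, 1], [60, 8]]
past_decisions = [1, 0, 1, 0]

# "Training" produces a statistical model of those earlier decisions.
model = LogisticRegression().fit(past_cases, past_decisions)

# Applied to a new case, the model returns a probability, not a certainty.
new_case = [[25, 3]]
print(model.predict_proba(new_case)[0][1])  # e.g. ~0.9: a prediction to grant

The output is precisely what the text describes: a statistical score derived from earlier decisions, which a human or a system may then treat as grounds for a new one.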

These algorithms are enjoying growing interest and offer as many opportunities as challenges when used in decision-making. If you have recently applied for a loan from a bank or sent your résumé to a large company, chances are you have been subjected to a self-learning algorithm that assessed your creditworthiness or your fit with the company's needs.

The public sector is not being left behind in the deployment of these algorithms. And nothing seems able to slow this growth… On the contrary, the ongoing digitization of administrations, and consequently the increase in administrative data, combined with technological advances in machine learning, encourages the use of such systems. Add to this the blessing of the European Commission, which considers it "essential" that the public sector adopt artificial intelligence (1), and you have fertile ground for the deployment of algorithms.

Promises of efficiency

Why such interest? Although the answer varies from one administrative area to another, in general the use of self-learning algorithms promises to let public servants derive value from large databases. It would make it possible to maximize the efficiency of public services while reducing the costs of tasks incumbent on the administration. In this vein, for example, the French tax authorities can boast of a 130% increase in recoveries linked to tax fraud in the space of one year (2). Better yet, some argue that the use of algorithms can reduce public officials' errors in decision-making processes. Humans are fallible, after all, and not every civil servant knows the latest regulatory changes inside and out.

Despite these promises, doubts persist about the predictions provided by self-learning algorithms. What role do they play in the digital transformation of our administrations? Knowing that the effectiveness of machine learning grows with the amount of data it is fed, are we not whetting administrations' appetite for our data? This concern is shared by the French data protection authority, which has already underlined the change of scale in the administration's use of personal data (3). Moreover, what happens to the past mistakes that the algorithm has learned and now reproduces when assisting decision-making? Who ensures that the private-sector designers of these algorithms take our rights into account while avoiding inserting their own cognitive biases? All these questions converge on a major issue: the preservation of our rights and freedoms.

Inadequate safeguards

That the administration might evade its obligation to provide fair and equitable treatment, and that the principle of non-discrimination among citizens might be violated, are the main concerns raised by the use of machine learning.

Among the legal instruments protecting the use of our personal data, those contained in the General Data Protection Regulation (GDPR) come first. The cardinal principle of transparency arising from this European legislation is meant to let us control our data and thus exercise our rights. Yet this transparency seems shaken by self-learning algorithms when they are used to assist decision-making.

First, one of the key safeguards of the GDPR is the right not to be subject to a decision taken exclusively by a machine. The exceptions to this right are framed by additional guarantees, but these do not apply when an official acts actively and has the last word. The role humans actually play in the decision is, however, far from obvious. Scientists have pointed out the cognitive biases people exhibit when a task is technologically automated: the aura of infallibility surrounding technology breeds complacency towards it (4). Decision-making by the administration can therefore, in certain cases, be de facto automatic while escaping the guarantees attached to our rights.

Second, even where certain GDPR safeguards on transparency would apply, these legal instruments seem inadequate given the characteristics of machine learning. Criticized for its opacity, the very functioning of self-learning algorithms makes it difficult to detect errors they may have learned from decisions made in the past. This problem is not in itself sufficient to reject the use of machine learning within administrations; after all, public servants are also prone to error. But unlike an official's mistake, the mistakes of machine learning can be reproduced indefinitely, as the toy example below illustrates. Is such a scenario acceptable in areas such as the granting of social allowances?
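A toy example, again with invented data and assuming scikit-learn, shows how a pattern learned from past decisions is then applied uniformly to every new case.

# A toy illustration with made-up data: if past refusals happen to cluster
# on one postal code, the model learns that rule and repeats it on every
# future case it scores.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical history: [postal code, income in thousands of euros] and
# past decisions (1 = granted, 0 = refused).
history = [[1, 20], [1, 22], [2, 20], [2, 22]]
decisions = [1, 1, 0, 0]

model = DecisionTreeClassifier().fit(history, decisions)

# Two new applicants with identical incomes but different postal codes:
# the learned refusal is now applied systematically, at machine scale.
print(model.predict([[1, 21], [2, 21]]))  # -> [1 0]

Where an official's error affects one file, a learned error like this one is stamped onto every subsequent case until someone detects it, which opacity makes hard to do.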

Behind the importance of transparency in the processing of our data lies an even more important issue: the accountability of public authority. The rule of law imposes democratic mechanisms that subject public power to the law. Without transparency, however, administrative decisions become difficult for a citizen to challenge. The complexity of machine learning makes such a challenge even harder, because its operation is difficult to understand. To avoid possible abuses by administrations, it is necessary, on the one hand, to make officials and citizens aware of the risks inherent in predictions derived from machine learning and, on the other, to establish specific guarantees when complex algorithms assist decision-making.

Earning legitimacy for their use

Efficiency within public administrations is desirable, and turning away from the analytical capacities of algorithms would mean missing important opportunities. However, the legitimacy of their use in our liberal democracies can only rest on transparency obligations and democratic controls suited to algorithms. Their deployment, faster than the creation of adequate safeguards, unfortunately leaves room for great legal uncertainty.

To gain legitimacy and win citizens' trust, governance by algorithms must come with effective guarantees, so that we are able to contest an administrative decision knowingly and avoid being unfairly discriminated against. Otherwise, many decision-support algorithms risk meeting the same fate as the one that detected social-benefits fraud in the Netherlands, where the courts ruled against the algorithmic system for its violation of the right to privacy and… its lack of transparency.


(1) European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, 2020.

(2) G. Rozières, "How artificial intelligence enabled the tax authorities to recover 640 million euros", HuffPost, October 23, 2019.

(3) CNIL, opinion on the experiment enabling data collection on online platforms, 2019.

(4) J. Zerilli et al., "Algorithmic Decision-Making and the Control Problem", Minds and Machines, 2019.


