AI will succeed only if it inspires confidence in consumers and citizens. Making automated decisions explainable and transparent is a challenge that experts in psychology and pedagogy are taking up.
On February 5, the court in The Hague prohibited the Dutch government from using SyRI, a social-security fraud-detection software. The Dutch administration had refused to reveal the computer code of this automated system, which combined various data (tax declarations, amounts of aid received, etc.) and targeted the poorest populations.
“We hope that this precedent will encourage states to publish the code of the algorithms they use,” said Amos Toh, a researcher in charge of AI and human rights at Human Rights Watch (HRW), the American NGO, who followed this trial. “This would allow third-party organizations to verify the proper functioning of these programs, and citizens to understand how the decisions concerning them were reached.”
“Black boxes”
Automated decision-making systems, whether based on logic models (expert systems, etc.) or on statistics (machine learning, deep learning, etc.), are starting to affect the lives of millions of citizens, consumers and employees: tracking down tax fraud, recruitment, facial recognition, selection for higher education, monitoring the productivity of workers and warehouse handlers, granting bank loans, chatbots, etc.
With each advance, experts remind us that these software systems are merely “black boxes”. “We spend too much energy and time trying to understand how certain algorithms work and reach their decisions,” sighs Marie David, a graduate of the École Polytechnique and Ensae (the National School of Statistics and Economic Administration), who has led data and AI teams in banking and insurance and co-wrote “Artificial Intelligence, the New Barbarism” (Éditions du Rocher).
The challenges of this algorithmic transparency are legal, economic and societal. Legal? “European laws, like the GDPR, and French law impose a form of transparency on the logic underlying automated processing of personal data,” recalls Grégory Flandin, director of artificial intelligence for critical systems at IRT Saint-Exupéry, a Toulouse technological research institute. “The GDPR has a strong impact,” confirms Jean-Philippe Desbiolles, IBM's world vice president of AI and data for the financial sector. “Internal audits devoted to the absence of bias, the transparency and the explainability of AI have been increasing sharply in recent months among our customers. The question of trust in these systems is essential to their adoption.” AI platform publishers also offer their users tools detailing, a little (see below), how their software works. And large audit and consultancy firms are hiring specialists. “For the past seven months, I have been responsible for the ethics of AI at EY,” explains Ansgar Koene, also a computer-science researcher at the University of Nottingham (England) and author, in April 2019, of…
Ethical issues
Economic issues? “George Akerlof, the 2001 Nobel laureate in economics, showed that if a market becomes opaque for some of its players, it risks collapsing,” recalls Patrick Waelbroeck, professor of economics at Télécom ParisTech and co-founder of the Values and Policies of Personal Information chair. “This is what could happen to AI if we do not manage to inject more transparency.”
Societal challenges? “The risk is to see consumers or citizens not only reject decisions they do not understand, but also reject the companies or states using this software,” says Anne Bouverot, chairwoman of Technicolor and of the Abeona foundation (“data science for equity and equality”). “All the ethical issues raised by AI have already been addressed, except algorithmic transparency and accountability. This is why a special effort must be made to educate on these two aspects,” says Teemu Roos, who teaches AI and data science at the University of Helsinki (Finland). Initiatives are multiplying, on two fronts: training as many citizens as possible on the major challenges of these algorithms, and finding ways to explain an AI's decision process simply to a lay person.
Impact on privacy
From the beginning of April, all French people will be able to learn about AI thanks to two MOOCs. One, already available in six European languages, comes from Helsinki: “Launched in May 2018, Elements of AI has been taken by 362,000 people, only a third of whom are Finnish,” says Megan Schaible, chief operating officer at Reaktor, the Helsinki technology consultancy that designed this MOOC with Teemu Roos. The other course, Objectif IA, is a collaboration between the think tank Institut Montaigne, the online-education specialist OpenClassrooms and the Abeona foundation.
“Transparency by design”, the psychology of explanation, the impact on privacy… initiatives are multiplying on how best to explain software results to the general public. “We want transparency vis-à-vis candidates to be built into our next artificial-intelligence platforms from their conception,” says Steve O’Brien, operations manager at Job.com, an American online recruitment site that is reorganizing around AI, without giving further details. “What is a good explanation? How do you assess its usefulness? All these psychological questions are also part of our program,” reveals Matt Turek, head of the Explainable Artificial Intelligence (XAI) project at DARPA, the research agency of the US Department of Defense. Many academics are working on methods, such as Quantitative Input Influence, that attempt to measure the relative weight of each parameter in an AI system’s decisions. Yair Zick, who teaches computer science at the National University of Singapore, is one of them. He draws attention, however, to the privacy impact of these explanatory systems. “The more information you give about an algorithm, the more you give other users the opportunity to apply it to other people and thus learn more about them!” he warns.
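The intuition behind Quantitative Input Influence can be sketched in a few lines: intervene on one input feature at a time by resampling it from the population, then measure how often the model’s decision flips. Below is a minimal illustration of that idea with a hypothetical rule-based loan model; the model, the data and all names are invented for this sketch, not taken from any real system described in the article.

```python
import random

# A toy "black box" loan model (hypothetical, for illustration only):
# approve if income covers twice the requested amount and there are
# no past defaults.
def loan_model(income, amount, past_defaults):
    return income >= 2 * amount and past_defaults == 0

# A small synthetic population of applicants.
random.seed(0)
population = [(random.randint(10, 100),      # income
               random.randint(5, 40),        # requested amount
               random.choice([0, 0, 0, 1]))  # past defaults
              for _ in range(1000)]

def influence(feature_index):
    """Share of applicants whose decision flips when one feature is
    replaced by a value drawn from the population (the unary QII idea)."""
    flips = 0
    for person in population:
        original = loan_model(*person)
        donor = random.choice(population)
        intervened = list(person)
        intervened[feature_index] = donor[feature_index]
        if loan_model(*intervened) != original:
            flips += 1
    return flips / len(population)

for name, idx in [("income", 0), ("amount", 1), ("past_defaults", 2)]:
    print(f"{name}: {influence(idx):.2f}")
```

A feature that rarely changes the outcome when resampled gets a low score; a decisive one gets a high score. This is only the simplest variant of the approach: the published QII work also covers set-valued and aggregated influence measures, and adds noise to protect the privacy of the individuals in the population.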
Read the full article in Les Echos.