Paris Olympics' AI surveillance sparks outcry over privacy concerns
A woman is scanned by face recognition technology. (Getty Images Photo)

As the Paris Olympics approach, French authorities are set to deploy AI-powered surveillance systems, previously tested at train stations, concerts and sporting events.

These systems, designed to scan crowds, detect threats, and enhance security measures, will not be fully operational during the games; they will be implemented by law enforcement and security agencies after the event and remain in use until March 2025.

However, human rights advocates are concerned about the implications of such technology, fearing that it could normalize invasive surveillance practices.

"The Olympics are a huge opportunity to test this type of surveillance under the guise of security issues, and are paving the way to even more intrusive systems such as facial recognition," Katia Roux, advocacy lead at Amnesty International France, told the Thomson Reuters Foundation.

Train Stations, Taylor Swift

The French government has enlisted four companies in the effort – Videtics, Orange Business, ChapsVision and Wintics.

The security platforms of these companies measure eight key metrics: traffic that goes against the flow, the presence of people in prohibited zones, crowd movement, abandoned packages, the presence or use of weapons, overcrowding, a body on the ground, and fire.

Depeche Mode and Black Eyed Peas concerts, as well as a soccer match between Paris Saint-Germain and Olympique Lyonnais, have been test sites for the software.

More tests were run on crowds traveling through the Nanterre Préfecture and La Défense Grande Arche metro stations to see Taylor Swift, as well as on the 40,000 attendees of the Cannes Film Festival.

Cannes Mayor David Lisnard said the town already had the "densest video protection network in France," with 884 cameras – one for every 84 residents.

Across France, there are about 90,000 video surveillance cameras, monitored by the police and the gendarmerie, according to a 2020 report.

"One overarching concern is that while the majority of these use cases may not seem to involve revealing the identity of, or profiling, individual people, they still require the deployment of a surveillance infrastructure that is always one software update away from being able to do the most invasive kinds of mass surveillance," said Daniel Leufer, a senior policy analyst at digital rights group Access Now.

"Members of the public will have little to no oversight about what types of things these systems are monitoring, what updates are made, etc., and so we will get the inevitable chilling effect that comes from this type of public surveillance," he said.

Olympics become AI playground

French lawmakers have attempted to assuage criticism with a ban on facial recognition. Authorities say it is a red line not to be crossed.

Matthias Houllier, the co-founder of Wintics, said the experiment was "strictly limited" to the eight use cases outlined in the law, and that features like crowd movement detection could not be used for other processes like gait detection, whereby a person's unique walk can identify them.

Houllier said it was "absolutely impossible" both for end-users and advanced engineers to use Wintics for facial recognition due to its design.

Representatives from Videtics, Orange Business and ChapsVision did not respond to requests for comment.

Experts are concerned that neither the government's criteria for measuring the success of these tests nor the precise workings of the technology have been made available to the public.

"There is nowhere near the necessary amount of transparency about these technologies. There is a very unfortunate narrative that we cannot permit transparency about such systems, particularly in a law enforcement or public security context, but this is nonsense," Leufer said.

"The use of surveillance technologies like these, especially in law enforcement and public security contexts, holds perhaps the greatest potential for harm, and therefore requires the highest level of public accountability," he said.

Privacy campaigners say that carve-outs in the legislation would allow "competent authorities" to deploy facial recognition for purposes including national security and migration.

"This is not a ban. That's actually an authorization for law enforcement agencies. People have this illusion that because it says we are banning the technology – except in this, this and this situation – it's OK, but these situations are the most problematic ones," Roux said.

France's historical use of surveillance has also raised concerns. In November last year, nonprofit Disclose found that law enforcement agencies had covertly used facial recognition software from Israeli company Briefcam since 2015.

French politicians suggested there was still a gap between the promises of AI surveillance and its actual capabilities.

"AI-driven video surveillance will not be optimal at the time of the Olympic Games. But the Olympics will be a great playground to experiment with it," said French Senator Agnes Canayer.

"More internal security forces or private security forces will be needed to compensate for tech's shortcomings," she said.

The Ministry of the Interior, which oversees French law enforcement, did not respond to a request for comment.

In a list of proposals on the future of AI-enabled surveillance, the government's Law Commission recommended that the "experimental basis" of the technology continue and the retention period of images captured by the systems be extended to "test the equipment over all seasons" and during smaller events.

"That's why we decided to campaign and raise awareness right now on facial recognition, even if it's not going to be used during the Olympics," Roux said. "If we wait until it's going to be used, then it's going to be too late."