By Mathieu Rosemain
PARIS (Reuters) – France’s data privacy watchdog CNIL has ordered Clearview AI, a facial recognition company that has collected 10 billion images worldwide, to stop amassing and using data from people based in the country.
In a formal demand disclosed on Thursday, the CNIL stressed that Clearview's collection of publicly available facial images from social media and the wider internet had no legal basis and breached European Union rules on data privacy.
The regulator said the software company, whose product serves as a search engine for faces used by law enforcement and intelligence agencies in their investigations, failed to ask for the prior consent of those whose images it collected online.
“These biometric data are particularly sensitive, notably because they are linked to our physical identity (what we are) and allow us to be identified in a unique way,” the authority said in a statement.
It added that the New York-based firm failed to give those concerned proper access to their data, notably by limiting access to twice a year without justification, and by restricting this right to data collected during the 12 months preceding a request.
Clearview did not immediately reply to a request for comment.
EU law provides for citizens to seek the removal of their personal data from a privately-owned database. The CNIL said Clearview had two months to abide by its demands or it could face a sanction.
The decision follows several complaints, among them one by advocacy group Privacy International, and comes after a similar order by the CNIL's Australian counterpart, which told Clearview to stop collecting images from websites and to destroy data gathered in the country.
The U.K. Information Commissioner's Office, which worked with the Australian regulator on the Clearview investigation, also said last month it intended to fine Clearview 17 million pounds ($22.59 million) for alleged breaches of data protection law.
($1=0.7526 pounds)
(Reporting by Mathieu Rosemain)