Clearview AI
Clearview AI is a US-based company that offers a facial recognition platform to law enforcement agencies, allowing them to search for individuals in real time against a vast database of images scraped from the internet, including social media platforms.
The company says its database contains an "astounding" 30 billion images, making it one of the largest facial recognition databases in the world.
Clearview AI was founded in 2017 by Hoan Ton-That and Richard Schwartz. Its services have reportedly been used by over 2,400 law enforcement agencies worldwide, including the FBI, the Department of Homeland Security, and local police departments.
Facial recognition system
A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces.
Source: Wikipedia
Website: Clearview AI
Released: 2017
Developer: Clearview AI
Purpose: Identify individuals; Strengthen law enforcement
Type: Facial recognition
Technique: Computer vision; Machine learning
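At a technical level, systems of this kind typically convert each face image into a numerical embedding and then search a gallery of stored embeddings for the closest matches. The sketch below is purely illustrative of that generic approach; it uses random vectors in place of real embeddings and does not describe Clearview AI's undisclosed algorithm.

```python
# Illustrative sketch of embedding-based face matching (generic technique,
# NOT Clearview AI's actual system). Random vectors stand in for embeddings
# that a real system would compute from face images with a trained model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe: np.ndarray,
                   gallery: dict[str, np.ndarray],
                   threshold: float = 0.6):
    """Return (identity, score) pairs above threshold, best match first.

    `gallery` maps an identity label to a stored face embedding.
    The threshold here is arbitrary; real systems tune it to trade off
    false matches against false non-matches.
    """
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    matches = [(name, score) for name, score in scores.items() if score >= threshold]
    return sorted(matches, key=lambda item: item[1], reverse=True)

# Example with synthetic vectors standing in for real face embeddings
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}
probe = gallery["person_3"] + 0.05 * rng.normal(size=128)  # noisy re-capture
print(search_gallery(probe, gallery))
```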
Clearview AI's technology is often used in secret, without transparency or accountability, making it difficult to track its use and impact. Here are some of the tool's transparency and accountability limitations:
Policies. Clearview AI's policies and procedures for collecting, storing, and using facial recognition data are not clearly disclosed.
Data sources. Clearview AI does not disclose the sources of its facial recognition data, making it difficult to understand how the data is collected and used.
Data sharing. Clearview AI does not disclose with whom it shares its facial recognition data, making it difficult to understand how the data is used and by whom.
Data storage and security. Clearview AI does not disclose how it stores and secures its facial recognition data, making it difficult to understand how the data is protected.
Algorithmic decision-making. Clearview AI does not provide transparency on how its facial recognition algorithm works, making it difficult to understand how decisions are made.
Independent oversight. Clearview AI does not have an independent oversight body to review its facial recognition practices and ensure compliance with laws and regulations.
Law enforcement use. Clearview AI does not disclose how its facial recognition technology is used by law enforcement agencies, making it difficult to understand how the technology is being used in practice.
Misuse. Clearview AI does not have clear policies or procedures in place to address misuse of its facial recognition technology, making it difficult to hold the company accountable for any harm caused.
Complaints and appeals. Clearview AI does not have a clear complaint mechanism for individuals to report concerns or errors related to its facial recognition technology.
Error rates. Clearview AI does not disclose its error rates for facial recognition, making it difficult to understand the accuracy of its technology; how such rates are conventionally measured is sketched after this list.
Bias and discrimination. Clearview AI does not disclose how it addresses bias and discrimination in its facial recognition technology, making it difficult to understand how the technology may be unfairly impacting certain groups.
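For context on what disclosing error rates would involve, facial recognition accuracy is conventionally summarised as a false match rate (FMR) and a false non-match rate (FNMR) at a chosen similarity threshold. The sketch below uses synthetic scores and is purely illustrative; it does not reflect Clearview AI's actual performance.

```python
# Illustrative sketch of how facial recognition error rates are reported.
# The similarity scores below are synthetic and do not describe any real system.
import numpy as np

def error_rates(genuine_scores, impostor_scores, threshold):
    """FNMR: share of genuine (same-person) comparisons rejected.
       FMR:  share of impostor (different-person) comparisons accepted."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    fnmr = float(np.mean(genuine < threshold))
    fmr = float(np.mean(impostor >= threshold))
    return fmr, fnmr

# Synthetic similarity scores standing in for a real evaluation set
rng = np.random.default_rng(1)
genuine = rng.normal(0.75, 0.1, 10_000)   # same-person comparisons
impostor = rng.normal(0.35, 0.1, 10_000)  # different-person comparisons

for t in (0.5, 0.6, 0.7):
    fmr, fnmr = error_rates(genuine, impostor, t)
    print(f"threshold={t:.1f}  FMR={fmr:.4f}  FNMR={fnmr:.4f}")
```

Raising the threshold lowers the false match rate but raises the false non-match rate, which is why accuracy claims are meaningless without the threshold and evaluation data being disclosed.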
Clearview AI is seen to pose multiple risks and harms, including:
Privacy and surveillance. Clearview AI scraped people's images from social media and other online platforms without consent, violating privacy and enabling the mass surveillance of individuals.
Civil liberties. The use of Clearview AI's technology may undermine freedom of speech, the right to assemble, and other civil liberties.
Bias, discrimination and misidentification. Facial recognition technology has been shown to have a disproportionate impact on vulnerable groups, including people of colour, immigrants, and low-income communities.
Security. Clearview AI's database has been shown to be vulnerable to data breaches, thereby compromising sensitive information about individuals and its customers.
Erosion of trust. The use of Clearview AI's technology can erode trust between law enforcement and the communities they serve, particularly in communities that have historically been subject to discriminatory policing practices.
Notable incidents and reports involving Clearview AI include:
August 2024. Dutch regulator fines Clearview AI for privacy violations
May 2023. Austria's privacy watchdog rules that Clearview AI violated the EU's GDPR on multiple counts
November 2022. Randal Reid facial recognition wrongful arrest, jailing
October 2022. French privacy watchdog fines Clearview AI for violating privacy
July 2022. Greece's privacy watchdog fines Clearview AI EUR 10 million
May 2022. The UK ICO fines Clearview AI GBP 7.5 million
March 2022. Ukraine decision to use Clearview AI facial recognition draws concerns
March 2022. Italy's privacy watchdog fines Clearview AI EUR 20 million
June 2021. RCMP violates Canadians' privacy using Clearview AI facial recognition
May 2021. US Postal Inspection Service runs covert protestor monitoring programme
February 2021. Swedish police illegally used Clearview AI
December 2020. Nijeer Parks facial recognition wrongful arrest
August 2020. NYPD uses facial recognition to identify BLM activists
August 2020. Macy's accused of illegally using Clearview AI to monitor customers
May 2020. ACLU accuses Clearview AI of violating Illinois citizens' privacy
March 2020. Clearview AI tests live facial recognition cameras
January 2020. NYT: Clearview AI scrapes people's images without consent
Notable lawsuits against and involving Clearview AI include:
In re: Clearview AI, Inc. Consumer Privacy Litigation
Regulatory actions against and involving Clearview AI:
Australia. The Office of the Australian Information Commissioner (OAIC) launched an investigation into Clearview AI's facial recognition technology and its compliance with the Australian Privacy Act.
Austria. Austria's Data Protection Authority ruled that Clearview AI violated the EU's GDPR on multiple counts.
Canada. The Canadian Office of the Privacy Commissioner launched an investigation into Clearview AI's facial recognition technology and its compliance with Canadian privacy laws. Clearview AI exited the Canadian market in July 2020.
France. The Commission Nationale Informatique & Libertés (CNIL) launched an investigation into Clearview AI's facial recognition technology and its compliance with the French Data Protection Act. The CNIL fined Clearview AI EUR 20 million.
Germany. Hamburg's Data Protection Authority found that Clearview AI unlawfully processed an individual's data by deriving biometric information from their facial image, and ordered the deletion of the specific biometric data.
Greece. The Hellenic Data Protection Authority (HDPA) launched an investigation into Clearview AI's facial recognition technology and its compliance with Greek data protection law. The HDPA fined Clearview AI EUR 10 million.
Italy. The Italian Data Protection Authority (Garante) launched an investigation into Clearview AI's facial recognition technology and its compliance with the Italian Data Protection Code. The Garante fined Clearview AI EUR 20 million.
Netherlands. The Dutch Data Protection Authority (DPA) fined Clearview AI EUR 30.5 million for multiple violations of the General Data Protection Regulation (GDPR).
UK. The UK Information Commissioner's Office (ICO) launched an investigation into Clearview AI's facial recognition technology and its compliance with the UK's Data Protection Act. The ICO fined Clearview AI GBP 7.5 million in May 2022.
Sweden. The Swedish Authority for Privacy Protection found that the Swedish Police Authority processed personal data in breach of the Swedish Criminal Data Act when using Clearview AI to identify individuals.
Vermont. The Vermont Attorney General's office sued Clearview AI for violating consumer protection law. Vermont had its motion for summary judgement denied.
Page info
Type: System
Published: August 2024
Last updated: October 2024