Facial Recognition Service Faces Lawsuits for Selling Social Media User Information

Photo: Clearview CEO Hoan Ton-That (left) and the company’s logo (right)

By Ben Robinson

Over the past two months, major technology companies Google, Facebook, Twitter, YouTube, LinkedIn, and Venmo have sent cease-and-desist letters to facial recognition technology company Clearview AI for violating their company policies, after it was revealed that Clearview had been scraping their sites for images of users' faces to build a facial recognition app sold to law enforcement and private companies.

Clearview, according to its website, is a “new research tool used by law enforcement agencies to identify perpetrations [sic] and victims of crimes.” The service is a machine learning algorithm that draws on a database of over 3 billion images ‘scraped’ from the internet. When a user wishes to identify a person, they upload an image to the app, which compares it against the database. If any matches are found, the app provides the origins of the matching images. Though the app is used principally by law enforcement, private companies are permitted to use it if Clearview considers them “security professionals.”
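In machine-learning terms, a service like this typically works by converting each face image into a numeric “embedding” vector and then searching its database for the most similar stored vectors. The following is only an illustrative sketch of that general idea, not Clearview's actual code; the URLs and vector values are hypothetical, and a real system would compute embeddings with a trained neural network.

```python
import math

def cosine_similarity(a, b):
    # Measure how similar two face embeddings are (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def find_matches(query, database, threshold=0.9):
    """Return (source_url, score) for every stored face above the threshold,
    best matches first — analogous to the app reporting image origins."""
    matches = []
    for url, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score >= threshold:
            matches.append((url, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical scraped database: source URL -> face embedding.
database = {
    "https://example.com/profile1.jpg": [0.9, 0.1, 0.3],
    "https://example.com/profile2.jpg": [0.1, 0.8, 0.5],
}

# A query photo whose embedding closely resembles profile1's.
query = [0.88, 0.12, 0.31]
print(find_matches(query, database))
```

At Clearview's claimed scale of billions of images, a linear scan like this would be far too slow; production systems rely on approximate nearest-neighbor indexes instead, but the matching principle is the same.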

Clearview claims an accuracy rate of at least 98.6% when a photo is available, and that it can identify people even if they are wearing a hat or glasses. However, this result was derived from a test carried out by the company itself under ideal conditions.

Owing to its massive photo database, which is over 4.5 times the size of the FBI’s, the company claims that it can find a photo match 75% of the time. In 2011, Google claimed that while it had also built similar AI technology, it had withheld it out of concern that it could be used in a “very bad way.”

Kashmir Hill, a journalist who wrote an exposé last month on Clearview’s practices, said that Clearview began monitoring her shortly after she began to investigate the company. During her investigation, she had asked a number of police officers to run her photo through the app. Afterwards, “[these officers] soon received phone calls from company representatives asking if they were talking to the media.”

To distract from their own invasions of user privacy, multiple social media companies have publicly retaliated against Clearview. On January 26, Twitter sent the first cease-and-desist letter to the company, insisting that it remove all images scraped from Twitter. This was followed by similar letters from Google, YouTube, LinkedIn, Facebook, and Venmo this month.

Clearview’s CEO, Hoan Ton-That, says he intends to fight these demands in court, claiming that the company legally uses public information as protected by the First Amendment. A similar argument has succeeded in the past: data analytics firm HiQ, another scraping company, prevailed against a lawsuit from the social media company LinkedIn.

Meanwhile, Clearview has attempted to assuage public outrage. In a post on their website, they claim that the “app has built-in safeguards to ensure […] trained professionals only use it for its intended purpose.”

One supposed safeguard is a mechanism that allows users’ supervisors to monitor and revoke access to the app. However, anyone with a government email can request access to the service without departmental approval. This allows government officials to use the app as individuals, even if their department has banned its use. The New York Police Department (NYPD) is confirmed to have around 36 ‘rogue’ officers who use the service outside of NYPD supervision.

Clearview’s website claims that it “was designed and independently verified to comply with all federal, state, and local laws.”

However, Clearview is currently facing a lawsuit seeking class action status, filed in the Northern District of Illinois, Eastern Division, which alleges that Clearview violates the state’s ban on selling biometric information. The New Jersey Attorney General has also sent the company his own cease-and-desist letter and barred law enforcement statewide from using Clearview. Apart from some rare municipal laws in places like San Francisco that bar various uses of facial recognition technology, there are no federal laws governing Clearview’s practices; they remain legal throughout most of the country.

At least 600 law enforcement agencies have used Clearview’s service; this number includes agencies that only used the free trial. The actual total is likely much larger as the company will not provide a list of clients. The company also permits “users who are not law enforcement or security professionals [to] use the services if the User obtains express, written consent from an authorized representative of Clearview in advance.”

Even if Clearview loses access to social media images or its business model is outlawed, demand for effective facial recognition will not simply disappear. Facial recognition is already built into the newest iPhones, and Apple has demonstrated a willingness to collaborate with law enforcement and intelligence agencies.

The NYPD uses a network of over 9,000 security cameras throughout the city, some equipped with license plate readers and facial recognition software, in a partnership with Microsoft named the ‘Domain Awareness System.’ While the department claims the system is meant to stop terrorist attacks, it has also been used in routine police investigations. Most notably, in August 2019 the Domain Awareness System, along with facial recognition software, was used to arrest a homeless man on three counts of “planting a false bomb” after he left empty rice cookers in public places, sparking bomb fears and subsequent delays in the transit system.

Similar technology is in widespread use by the social-fascist government of China. The technology is so ubiquitous that airports and the Beijing Subway System are fully equipped with facial recognition surveillance systems and cell phone users are required to link their SIM cards to facial recognition software. The system is often used to crack down on dissent, particularly in the Xinjiang region of China where facial recognition cameras are deployed to track even the smallest rebellions by the minority Uyghur population.

This new technology does not make society safer; it only serves the ruling class and its enforcers in the repression of the working class. Ultimately, it is the people, and not advanced technology, which is principal in the struggle between the proletariat and the bourgeoisie; this technology will prove no match for the fury of the masses and their infinite creativity.