Imagine you’re at a bar and you see a person you find attractive.
You sneakily take a photo of them, and use that photo in an app that pulls up every public photo of that person available online. Links to each photo are also provided, meaning you can find out this person’s name, workplace, hometown, friends, and more, without even talking to them. An app called Clearview AI has made the potential for this situation a reality.
Recently, New York Times reporter Kashmir Hill investigated the tiny start-up that’s taking revolutionary steps in facial recognition technology. Clearview AI was developed by Hoan Ton-That, a San Francisco techie by way of Australia, who marketed the app as a tool for law enforcement to track down suspects. Clearview’s database contains over three billion images scraped from millions of websites; the premise is that when you upload a photo of a person, the app returns public photos of that person along with links to the sites where those photos appeared.
[Image: facial recognition technology applied to a crowd. Getty Images]
Though this sounds like a remarkable tool for law enforcement, Clearview poses severe threats to privacy if placed in the wrong hands. As Hill described in her appearance on Times podcast The Daily this week, someone with malicious intent could theoretically take a photo of a stranger, upload it to Clearview, and uncover personal information like that person’s name, where they work, where they live, and who their family members are. In short: the concept is so risky that companies capable of building the same technology first, like Google, refused to release it.
Still, Clearview claims that over 600 law enforcement agencies have been using the app, although it has kept its list of customers private. Clearview’s investors have cited the app’s crime-solving capabilities as justification for backing it, and Clearview has already helped track down suspects on numerous occasions. But, as Hill’s reporting found, the app isn’t always accurate and may not be free of manipulation: “After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches,” Hill wrote. “When asked about this, Mr. Ton-That laughed and called it a ‘software bug.’” Later, when Ton-That ran another photo of Hill through the app, it pulled up a decade’s worth of photos, many of which Hill didn’t even realize were public.
“Our belief is that this is the best use of the technology,” Ton-That told Hill. But is Clearview’s usefulness in law enforcement worth leaving our privacy behind every time we leave the house?