MARY REICHARD, HOST: It’s Tuesday, the 18th of February, 2020. You’re listening to The World and Everything in It and we are so glad you are. Good morning, I’m Mary Reichard.
NICK EICHER, HOST: And I’m Nick Eicher. First up: online privacy. Or rather, lack of it.
We’ve covered aspects of the problem before.
But here’s a new one. When you post photos and videos of yourself—or your kids—online, you don’t expect them to end up in the hands of a company trying to make money off of them. Especially if that company is selling a massive surveillance database. But that’s what’s happening to a lot of people who post images to social media.
REICHARD: Clearview AI is a startup company that makes facial recognition software. It built its database by taking billions of images off the Internet. The technical term for that is scraping.
Hundreds of police departments across the country use it to help identify suspects.
This month, Google, YouTube, Venmo, and Facebook demanded the company stop scraping images from their platforms. They say Clearview AI’s practices violate users’ privacy.
Joining us now to talk about the controversy is Jason Thacker. He’s an associate research fellow at the Ethics and Religious Liberty Commission of the Southern Baptist Convention.
Good morning, Jason!
JASON THACKER, GUEST: Good morning. Thank you for having me.
REICHARD: I’d like to start with this surveillance technology itself. How does it work?
THACKER: Yeah, this is an artificial intelligence system, and what it does is build a model, kind of a map, of someone’s face. So, think of Apple Face ID, which you use to unlock your iPhone. That’s creating a facial map, and then anytime you go to unlock the iPhone, it unlocks based on your face, so it can only be opened by you. That’s a very similar type of technology being used here by Clearview AI, just for very different purposes.
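[The facial “map” Thacker describes is, in systems like this, typically a numeric embedding vector, and identification amounts to finding the enrolled face whose vector sits closest to the vector computed from a new photo. A minimal sketch in Python, with made-up names, vectors, and a 0.9 similarity threshold as illustrative assumptions, not Clearview AI’s actual method:]

```python
import math

def cosine_similarity(a, b):
    """Compare two face-embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.9):
    """Return the enrolled name whose embedding best matches the probe,
    or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

[In a real system the embeddings would come from a trained neural network, and a database of billions of scraped images would use an approximate nearest-neighbor index rather than the linear scan shown here.]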
REICHARD: Ok. Let’s compare that to what’s going on in China. Its surveillance system allows the government to locate people within a matter of minutes using facial recognition software connected to a network of security cameras. You know, it’s a scary prospect. Do you think we’re headed in that direction in the United States?
THACKER: Well, hopefully not. The New York Times broke the story of Clearview AI back in January after one of its reporters did a deep-dive investigation of the company. And there’s a lot of mystery about what Clearview’s purpose is and what its future plans are. Is this going to be limited to police and government uses? Is it going to go into private hands?
There are a lot of benefits to the technology. I mean, even these police departments have said that they’ve been able to crack a lot of cold cases—cases that sat dormant for years with no leads. But you do see a lot of abuses, especially in more authoritarian states like China, where this technology is being used by the government not to protect its people, but really to surveil and control them. And I think based on the way that our democracy is set up and kind of the privacy and human rights concerns that we have here in America that won’t happen, but obviously in the wrong hands this technology can be misused and abused in really nefarious ways.
REICHARD: Always upsides and always downsides to consider. Technology companies don’t like that their platforms are being dragged into this. But Clearview AI’s founder says his company has a First Amendment right to access publicly available information. Is he right?
THACKER: And that’s the really confusing part. He may be right. There were a couple of court cases over the last few years that seemed to grant First Amendment access to these photos because they are publicly available online. But it seems to be ambiguous enough that people are very concerned about whether this is legal. Also, the company has already built the system, training this AI algorithm on millions and millions of photos, so it is going to be very difficult to walk that back. But even in the last few weeks, you do see a lot of our congressmen and senators calling for some type of regulation. There’s lots of conversation saying maybe we should have a law on the books that forbids this type of use.
REICHARD: Well, as you allude to, some tech giants advocate for the government to regulate facial recognition software. Facebook and Microsoft among them. Is that likely? And if so, what are the potential drawbacks?
THACKER: Yeah. There are a lot of talks happening on Capitol Hill and even in state governments. States like New Hampshire and Oregon are contemplating these types of bans. Cities like San Francisco and Oakland, California, already have city-wide bans on facial recognition systems for their police departments. So there is some movement on that. Obviously, with any type of technology there are going to be benefits and there are going to be abuses. So it’s really: how do we strike that balance between security and also dignity and privacy for our people? That’s the really tough question that we’re hoping lawmakers, along with various interest groups, can come to a real level-headed agreement on: what does it look like to have privacy in this digital age?
REICHARD: Jason Thacker is with the Ethics and Religious Liberty Commission of the Southern Baptist Convention. He’s also written a book about artificial intelligence that comes out next month. It’s titled, The Age of AI. So congratulations on that and thanks for joining us today, Jason!
THACKER: Yeah, thank you for having me, Mary.