NICK EICHER, HOST: Today is Wednesday, July 24th. Thank you for turning to WORLD Radio to help start your day. Good morning. I’m Nick Eicher.
MARY REICHARD, HOST: And I’m Mary Reichard. Coming next on The World and Everything in It: the ethics of AI, artificial intelligence.
Robert Marks is an engineering professor at Baylor University. He’s also director of the Discovery Institute’s Walter Bradley Center for Natural and Artificial Intelligence.
EICHER: WORLD Radio’s J.C. Derrick recently spoke with the professor about ethics and AI.
DERRICK: Should Christians be more fearful or excited about AI?
MARKS: I think that AI—like any technology—is neither good nor bad. It’s how it’s used. Short answer.
DERRICK: OK. Well, can you unpack that a little bit? Meaning, it can be used for good or for evil, but, I mean, do you see one of those as more common than the other right now?
MARKS: Actually, I see a lot of good. I think we’re numbed by familiarity in the adoption of AI. Certainly Uber and Lyft and Amazon and all of these other applications of AI are so ubiquitous that they’ve become very normal. But they are certainly life-changing. And I think that is going to continue and we’re going to see lots of positive impact in terms of the application of AI.
But I also see it being used in the monitoring and the loss of privacy with people and I worry about that. I worry about the loss of my privacy, which I surrendered a long time ago to Amazon.com when I signed their agreement. And Google also, probably.
But I’m really afraid about the use of this, especially the way I understand the Chinese are going to be using it for controlling and monitoring their population.
DERRICK: Well, my very next question was about China. I mean, would you say they’re the most—that they’ve gone the farthest down that road at this point?
MARKS: Well, I think that they’re the only ones that can, I hope. In the United States, we’re still backed by—what is it?—the Fourth Amendment that says we have the right to privacy. So I hope that indeed it stays that way and we have the right of privacy, which the Chinese unfortunately don’t have.
DERRICK: Right, right. Well, what do you make of the claims that AI can make music and art?
MARKS: I believe that AI only can think inside the box. Let me give you an example. Many times we talk about and we hear about AI creating music. But here’s a typical scenario: the AI is fed a number of compositions by, say, Johann Sebastian Bach, and then it is churned around in the artificial intelligence which is asked to write a song and guess what it writes. It writes something that sounds like Bach. It doesn’t sound like anything that Richard Wagner or Schoenberg or Stravinsky would write.
That would require originality. That would require creativity. The AI only has the ability to interpolate inside the box, inside of the training data, and is unable to think outside of that. And I think that creativity normally requires abandonment of the status quo in order to reach out into new, unexplored horizons. And we don’t see that yet in artificial intelligence.
DERRICK: Well, one thing that’s also been discussed is the possibility that AI could eventually outsmart humans—advance beyond us. Do you see that as any sort of possibility?
MARKS: Well, I’m humbled by my calculator. It certainly has outpaced me in terms of addition. But in terms of becoming smarter, and computer programs writing better and more powerful computer programs, which in turn write better and more powerful computer programs, no.
Computer programs, in general, do not have the capability of being creative. And in order to write a better computer program, you have to display creativity. And that creativity can only exist if the programmer places it directly within the computer program, which means that the computer program itself is not creative. It’s actually the computer programmer who is supplying that creativity.
So that’s where any creativity in a smarter program would come from. Somehow I don’t believe that it will happen.
I also know that people who looked at writing smarter programs using genetic algorithms and evolutionary programming have largely abandoned their search, because they’ve tried a bunch of different things and nothing seems to work. They can’t get smarter programs that way.
But I also know people that are very excited about trying other ways. I don’t think they’re going to work, though.
DERRICK: Well, along those same lines, there have been these claims of AI somehow making humans immortal. So, can you talk about where those claims come from and what you make of them?
MARKS: Well, certainly the idea of immortality from uploading ourselves into a digital computer comes from a materialistic point of view, which says that everything we are boils down to matter.
However, there are things a human can do which cannot be captured by a computer algorithm. They cannot be captured by code.
Saying that means that uploading would only allow us to look at the algorithmic part of our being. And certainly we have algorithmic capabilities. We can add a column of numbers, for example. That’s algorithmic.
But love, compassion, creativity, qualia, the other things that I mentioned: those will not be uploadable, because those are not programmable. Those are not algorithmic.
DERRICK: Sure. OK. Earlier this year, the Ethics and Religious Liberty Commission released an Evangelical Statement on Artificial Intelligence. So did you see that as a helpful marker for Christians who want to think through this issue more deeply?
MARKS: I did. I must admit, I only perused the article, but my colleague Jay Richards wrote a very nice synopsis of that statement. And it was backed with science, and it was backed with good engineering, so it seemed to be very, very solid. It was also backed by good Christian theology. So, yes, I think it is a very good statement.
DERRICK: And I assume also just the fact that it’s out there speaks to the fact that we can’t ignore this. It’s not going away.
MARKS: Well, no, it’s not going away. But neither is electricity. [Laughs] It’s a new technology, and as we go forward with it, it’s going to be like the adoption of any new technology. There are going to be dangers and there are going to be good applications. We use electricity and people still burn down their houses because of frayed insulation and people still get electrocuted, but we still use it. It’s been harnessed primarily for good. And hopefully that’s what will happen with AI also.