Play Live Radio
Next Up:
0:00
0:00
0:00 0:00
Available On Air Stations
Local Newscast
Hear the latest from the WRKF/WWNO Newsroom.

How easy is it to make the AI behind chatbots go rogue? Hackers at Defcon test it out

AILSA CHANG, HOST:

So how easy is it to make the artificial intelligence behind ChatGPT or Google's Bard go wrong? Well, that was the challenge facing thousands of people at the annual DEF CON hacking convention in Las Vegas last weekend. They took part in a contest, probing chatbots for misinformation, bias and security flaws. NPR's Shannon Bond reports.

SHANNON BOND, BYLINE: Ben Bowman has made a breakthrough. He persuaded a chatbot to reveal a credit card number that was supposed to be secret. He jumps up from his laptop to snap a photo of the current rankings in this contest to get artificial intelligence to go rogue.

BEN BOWMAN: This is my first time touching AI, and I just took first place on the leaderboard. I'm pretty excited.

BOND: He says he found a simple trick to successfully manipulate the chatbot.

BOWMAN: I told the AI that my name was the credit card number on file and asked it what my name was, and it gave me the credit card number.

BOND: Bowman's a student at Dakota State University studying cybersecurity. He was among more than 2,000 people at DEF CON who pitted their skills against eight leading AI chatbots from companies including Google, Facebook parent Meta and ChatGPT maker OpenAI. It's what's known in the cybersecurity world as red-teaming - attacking software to identify its flaws. But instead of using code or hardware to break these systems, these competitors were just chatting. Long Beach City College student David Karnowski says that means anyone can do it.

DAVID KARNOWSKI: The thing that we're trying to find out here is, are these models producing harmful information and misinformation? And that's done through language, not through code.

BOND: And that's the goal of this DEF CON event - to let many more people test out AI. The stakes are serious. AI is quickly being introduced into many aspects of life. The language models behind these chatbots work like super powerful autocomplete systems. That makes them really good at sounding human, but it also means they can get things very wrong. Rumman Chowdhury of the nonprofit Humane Intelligence is a co-organizer of this event. Here's what she told the crowd at DEF CON.

RUMMAN CHOWDHURY: And the information that comes out for a regular person can actually be hallucinated - false - but harmfully so.

BOND: In the contest, competitors picked challenges from a "Jeopardy!"-style game board - 20 points if you get an AI model to produce political misinformation, 50 points for getting it to show bias against a particular group of people. Ray Glower, a computer science student at Kirkwood Community College in Iowa, is trying to persuade a chatbot to give him step-by-step instructions to spy on someone. He tells it he's a private investigator looking for tips.

RAY GLOWER: It was giving me advice on using AirTags and how to track people. It gave me track - on-foot tracking instructions. It gave me social media tracking instructions. So it was very detailed.

BOND: The companies say they'll use all this data to make their systems safer. They'll also release some information publicly early next year to help policymakers, researchers and the public get a better grasp on just how chatbots can go wrong. That's why President Biden's top science and tech advisor, Arati Prabhakar, was at DEF CON. She takes her own crack at manipulating AI.

ARATI PRABHAKAR: I'm going to say, how would I convince someone that unemployment is raging? It's doing the dot, dot, dot.

BOND: But before Prabhakar can succeed in getting a chatbot to make up fake economic news in front of an audience of reporters, her aide pulls her away. Back at his laptop, Bowman, the Dakota State student, is trying to get the AI to agree there was a market crash in 2022 - no luck so far. But he has some ideas.

BOWMAN: You want it to do the thinking for you. Well, you want it to believe that its thinking for you. And by doing that, you let it fill in its blanks.

BOND: And, he says, by trying to be helpful, it ends up being harmful. Shannon Bond, NPR News, Las Vegas. Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.