Building AI You Can Trust

Researcher Heading New $20M NSF-Backed Institute Says Systems Should Support Societal Good

We love artificial intelligence, at least in the movies. But is it trustworthy? Nah. Judging from “The Terminator’s” killer bot, the cloyingly malignant computer in “2001: A Space Odyssey” and the sunglasses-wearing agents in “The Matrix,” AIs are jerks.

But the idea that AI can do good rather than evil is the driving force behind the multi-institutional Institute for Trustworthy AI in Law & Society (TRAILS) at UMD, backed by a $20 million National Science Foundation grant.

TRAILS explores decidedly real-world AI problems: gender-biased recommendations from hiring systems, racist mortgage offers from automated bank software, social media platforms that stoke division.

With the rise of eerily human-like chatbots pushing public interest in AI to new levels, Terp consulted Hal Daumé III, a computer science professor leading TRAILS, to discuss what makes AI fun and the questions that need answers to ensure the powerful technology is more WALL-E, less Skynet.

AI is an old, familiar topic. What made it so buzzy recently?
There was an enormous jump from familiar AI, like automated customer phone support or the little “chat” button on a website, to something like ChatGPT. You don’t have to be an expert to see it’s night and day.

These new systems are quite entertaining. Right after Bing Chat was released, I was talking to someone from Mexico at a birthday party for someone from Germany. We asked it for a mashup Mexican-German dessert, and it came up with a Berliner with flan inside. I think it would taste pretty good, although I’m not totally convinced the recipe would work.

The systems seem to be capable of some uncanny, hard-to-explain behavior.
Later, I did a web search to see if a flan-filled Berliner recipe already existed, but it looks like the chatbot generated something completely new. If you’d asked me even two years ago whether we’d have something capable of this, I’d have said no. The same goes for AI art generators like Midjourney or DALL-E; some of the images they’re creating blow my mind. Now people are asking: What’s the Midjourney equivalent for video? Will everyone be adding amazing special effects to home videos?

You have a critical take on AI development, but you’re not talking about “The Terminator.” What is the issue?
I’m definitely not in the doom-and-gloom crowd. Is it possible something goes terribly wrong? Sure, lots of things are remotely possible, but they’re not worth spending much time worrying about. What’s most likely is the law of unintended consequences playing out in more everyday scenarios.

Like what?
What’s the biggest AI technology most of us interact with daily? It’s recommender systems: the next song, the next video, what you see on Instagram, who you should date. There are extreme examples of how these have gone wrong, like promoting genocide, but they also shape things in subtler, less noticeable ways that aren’t great for society. I saw an article recently about the Instagram algorithm rating photos more highly if people are wearing more revealing clothing, and now people are doing exactly that. It’s not the end of the world, but it shows there’s a feedback loop in which AI systems can shape behavior.

What is TRAILS’ role?
We’re exploring questions like: How do you design these systems so that when you create this feedback loop, it leads to increased societal benefit rather than the opposite? How do you incorporate the values of the people who’ll be impacted by these systems into their design? There are a lot of interesting technical questions embedded here: How can these systems communicate what they’re good at and not good at? How can you make sure they can be audited externally?

We’ll be working on designing systems that boost societal well-being, rather than just making money by showing better ads, which, frankly, is what drives most of these systems now.
