Experts are warning that artificial intelligence is developing far more rapidly than regulators can keep up with. Is there any chance of picking up that slack?
Who is he? Paul Scharre is an author and the vice president at the Center for a New American Security. His work focuses on artificial intelligence and how it intersects with power.
What's the big deal? It seems like everyone involved in the conversation wants to curb the tech. The question is how.
What's he saying? Scharre spoke with NPR's Ari Shapiro about what role legislative regulation can actually play in the development of AI.
On whether Congress can play a significant role in regulating AI:
There is definitely a valuable role for Congress, but there's a huge disconnect between the pace of the technology, especially in AI, and the pace of lawmaking.
So I think there's a real incentive for Congress to move faster, and that's what we see. I think what members of Congress are trying to do here with these hearings is figure out what's going on with AI and then what is the role that government needs to play to regulate this?
On what role the government should play:
There's certainly not a consensus. And I think part of it is that it can mean so many different things. It can mean facial recognition, or uses in finance or medicine. And there's going to be a lot of industry-specific regulation.
On whether Congress will take meaningful action on the matter:
A pessimistic answer is that we're likely to see not very much.
That's been the story so far with social media, for example. But I think, you know, the place where there's real value would be if we can get just a couple of specific kinds of narrow regulation. There was some talk about a licensing regime for training these very powerful models, which probably makes some sense at this point, given some of their characteristics. And then things like requirements to label AI-generated media. California passed a law like this, called the Blade Runner law. I love this term. It basically says that if you're talking to a bot, it has to disclose that it's a bot. That's pretty sensible.
On the risks of unregulated AI:
One of the risks is that we see a wide proliferation of very powerful AI systems that are general purpose, that could do lots of good things and lots of bad things.
And we see some bad actors use them for things like helping to design better chemical or biological weapons or cyber attacks. And it's really hard to defend against that if there aren't guardrails in place and if anyone can access this just as easily as anyone can hop on the internet today. And so thinking about how we control proliferation, and how we ensure the systems being built are safe, is really essential.
Copyright 2023 NPR. To see more, visit https://www.npr.org.