Could AI be used to sway federal rule-making?

Kimberly Adams Apr 17, 2023
George Washington University's Mark Febrizio argues that tech like ChatGPT could make it easier for more people to weigh in on policymaking. Leon Neal/Getty Images

It seems you can’t turn around these days without hearing about some new development in the world of generative AI, ChatGPT and the like.

Elon Musk is launching his own alternative to ChatGPT, shortly after joining an open letter calling for a pause on further development of the technology. Congress and governments around the globe are scrambling to figure out how to regulate it.

One area where generative AI could play a role here in the U.S. is in how federal agencies make rules and regulations.

Federal agencies often ask the public to weigh in on how to implement new laws, which is great — as long as it’s a real human or group actually submitting feedback. So what happens when something like ChatGPT enters the mix?

“It’s not necessarily wrong for someone in the public to use AI to generate a public comment, especially if it’s something that the person or the organization agrees with,” said Mark Febrizio with George Washington University’s Regulatory Studies Center.

Febrizio and others argue the technology could encourage more people to weigh in on policymaking — like those who don’t speak English as a first language, for example.

“But there’s also some potential concerns by regulators that agencies will receive a flood of comments that will take more time and more resources to process and to consider,” Febrizio added.

Agencies already use software to screen for comments submitted by bots, but ChatGPT could be prompted to make 500 human-sounding messages for or against a proposal.

Another risk? “ChatGPT is used to make things up that look authoritative and require real, analytical work to review and identify as being potentially not authoritative or not real,” said Dhiren Patel, who serves as president of DocketScope, a company that works with federal agencies to help analyze public comments.

Existing software plus the experts who end up reviewing comments will likely be able to weed out bad actors trying to influence regulations in development, he added. But agencies are still waiting to fully assess the impact of this new technology. 
