States and schools are learning how to manage AI in education
Mar 12, 2024

Tools like ChatGPT are being used in classrooms, and districts and educators are embracing them, in some cases in collaboration with state governments. "We can't stop the evolution of the technology," said Bree Dusseault of the Center on Reinventing Public Education. "States ... need to help create the guardrails."

It’s been about a year and a half since ChatGPT hit the scene and changed the world of education, leaving teachers scrambling to adjust lesson plans and grading policies. Currently, only a handful of states are providing guidance on how artificial intelligence should be used in the classroom. Just five have official policies, with about a dozen more in the works.

Bree Dusseault at the Center on Reinventing Public Education at Arizona State University has been following all this. The following is an edited transcript of her conversation with Marketplace’s Meghan McCarty Carino.

Bree Dusseault: A year and a half ago, the conversation was more about plagiarism and banning ChatGPT. But the reality is that it’s not going away, and it’s not something you can really ban, in the same way you can’t ban the internet. Students are going to have access to it, teachers are going to have access to it. So the conversation is shifting now to how do we actually use this in a way that takes advantage of what it has to offer, but is also safe and ethically protects students and teachers?

Meghan McCarty Carino: So states that have put out some sort of guidelines are California, Oregon, West Virginia, North Carolina and Washington. What do the policies look like?

Dusseault: They have some areas of similarity. All of them are posing questions about how do you protect students? What are safe and acceptable uses of AI? How might we support teachers or educators or school leaders in learning to use AI? So if you look at how the states are framing plagiarism, for example, West Virginia’s policy, which is pretty thorough, acknowledges that students will be using ChatGPT or other AI-enabled tools, but really names that schools need to be careful in continuing to apply their policies around cheating and plagiarism. North Carolina, which released another pretty comprehensive report, has a pretty different section that is literally titled “Rethinking plagiarism in the age of AI.” And so what they do is actually propose some different frameworks, like a red-yellow-green structure for determining which assignments allow or might even encourage the use of AI and which prohibit it.

McCarty Carino: Why is it important to have broad, state-based guidelines around this? Why not just rely on individual districts or classrooms to set their own?

Dusseault: States have this unique power to convene and to influence and support, and they have the power to build more aligned or coherent approaches to the use of AI. And they have the ability to take a more proactive stance. So there really is a role here. There’s a reason why we have states playing a role in education, and it’s really for cases like this, where new things come up and on-the-ground leaders are asking for help. That is when states want to step in.

McCarty Carino: Is there a risk that, in the haste to catch up with this fast-moving technology, guidelines end up too restrictive?

Dusseault: You know, it’s quite possible that the advent of AI will require new ways of thinking about policy setting. And we’re seeing this in the early state guidelines that are coming out. There’s a level of flexibility that is required for this, and policies may need to default to setting bright lines and pretty clear guardrails, but then allow flexibility and adjustment within that. What we can’t stop is the evolution of the technology and how fast it is moving. And it’s wild to think that the AI we have today is the most basic form of AI that is coming, and that years from now we will actually have things happening that we can’t even predict today. States have to take that into account. But they also need to help create the guardrails so that districts can flow within them. We may actually see policy setting that is more responsive, quicker and adaptive to the moment, almost testing and adapting as it goes. That is not always how tech policy gets set. So in a way, AI may force not just students to learn differently, but adults to learn differently about how we put the rules in place to protect and enable our kids to use this moving forward.

McCarty Carino: So what would your ideal AI-in-the-classroom guidance look like right now?

Dusseault: I think that a lot of teachers are craving guidelines. So I do think that there needs to be support provided and recommendations. But teachers also need flexibility to try it out and to practice. So I think that a lot of what educators would like and are benefiting from in systems that are doing this is a combination of “Hey, here’s some recommendations, here’s a suite of supports or tools that have been created by experts for you, and then go play around with it and report back and see what works.”

More on this

We talked about the initial concerns over students using ChatGPT to cheat and how the thinking about that is evolving. Stanford researchers say it doesn’t look like these tools are increasing the frequency of cheating. They’ve run a long-term survey asking high school students about cheating, which is generally pretty common. About 60% to 70% of respondents engage in some sort of cheating behavior, but that share actually dipped a little in 2023, after ChatGPT became widely used.

Back in 2022, we spoke to Daniel Herman, a high school English teacher who was starting to grapple with how ChatGPT would change his profession. In August of last year, he wrote in The Atlantic that he would conduct his English classes more like a book club, asking students to forgo rote explication in favor of relating their readings to their own lives, something that’s harder to ask a bot to do.

Other teachers interviewed last year by The Markup are actively incorporating the tool into their lessons. Philadelphia geometry teacher Sean Fennessy, for example, uses a paid version of ChatGPT to create a fictional universe he calls Fennlandia, where students must code their own small applications using clear, detailed instructions — the essence of geometry.

The Center on Reinventing Public Education noted in its report that an AI-friendly educational setting could have implications for a state’s future workforce, which will undoubtedly need to be at least AI-familiar, if not AI-friendly.

The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer
Rosie Hughes Assistant Producer