ChatGPT, the artificial intelligence-powered chatbot from OpenAI that’s been taking the internet by storm, has raised a lot of questions. Like, what could this mean for society? For art? For the future of human jobs?
But one thing became immediately clear: Students are going to use it to cheat on their homework.
That’s created a market for software that can detect text that was generated by artificial intelligence, like ChatGPT.
Marketplace’s Meghan McCarty Carino spoke to Stephanie Hughes, Marketplace’s education reporter, about what teachers are saying the technology means for them.
The following is an edited transcript of their conversation.
Meghan McCarty Carino: When did it dawn on teachers that ChatGPT is going to be a big part of their world?
Stephanie Hughes: Pretty much right away. It was immediately clear that it was going to be what the tech world calls a disruption. I spoke with one teacher at a public high school in suburban Chicago, and he told me that he saw at least eight instances in just a three-day period of students submitting work generated by ChatGPT as their own. Obviously, teachers want students to be writing themselves and struggling and learning and not delegating that work to a robot.
McCarty Carino: How do teachers know when kids are using AI? Are there tools to help them figure that out?
Hughes: Kind of. Lots of educational institutions in the United States — both colleges and school districts — already pay for anti-plagiarism software, which will basically compare a student’s work to anything that’s already on the internet or papers that other students have turned in. There are also a lot of for-profit companies working very hard to release, basically, “AI catchers” to give to school districts so they can identify AI-generated text. There are also some free tools out there, including one from OpenAI, the company that makes ChatGPT. Just last week, OpenAI launched a tool called the AI text classifier, which will basically give an estimation of how likely it is that a piece of writing was generated by an AI. There are a lot of caveats that they give, however, including, perhaps most notably for school districts, that it’s not very good at examining texts written by children because the tool was trained on content written by adults.
McCarty Carino: That seems like kind of an important thing for these tools to have. Are teachers finding them useful?
Hughes: The teachers I spoke with say these tools will likely act as a deterrent. But also that any experienced teacher who’s worth their salt can spot when a kid is cheating this way because AI and teenagers just don’t write in the same kind of way. Teachers are also interested in how ChatGPT will fundamentally change what they need to teach. I spoke with Kim Williams. She’s an English teacher at Hinsdale Central High School in suburban Chicago. She told me about some of the questions she’s asking herself.
Kim Williams: What do they still need to know, and what might they not need as much practice at anymore now that ChatGPT is there? We’ve been doing a lot of thinking and having a lot of conversations about how this will change instruction. There’s new technology in front of us, and it’s only going to get better. We didn’t ban Google and we didn’t ban calculators. We don’t ban new technology completely. We work with it.
Hughes: It’s teachers’ jobs to prepare kids to work in the world, and that means preparing them to work with AI.
Related links: More insight from Meghan McCarty Carino
You can hear more of Stephanie’s reporting on how schools are thinking about ChatGPT here.
In one of her stories, she quoted a classic of children’s literature — “Harry Potter and the Chamber of Secrets”: “Never trust anything that can think for itself if you can’t see where it keeps its brain.”
For you Harry Potter fans out there, the quote is from Mr. Weasley, who says it near the end of the book to his daughter, Ginny. Spoiler coming up — Ginny becomes possessed by Lord Voldemort, who communicates with her through a diary that writes on its own.
Sounds eerily familiar.
Also, I took a look at the AI text classifier from OpenAI — which will give users an estimate of how likely it is that a particular piece of writing was generated by AI. I put in the text of a Marketplace article — actually the AI story by Stephanie. The text classifier said the piece was “very unlikely AI-generated,” so check mark there.
Then I gave it an essay that ChatGPT had written itself, with no changes. The text classifier said it was “possibly AI-generated.”
In fact, OpenAI admitted that — even in its own tests — the software correctly identified only 26% of AI-written texts. So it’s “not fully reliable,” which is also an issue with ChatGPT itself.
As Stephanie noted in her piece, sometimes ChatGPT has so-called hallucinations. That’s when the chatbot just makes things up.