What happened when an entire class of college students had ChatGPT write their essays
Jun 23, 2023


A professor at Elon University assigned his students essay prompts to feed to ChatGPT, but the grades the chatbot received were not great.

The chatbots are out of the bag, and educators are scrambling to adjust. Chris Howell, an adjunct assistant professor of religious studies at Elon University, told Marketplace’s Meghan McCarty Carino that as the year progressed he noticed more and more suspiciously chatbot-esque prose popping up in student papers.

So rather than trying to police the tech, he embraced it. He assigned students to generate an essay entirely with ChatGPT and then critique it themselves. The following is an edited transcript of their conversation.

Chris Howell (courtesy of Howell)

Chris Howell: The idea was that after generating the essay based on the prompt, they would grade the essay kind of like they were the professor. And so they would have to leave five comments or more on the Word doc. And then I had a series of questions: Did this fabricate any sources? If it did, how did you find them? Did it use any sources that were real, but incorrectly? All to try to understand the way that this thing works. I figured there would be a likelihood that most of the essays would at least have some problem, but I didn’t think all 63 would have confabulated info. That surprised me, too. I mean, maybe I should have expected it because they were all using the same prompt, but I didn’t.

And his students, it turned out, were pretty tough graders.

“I failed the assignment,” said Cal Baker, a rising sophomore at Elon University.

“I didn’t have it in my heart to fail [the chatbot]. I gave him a D,” said Fayrah Stylianopoulos, another rising sophomore at Elon University. Both students said they found the assignment eye-opening.

Fayrah Stylianopoulos: I could tell that a human would look at the prompt, see the nuances in it, and use their own experiences to, you know, come to not an answer, but a line of reasoning. And the chatbot kind of seemed like it was looking for what the right answer was and just spitting it out. It was using the same language from the prompt over and over again, and it was hallucinating sources that looked like they could be plausible, but then you found out that they weren’t. So it was almost like it was trying to think, ‘What is the right answer?’ instead of, you know, how a human would actually form an opinion, even if the opinion had some levels of nuance or complexity to it. And I don’t really want to live in a world where looking for the right answer is rewarded over critical thinking, over actually forming opinions that are genuine.

McCarty Carino: And Cal, what was your experience? What happened with your prompt?

Cal Baker: It was a horrible essay. And I left so many comments as if I were the professor. I was like, just overall, this sucks. Like, it was bad. It was very interesting to see which sources, and what kinds of sources, it used. One of them was an article, a chapter that this author had not written, but it was a real title just attributed to someone else. And the other, I think, was a completely made up article, but at least it seemed like a perspective that that author might have taken. So that was, like, sort of realistic, but definitely not a real source.

Cal Baker (courtesy of Ava Crawford)

McCarty Carino: Have either of you talked about your experience with this assignment and with ChatGPT, you know, with your peers who might be using it in inadvisable ways?

Baker: I certainly have. I don’t think most of my close friends would use it on assignments. But I’ve certainly talked about it with them and said, like, I was shocked at how terrible this essay was. I talked about it with my writing center friends, too. So yeah, it’s been very interesting to have conversations with others about it.

McCarty Carino: Fayrah what have you told other folks about your experience?

Stylianopoulos: My friends have all been fascinated by this assignment, and surprised to learn the results of it. And I think doing an assignment like this, or even hearing about it secondhand from someone else, can help people kind of realize the cracks in chatbots like this, and the ways we should and shouldn’t be using them.

McCarty Carino: And Professor Howell, what did you take away from this assignment? Did you consider it a success?

Howell: I did. I felt like the biggest takeaway was that a lot of educators are concerned that students might use this to cheat, that they might use it to write all their papers, that they might over-rely on it to such an extent that they’re not going to, you know, exercise their minds the way they need to when writing an essay. And my hope is that doing some in-depth training with it like this may actually help discourage students from over-relying on it, because they can now see a bit more about its drawbacks. But I’m hopeful that for other educators, too, it could help encourage students to be more confident in their minds and their abilities, to use this technology more responsibly, and also to maybe not rely on it as much as a lot of professors might be afraid they will.

McCarty Carino: Fayrah, you’ve got your career ahead of you, you’re preparing for a job market that could be affected by this technology. I mean, what are you thinking about the future and how this fits in?

Fayrah Stylianopoulos (courtesy of Stylianopoulos)

Stylianopoulos: I think that there is a very prevalent myth now, as this is starting to become prominent, which is that the human mind is at all comparable to AI and to machines. And I think that we need to work together to disprove that and really help students, and just individuals as they’re using it, realize that they’re capable of more than what it can provide. And if we do that, then I think we can use this technology in a helpful way. And I say that cautiously, because a lot of my other interactions with this have been watching videos of staff from Google or Microsoft talking about how this is going to change the world in such a positive way and how it can only be a force for good. And I’m very cautious of people who say those kinds of things, especially from that background. But I do believe it has good implications, even in education.

McCarty Carino: And Cal, how are you feeling?

Baker: Yeah, I think mixed feelings. I definitely don’t believe in, like, the whole Skynet kind of thing where it will gain self-awareness and overthrow everyone. But I am concerned about the implications for students. And I think what a lot of people don’t realize, a lot of students don’t realize, and I talk about this a lot, is that the material and the assignment itself are rarely important. What’s important is what’s happening in your brain when you are completing that assignment. And you mentioned, like, the job market. I think if a young person is trying to get a job in the profession they’re looking to go into, they are potentially only thinking about, like, ‘What can I say to get this job?’ and not, ‘Do I actually have the skills required for this?’ I’m not worried for myself. I want to go into entomology, which is a super niche situation. So I’m not too worried about that. But definitely for much larger fields like computer software, there’s a skill set required and it’s highly competitive. And people could be using artificial intelligence to sort of try to gain the upper hand in the job search.

And I think it’s just very important for folks to keep in mind what’s actually important about doing an assignment or writing an application or resume. What’s valuable about doing those things is what’s going on in your brain and how it gets you ready for future opportunities, future analysis, etc. So I’m hoping that can be a more widespread feeling that people have and keep in mind as we continue to experience AI.

McCarty Carino: So you’ve been experimenting with this tool in the context of religious studies, which I find interesting, just, you know, thinking about how humans construct systems of meaning. How does this kind of fit in for you, Professor Howell?

Howell: I’ve actually always studied religion and science, and to an extent technology. I mean, that was what my PhD was in; my dissertation topic was on religious rejections of Darwinism. And so, to me, all these things are very interconnected. Much of what we talked about, the way AI is affecting our perception of ourselves as humans, is to me the big sticking point where religious studies and studies of technology intersect with each other. We read a rather dense older reading from the ’60s in the class about how, according to this author, the most important thing that made humans distinct was not tool use, which is something we often think of, like, we’re the creatures that create and build tools. And this author argued, ‘Well, you know, beavers, other great apes and stuff, bees, even insects, tools are used by other animals.’ And so what’s interesting about humans is using language. So when something like ChatGPT just roars onto the scene, and most people have never experienced this kind of thing before, it can be really shocking to see, because that is something that we’ve identified as being our unique characteristic. There’s a lot of fear about AI.

When we first started talking about it in class, there was tons and tons of fear. And a lot of students had not even encountered it, or at least hadn’t known that they did. We actually did this assignment about a week after the Snapchat AI first showed up, and for a lot of students that was the first encounter they had with it. And so there was a lot of alarm. I got a lot of comments on some of the readings asking, is AI going to take over? Is it going to rule the world? Is it going to do some crazy thing, because it’s just this new thing? And we’re so conditioned by science fiction, which I’m a fan of, you know, but we’re conditioned by it to interpret this as, ‘This is what’s going to happen.’ I am a little bit concerned that that fear is overshadowing the real, immediate, day-to-day fears about misinformation, about misunderstanding the technology, about technological unemployment, about political extremism, the stuff that AI is already impacting now. And that’s where I think we really need to think about how it’s affecting our community. And that always will kind of be a religious question: how it affects your communities, how it affects the quality of human life, what human life means.

McCarty Carino: I mean, how are you feeling about this technology and its, you know, potential use in the future in general?

Howell: It’s hard to know what the future will hold with this. But I am worried about the way this technology is coming so fast and rolling out so quickly, before people can properly adapt to it or understand it. At the same time, I don’t think the solution realistically can be to just not use it. It does have its uses, and it is an incredible piece of technology when you think about what it’s doing, when you’re engaging with it and not trying to get it to write, you know, an undergrad essay, but engaging with it for whatever use you have in mind. Especially, like, my friends who are programmers talk about how it’s really useful when it comes to coding. I mean, you have to be careful with it, but it can be very useful.


The team

Daisy Palacios Senior Producer
Daniel Shin Producer
Jesús Alvarado Associate Producer