When Students Use AI in Ways They Shouldn’t

Here are some ways teachers can respond when students don’t follow classroom guidelines for using AI.

April 4, 2025

Like many teachers, I sometimes notice my unease growing as I read a piece of student work. Word by word my suspicion builds, and then all at once I understand. This is an artificial intelligence (AI)–generated text and not something my student actually wrote. Now what do I do?

I’ll be real here. My next move depends on a number of factors, and I take into account everything I know about the context, the student, and the assignment. Is this a first offense or part of a pattern of misuse? Is it a minor assignment or a major process writing project? Has the student had recent absences? Can I absolutely verify that this writing is not original, or is there room for doubt? What did a few AI detectors say? (I know not to trust them, but I still find the results worth noticing.) All of this clicks through my mind as I form a plan of action.

Broaching AI use through an open-ended conversation

I prefer to use what I think of as a Ted Lasso approach to these conversations: Be curious, not judgmental. The more I can learn about why a student used AI in an academically dishonest way, the better equipped I will be to help this student and others in the future. So, when I talk to students who I know have turned in AI writing as their own work, I almost always start with the same prompt: “Tell me what happened here.”

I almost never accuse a student of using AI specifically. I like open questions and statements like these: “Do you know why I asked to speak with you?” “How are you handling the workload in English this year?” “Tell me what was challenging about this assignment for you.” And then I let them talk. The less I say, the more I learn.

About 75 percent of the time, these simple questions result in a student telling me that they used ChatGPT and the work is not their own. Once the student has admitted their error, I can ask follow-up questions like “Why did you think you needed to do that?” I listen and we talk through what other options they had available to them. Depending on the context, I may or may not allow the student to resubmit the work.

When students deny using AI

Then there are the students who continue to claim the work is their own original writing, insisting they did not use AI at all. That response is always telling, because I didn’t make an accusation—I just asked what happened here. I also let those students talk, and I ask more open-ended questions like “Tell me about your process” or “Say more about your thinking about…,” and then I name something they wrote about in the assignment. Students who did not use AI respond to these questions by telling me about their thought process, saying who helped them with the assignment, or asking a question about the directions.

Here are some options if you are not absolutely sure the work is AI generated:

  • Give an alternate assessment of the material and try some strategies for promoting authentic writing.
  • Do some assignments on paper in class to gather examples of your students’ authentic writing for future comparison.
  • Give the student the benefit of the doubt, but add the student to your personal watch list. You won’t catch every instance of AI-generated writing, but you can know which of your students need more frequent checks.
  • Start having proactive conversations with your students about AI. 

When I still suspect a student used AI for the assignment, despite protests to the contrary, I have a few options. I can play back their version history (we use Google Docs), show them the results of a detector, or find another way to assess their knowledge of the text. Recently, several very stubborn deniers admitted their AI use when I asked them to complete a cloze activity created from one of their paragraphs.

I should add that these conversations are not always private. My classroom lacks an office space, and the busy hallway makes truly private conversation during class time difficult. Even at lunch, there are usually other students around. Students often have follow-up thoughts or questions, and they restart the conversation in front of their peers. I see no reason for secrecy as the default in these conversations. Asking what happened is not an accusation. It’s similar to asking about homework or why they don’t have their book. My students know that just because I am pointing out their mistake does not mean I am judging them as a person. So, I find that these conversations benefit from a little daylight. Of course, I know my students well enough to realize when a conversation needs to be postponed or moved to the hallway.

When students admit to using AI

My main goal, besides learning as much as I can for myself, is to make this conversation a learning experience for my student. I want to help them understand their mistake and why it hurts them when they don’t do their own writing. So we look for the events and decisions that led to their submitting something written by AI. Was it an issue of time, assignment confusion, task avoidance, or something else?

I also ask what their specific process was—which AI they used, how they prompted it, their thoughts about the results, and if they used any other tools, like a “paraphraser” or “humanizer.”

Once a student admits to using AI, I keep asking curious questions: “What made you decide to do that?” “How could I have helped you sooner so you could do the writing yourself?” “How did using AI impact your learning for this assignment?” “What’s the plan for future challenging situations?” I want my students to have a plan for handling tough academic moments, because there will be more.

We are facing the brain chemistry of teenagers, natural risk takers. Some are chronically poor at time management and often are overscheduled as well. Inevitably, the temptation to use ChatGPT or another AI to complete an assignment will lead at least a small percentage of students to try passing off AI writing as their own work. How I handle conversations about that choice will have a huge impact on my future relationship with that student, their future writing, and how we help each other moving forward.

George Lucas Educational Foundation
Edutopia is an initiative of the George Lucas Educational Foundation.