Proactively Limiting the Use of AI in the Classroom
By modeling AI use, teachers can demonstrate to students both the benefits and the shortcomings of the technology.
Every school year, I greet my students with optimism. While I am getting to know them and helping them get to know each other, I also need to help them understand the academic expectations of our course. With generative AI in the mix, these conversations have changed over the past few years.
To begin with, I include a specific reference to AI in my course syllabus: “ChatGPT and other AI software have grown in prevalence and ease of use recently. Submitting content written by AI software as your own in any element of your work, big or small, is academically dishonest. I cannot assess your proficiency from a submission that is not your own work. Do not cheat yourself out of an education by using AI when you should be doing your own thinking and writing. AI can be a great tool for helping you learn. Together we will explore some of those possibilities.”
I also take a multipronged approach to show my students why they should not use generative AI in academically dishonest ways.
A proactive approach to teaching about AI
1. I show students my familiarity with AI. Early on, I begin to drop hints that I know how to use AI. I want them to know that I am familiar with these tools, including the process and outputs of an AI chatbot, and that I know a little more about them than they do.
I mention the things AI has helped me with lately, and I use student-facing AI tools with them, like Brisk, MagicSchool, and image generators. Students will think twice about trying to pass off AI writing in my class because they know that I know what’s up.
I use ChatGPT in front of my students pretty frequently. We ask it to summarize things we read recently, suggest research questions, give feedback, and answer random questions. For example, I assigned my seniors to create an FAQ doc about their future career plans. Most struggled to create the questions because they didn’t know what to ask. Eventually, I opened ChatGPT and, with the help of a student’s career plan, asked it, “What are some questions I should research if I want to become a chef?” It returned 20 great questions, and my student had to do the work of narrowing those down to 10 and then researching the answers.
2. I show students that generative AI can be inaccurate and unreliable. I want students to understand that if they rely blindly on an AI tool, it may not be as helpful as they hoped. Our interactions with ChatGPT are unhelpful almost as often as they are helpful. When we get inaccurate, incomplete, or off-base responses, I gleefully point out how hilariously astray the AI has gone, and I remind them not to trust it to get things right all the time.
My seniors were shocked when we asked ChatGPT to summarize an article we had been reading. We gave the chatbot the title and author, and it confidently spit out a summary that was completely wrong. We adjusted our prompt by adding the publication year, and the second summary was only slightly better.
3. I show students that it is pretty easy to identify AI writing. For this lesson, I have to be patient until the day a student turns in a paragraph written by AI. I then take that AI-written paragraph, add two original paragraphs to the same page, and print out 18 copies.
I say nothing about AI and ask students to work in pairs to score the paragraphs using our rubric, which is a good exercise in itself. After a few minutes, they want to know if their scores matched mine.
I then tell them I want them to identify which paragraph was written by AI. With their focus shifted, they all quickly zero in on the AI-written paragraph. Conveniently, that group has always included the student who turned it in. They are very proud of themselves when I congratulate them on finding the AI-generated text.
Then I pause and say, “You know, if you can tell, I can tell,” and they realize that the lesson was really about how easy it is to spot unoriginal writing. Without naming anyone, I also let students know that the person who turned in that paragraph just flagged their own submission as AI. I suggest that anyone wondering if this paragraph might be theirs should speak to me after class.
Reinforcing these lessons through assignments
Once my students know that I am knowledgeable, that AI is fallible, and that AI writing is easy to spot, the temptation to use it seems to drop a lot. By then, they are usually about to start their first large piece of writing for the year, so I take one more proactive step. I give them a template doc for their assignment with the directions and any links they may need to support their writing process.
I remind them to do all their work for that assignment in that document, and I include this reminder at the top: “Do all of the writing for your assignment in THIS document. It is the best way to show your writing is all authentically your own.” This creates a sense of accountability. Because the docs are shared with me, I can see early on if a student is having trouble keeping pace with the assignment. A struggling student is more likely to resort to generative AI, so being able to provide support sooner helps. And because all of the writing lives in a single document, I can check the version history and more easily review a student’s writing process if I have any doubts about its authenticity.
The way we use and talk about generative AI in our classrooms will have a huge impact on the way students use it. Convince your students that you know more than they do, that AI will fail them, and that you’ll easily be able to tell when the writing is not their own, and you’ll build a proactive foundation for academic honesty in your classroom.