
Setting Ground Rules Around Original Writing and ChatGPT

Generative AI tools like ChatGPT have the power to revolutionize education, but educators must first wrestle with weighty ethical and practical concerns.

October 6, 2023

Michelle Zimmerman can’t predict the future. But a few years ago, when researching her 2018 book Teaching AI: Exploring New Frontiers for Learning, she met a handful of people who could. Speaking with artificial intelligence experts, some of whom had been in the field since the 1960s, she learned in hushed whispers about a conversational AI chatbot being developed to respond to queries with remarkable speed and fluidity. Ask a question, get a succinct and polished answer on demand. Request a five-paragraph essay on To Kill a Mockingbird and read it in seconds, thesis statement and all.

Zimmerman realized such a tool would represent a quantum leap for education when it appeared. So she got to work. Without a name or even a particularly clear timeline, she began imagining a world where AI had totally upended teaching and assessment as we know it. Since she couldn’t create effective lesson plans or test the writing capabilities of a piece of software she’d never seen, Zimmerman began wrestling with big questions like these: What does it mean to create something original and unexpected when AI is a contributor? When is it ethical to ask AI to assist with an assignment like writing an essay or submitting a science report? And when, to put it bluntly, is it just cheating?

To figure it out, she convened a focus group of high school students at Renton Prep, the private school outside Seattle where she serves as executive director. If nothing else, it would get her students thinking about the big ethical conundrums around writing and AI awaiting them in college and beyond. “I figure it does not do much good if you’re an adult saying, ‘Oh, we won’t accept that assignment because it’s plagiarism,’ if you don’t discuss it with students,” she says. 

Late last year, Zimmerman’s planning was put to the test when the world was introduced to ChatGPT, the generative AI chatbot she’d heard about years earlier, developed by OpenAI, a nonprofit founded in 2015. Released to both rapturous and apocalyptic reviews, ChatGPT was initially heralded in the press as a death knell for the student-penned essay and a ready-or-not educational revolution. By February, it had 100 million monthly active users, becoming the fastest-growing consumer application of all time. By May, one Common Sense poll found that more than half of kids over the age of 12 had tried it.

As schools enter their first full year in a post-AI world, many are grappling with the same concerns that Zimmerman and her students have been working through. Namely, how do you set ground rules that acknowledge AI while spelling out parameters for how it can—and cannot—be used in schoolwork?

A FIRST TAKE AT DRAWING BOUNDARIES

The same immense processing power that makes ChatGPT such a useful tool for learning also makes it a particularly tempting vehicle for cheating, mainly through passing off blocks of generated text as original work without attribution. That’s left districts and schools scrambling to create comprehensive academic integrity policies that spell out how (or if) students can use ChatGPT responsibly. 

As part of its guidance on AI, Carnegie Mellon’s Eberly Center, which provides teaching support for faculty, shared a handful of example syllabus policies touching on several schools of thought. Instructors might choose to ban generative AI tools outright, with violators facing consequences akin to those for plagiarism of any form. But they might also create policies that fully permit the use of generative AI, as long as it’s acknowledged and cited like any other source. A third option is more nuanced—neither a free-for-all nor a knee-jerk ban. It lets teachers permit AI use for certain activities, such as brainstorming and outlining, or special assignments, such as ungraded ones, but forbid it in all other contexts.  

Given how fast AI is evolving, developing a comprehensive policy around safely using AI is challenging, though not impossible. 

After researching existing guidance from all over the world, Leon Furze, a British-Australian educator pursuing a doctorate in AI and writing instruction, recently penned a template policy specifically for secondary schools. One of the first of its kind, Furze’s document provides a framework for how educators can think about the bright red lines that must be drawn around AI use. Its various sections run the gamut from data privacy, access and equity, and academic integrity to assessment and even professional development, proposing lines of inquiry that schools can explore to create their own unique policies. Take a section on citations and references, for instance, which asks schools to consider three key questions:

  • How can AI-generated material be appropriately cited and referenced in research and writing?
  • What guidelines will be provided to staff and students regarding the appropriate citation and referencing of AI-generated material?
  • What tools and resources will be made available to support appropriate citation and referencing of AI-generated material?

If you’re looking for a copy-and-paste formula for how to deal with plagiarism or other topics, you won’t find it here; you might be better served asking ChatGPT directly. As Furze explains in an introduction, “The suggestions here should form part of a wider discussion around updating your existing cyber/digital policies, and should involve members of your school community including parents and students.”

Since students will be most impacted by the new rules, it may be worth broaching the subject with them directly. This year, Kelly Gibson, a high school English teacher in rural Rogue River, Oregon, best known for her thoughtful education takes on TikTok, is speaking plainly with her students about using AI responsibly. While her district is still ironing out its own guidance, she plans to explain some commonsense ground rules: students must always receive permission before using AI, and they should know the consequences of being caught cheating. Over time, as students gain more experience with AI tools, she hopes they’ll realize for themselves why its impersonal tone and track record of distorting or inventing facts make it unsuitable for generating long-form writing.

“There are frequent errors because it’s a word predictor,” she says. “If all a student is going to do is put in the prompt the teacher gives them, there is a high probability that they’re going to get a very simplistic paper.”

BRAVE NEW WRITING

In response to concerns that schools were losing the battle to keep tabs on student originality, this April, four months after the release of ChatGPT, the plagiarism detection company Turnitin released its own highly anticipated AI-detection tool. For decades, the company’s standard offering has checked student writing against enormous databases, looking for what the company describes as “similarity,” which may or may not amount to actual plagiarism, depending on context like quotation marks and proper attribution.

With the new update, customers still receive the same similarity rating for a submitted paper, but the software now also examines each sentence and produces a “Level of AI” score: a probability estimate of how much of the text was generated by AI. Like all AI detection, the tool is still in its infancy and far from an exact science. The company claims its false positive rate is less than 1 percent, but some independent checks on early versions of the software found errors far more frequently, particularly for English learners, leading some researchers to call it “unreliable.”

So do AI checkers work? “In short, no,” reads a portion of OpenAI’s website, which clarifies that no tool has yet been able to “reliably distinguish” between human- and machine-generated text. Accordingly, a number of colleges, including Vanderbilt, the University of Pittsburgh, and Northwestern, aren’t using them at all. Still, Turnitin says it has analyzed a massive 65 million papers since April of this year, flagging 3.3 percent for containing at least 80 percent AI writing; around 10 percent of the papers it’s processed featured over 20 percent AI writing (though the software’s accuracy may decline the less AI writing it detects and as AI writing itself becomes more human-sounding).

Taken together, these early figures indicate that students are already using AI tools in their work—though probably not overwhelmingly. That puts educators in an awkward position. “I don’t want to spend my entire year hunting for examples of AI writing and looking for cheating,” says Marcus Luther, a high school English teacher in Keizer, Oregon. “One, I don’t trust myself to be successful at that, and two, I don’t trust the tools. And most importantly, I don’t want to take that mindset into how I read student work. I want to set expectations, but I also want to be affirmative in how I look at students’ writing.”

WHAT WOULD SHAKESPEARE SAY?

Beyond black-and-white issues like plagiarism, it will be difficult to create a blanket set of rules at the start of the year, simply because the technology is changing so quickly. Google is currently beta testing a generative AI tool called “Help me write” that will integrate its Bard AI technology directly into Google Docs. With a few keystrokes, students will be able to generate a few paragraphs’ worth of material inside the word processor they’re already using. The new feature has the potential to change how we approach writing, normalizing AI output as a starting point. The blank page, once the bane of even mature writers, may soon seem as quaint as the slide rule.

Dialogue may already be as important as policy. “I’m very much unsure of what the process looks like in terms of them forming their own original writing,” says Luther, “so I think it’s really appropriate to have conversations with students about how they feel about AI.” Now, he plans to ask his students to consider the murky ethics of AI and what choices they would make in his shoes. As teachers, when and how would they let students use AI? Would they consider a poem or novel created using a generative AI tool to be wholly original? And what is being lost if we use AI in place of thinking for ourselves? “I want to, as much as possible, be transparent in bringing the philosophical issues into the classroom with humility,” he says. “I don’t want to pretend like I have answers that I don’t.”

Recently, Zimmerman conducted a similar thought experiment with the students in her focus group. Following a conversation on Shakespeare, she asked them to use ChatGPT to play around with generating love letters—an intimate subject for most teenagers. As they were having fun injecting humor and emotion into their letters, she dropped a sly question: What if you got a letter from someone you liked and began to question whether it was from the heart or generated by AI?

“There was this little gasp that came across the kids, and they looked at each other, because it’s one thing if you talk about content that wasn’t original to them, and it’s an assignment that they turn in,” she says. “But when it’s very personal and it’s something that they want to know is real and unique, it hits them in a different way.”

THE HUMAN TOUCH

For Gibson, the high school English teacher, her in-class AI discussions will have to wait a few weeks while she reviews the fundamentals of critically analyzing a text and forming a strong argument. “What I’ve found with thesis creation is that very often kids have an idea of what they want to talk about, but they don’t know how to write it as a thesis statement,” she says. 

Gibson envisions letting students use a tool like ChatGPT to refine, but not create, their arguments. Typically, she asks students to complete a custom graphic organizer in class to deconstruct the parts of an essay and build their argument before writing the final version at home. “You could potentially look at the final essay and not worry about whether ChatGPT was involved because you saw what students were able to put into the graphic organizer from the get-go,” she says. She often loads her organizers with detailed and specific parameters that require students to interact with the assignment in meaningful ways. “For anybody to get anything above a D, they’re going to have to do a lot of interacting with whatever ChatGPT spits out.”

Once students master the basics of argumentation, they rarely need such scaffolds. Then the goal becomes turning them into more competent—even joyful—writers by making them care about the work they’re producing, explains Katy Wischow, a staff developer at the Reading and Writing Project at Columbia University’s Teachers College. “When there’s an authentic purpose to writing… it doesn’t feel like busy work,” she says.

That tracks with a philosophy that Zimmerman has been trying to impress on her students for years—namely that exploring their lived experiences, cultural backgrounds, and views of the world is crucial to their education. Their stories are something AI can never replicate, but the technology might help sharpen the finished product. Recently, a student who is half Indian and half Pakistani used ChatGPT to brainstorm and refine questions to ask her parents about decades-old ethno-national tensions that are typically never spoken about. In the process, she learned about generational trauma, which sparked several meaningful prompts she can explore in her writing. 

To some of Zimmerman’s students, this is the true opportunity in AI—not as an instant-gratification homework machine, but as a resource they can tap to help them create the kind of deeply personal and expertly polished work that matters to those around them. Not long ago, Zimmerman asked another student, “What is it you wish AI will accomplish?” She found herself unprepared for his answer and more than a little crushed. “He said, ‘I hope AI will help our teachers actually want to know us better.’”

Provided teachers develop this intimate knowledge of their students as writers, and AI is welcomed into the process as a subordinate partner, perhaps we won’t be talking about counterfeit work as much as we think. 
