
How Districts Are Navigating the Era of Artificial Intelligence

While some districts hesitate, others are embracing generative AI in its infancy by providing hands-on lessons to students, training teachers, and developing clear parameters around its use.

November 10, 2023


When ChatGPT debuted last November, Danny Robertozzi, superintendent of Clifton Public Schools (CPS) in New Jersey, moved quickly to ban the service on school devices—as did many district leaders across the country. “We have to stop this. This will be the end of education as we know it,” Robertozzi said, describing his thinking. 

But by the spring of 2023, it was clear that artificial intelligence wasn’t going away anytime soon. Increasingly, teachers and students in Robertozzi’s district clamored to use the tools, making him wonder if he was doing them a disservice by restricting access to a powerful new technology. Robertozzi, and many other district leaders who initially blocked AI, decided to change course. 

For CPS, that meant opening up a handful of AI tools for limited teacher use, with a longer-term goal of allowing wider student access once more is understood about risks like cheating, data privacy, and copyright liability. “We want to be open, yet vigilant,” said Janina Kusielewicz, the assistant superintendent of curriculum and instruction at CPS.

This cautious approach to generative AI stands in stark contrast to school systems like Baldwinsville Central School District (BCSD) in Central New York that embraced the technology right away. 

When generative AI software emerged last fall and winter, staff in the 6,000-student district were offered multiple professional development sessions exploring classroom applications, and a team of teachers began actively testing AI—collecting data and use cases and reporting on what they were learning. At the start of this school year, all eighth and ninth graders received introductory lessons on generative AI that explored the most popular tools, with a focus on what academic integrity looks like when using them.

Renee Burnett, the assistant superintendent for curriculum, instruction, and assessment at BCSD, believes the district has a duty to lean in on AI and do its best to navigate thorny, complex issues in real time. At this point, she said, the genie is out of the bottle. “We have to remember that kids are going to be using this whether we want them to or not. And everybody’s going to be using this in their jobs one day,” she said. “We need to be ready.”

The contrasting attitudes of these districts are emblematic of what’s playing out across the country as district leaders grapple with rapidly evolving, era-defining technology that’s changing the way knowledge is created. Tentative steps will ultimately need to give way to what feels increasingly inevitable—the integration of AI software into school workflows—forcing the hands of reluctant school systems to figure out how to ensure that the software’s vast power can be used responsibly, ethically, and legally.

For now, though, “there are people who want to jump on board and be all in,” said Burnett, “and then there are people who are toe-dipping and waiting to see what happens to the people who went all in.” 

Risky Business

Student cheating was one of the first concerns to emerge when it became clear that AI was capable of writing essays and solving math problems. While there’s anti-plagiarism software that claims to be able to detect text generated by AI, the technology is still in its infancy, and according to independent audits, it isn’t all that reliable. Even for districts that have since gone all-in on AI, like the 3,000-student Alliance City Schools in Ohio, dealing with cheating remains top of mind.

Rob Gress, Alliance City’s district superintendent, said he has embraced AI in part because he wants students to be prepared for future careers, but also because he believes educating his staff about AI is the best way at this point to ensure that teachers know the difference between students’ work and the output of an AI tool.

Some districts, like CPS, are attempting to head off the prospect of students using AI to write their essays and other assignments by requiring more writing time in class. But even as they continue to create barriers to AI access in school, they’re aware that students can use their personal devices to complete assessments and assignments with tools like ChatGPT or Photomath.

Cheating is just one of many balls districts are juggling in the wake of rapid AI adoption. There’s also the thorny issue of how to protect student data and ensure that programs used by kids don’t put their personal information at risk, Robertozzi said, adding that students may “unknowingly compromise their own personal information, which could be used for nefarious reasons.” This includes unwittingly feeding their full names, images, phone numbers, addresses, or other identifying information into prompts, or uploading documents and photos into tools. 

Some of the negative consequences of this data collection by AI tools could include identity theft, fraud, and extortion, according to the U.S. Department of Education. There are also concerns that data from some of the AI companies—like the prompts kids type into the AI tools to generate solutions to problems—might be turned around and sold by companies for potentially harmful, targeted advertising to minors. 

According to the department, data security is already a priority for most school educational technology leaders, but “modifications and enhancements to the status quo will be required to address the new capabilities alongside the risks of AI.”

To safeguard against these potential issues, some states, like New York, require companies like OpenAI (the creator of ChatGPT) to sign liability agreements in case student data is accidentally released or improperly accessed.

But Lindsay Cesari, a library media specialist for BCSD, said there is no indication that such agreements will be signed anytime soon. “We have not been able to get contracts signed with companies like ChatGPT, so we can’t legally allow our students to use school devices to access something like ChatGPT,” Cesari said.

Already, OpenAI has found itself the subject of numerous federal lawsuits over alleged privacy violations and copyright infringement.

In September, more than a dozen best-selling authors filed a lawsuit against the company, accusing it of infringing on their copyrights and intellectual property by using their books to train ChatGPT, The New York Times reported. The suit is one of many filed by high-profile authors in recent months that raise legally complex questions about the ownership of AI outputs and what counts as fair use of copyrighted material.

Meanwhile, Reuters reports that the company faces another class-action lawsuit for allegedly breaking privacy laws while first developing ChatGPT; software engineers claim OpenAI used stolen personal information from hundreds of millions of internet users. 

Cesari said that when it comes to district-level copyright issues, “there’s still so much we don’t know.”

“Does ethical or legal liability sit with the company that pulled copyrighted source material to train its AI or with the end user?” she said, when asked about potential copyright concerns the district has to worry about. “We just haven’t gotten to these types of questions yet.” 

Jumping In 

If most districts have responded to these unprecedented challenges by exercising extreme caution around AI tools in school, or by shutting the door on them entirely until they have a firmer grip on the risks, other school systems have decided to find answers to these big questions by integrating AI into classrooms as much as possible and figuring things out along the way.

For BCSD, that has meant offering staff professional development shortly after ChatGPT became available for public use.

Cesari led these early efforts, and said that while other districts were shutting ChatGPT down, her district was committed to helping teachers experiment with AI and explore classroom uses like having it imitate a historical figure, write sample test questions, or help improve students’ writing skills.

At the start of this school year, Cesari helped develop an introductory lesson to AI that every eighth- and ninth-grade student in the district received.

The lesson, delivered by social studies teachers, discussed the nuances of generative AI and various approaches to integrating it at school—from a “Techno Futurism” approach that is “all AI all the time,” to the “lock it and block it” alternative, as well as an “AI sweet spot” that attempts to find a middle ground between the extremes. The lesson also invites students to evaluate and discuss hypothetical AI scenarios in school and determine whether the usage is ethical or unethical, and explicitly teaches students how they should acknowledge and cite AI-generated work in written assignments.

The goal, Cesari said, is to teach students “how they can use this tool in really cool ways, but also use it responsibly.” To help reinforce what “responsible” means, each classroom teacher hangs a poster with a 1-to-6 spectrum of acceptable to unacceptable AI use: 1 is a student turning in an assignment without consulting AI; 3 is a student using AI to generate an outline or brainstorm ideas for an assignment that is still written by the student; and 6 is a student simply pasting assignment directions into an AI tool and passing off the answer as their own work.

Although the district is still working toward a board-approved policy on AI use—and contending with the fact that students still can’t use popular AI tools like ChatGPT in district classrooms until data privacy issues are sorted out with New York State—the district has provided guidance to teachers (who are allowed to use AI on school devices) that aims to be flexible for educators interested in being early adopters.

The Peninsula School District in Washington State, which serves about 8,000 students, took a similar approach, developing a statement of principles and beliefs that makes clear how important AI will be as a tool to reduce the time teachers spend on routine tasks like creating unit outlines, sample word problems, assignment directions, and lesson plans; to improve assessments; and, potentially, to deliver better, more personalized student feedback.

Washington State already has data privacy laws that AI companies must contend with, but the district is also requiring students who are 13 or younger to get a permission form filled out by parents before they’re allowed to use AI on school devices, said Kris Hagel, the district’s executive director of learning and innovation.

To help guide that use, the district also shared sample classroom policies that teachers can adopt, which encourage students to “think through and carefully write” AI prompts to get better results and to acknowledge when and how they’ve used AI in their work. The policies also caution students not to “blindly trust” AI’s responses, Hagel said.

Gress, meanwhile, has developed a draft policy around AI for the Alliance City Schools district that he plans to put in front of the school board this year.

The working policy would allow students to use AI on school devices but would not allow them to “create, compose, generate, or edit” content they submit for a grade unless a teacher gives them permission. Students would also be prohibited from using AI to answer questions on a test, in-class quiz, or homework assignment. Violating these rules would count as plagiarism. Staff, meanwhile, would be allowed to use AI as long as the use doesn’t violate student or staff privacy rights or their responsibility to keep personal student information confidential.

A Duty to Forge Ahead

Although Krestin Bahr, the superintendent of the Peninsula School District, still has some concerns about things like student data privacy as it relates to AI use, she believes it is important in this unique moment to “figure things out, while asking hard questions along the way.”

“I see my job as a superintendent to seed the ground with possibilities, to say yes, and to figure out how to remove barriers,” Bahr said.

Gress agreed, arguing that a school system whose goal is to prepare kids for life, college, and careers must figure out a way to deal with AI beyond simply blocking it.

“We know that as soon as kids graduate, they’re going to be in industries that expect them to use this technology,” he said. “So we can’t just put our heads in the sand and say, ‘We’re not going to teach you about this.’”

The work ahead, Gress said, will involve teaching students ethical and responsible usage of AI, as well as figuring out new ways to assess students. “We’ll have to figure out how to best assess the knowledge they have, assess their skills, and make sure we’re really assessing them and not some AI tool that they used.”

While the leaders all agreed that there hasn’t been much guidance on how to do this—and that the speed at which the technology is evolving makes every new AI policy feel a little provisional—Burnett, of BCSD, said school systems, especially post-pandemic, are used to being in this situation.

“This is not unusual for us, because many times school districts are asked to get from point A to point B with no real plan outside of the plan that we decide to make,” she said.
