Last winter, the unveiling of OpenAI’s alarmingly sophisticated chatbot sent educators into a tailspin. Generative AI, it was feared, would enable rampant cheating and plagiarism, and even make high school English obsolete. Universities debated updating plagiarism policies. Some school districts outright banned ChatGPT from their networks. Now, a new school year presents new challenges—and, for some, new opportunities.
Nearly a year into the generative AI hype, early alarm among educators has given way to pragmatism. Many students have clued into the technology’s tendency to “hallucinate,” or fabricate information. David Banks, the chancellor of New York City Public Schools, wrote that the district was now “determined to embrace” generative AI—despite having banned it from school networks last year. Many teachers are now focusing on assignments that require critical thinking, using AI to spark new conversations in the classroom, and becoming wary of tools that claim to be able to catch AI cheats.
Institutions and educators now also find themselves in the uneasy position of not just grappling with a technology that they didn’t ask for, but also reckoning with something that could radically reshape their jobs and the world in which their students will grow up.
Lisa Parry, a K–12 school principal and AP English Language and Composition teacher in rural Arlington, South Dakota, says she’s “cautiously embracing” generative AI this school year. She’s still worried about how ChatGPT, which is not blocked on school networks, might enable cheating. But she also points out that plagiarism has always been a concern for teachers, which is why, each year, she has her students write their first few assignments in class so she can get a sense of their abilities.
This year, Parry plans to have her English students use ChatGPT as “a search engine on steroids” to help brainstorm essay topics. “ChatGPT has great power to do good, and it has power to undermine what we’re trying to do here academically,” she says. “But I don’t want to throw the baby out with the bathwater.”
Parry’s thinking is in line with an idea that ChatGPT might do for writing and research what a calculator did for math: aid students in the most tedious portions of work, and allow them to achieve more. But educators are also grappling with the technology before anyone really understands which jobs or tasks it may automate—or before there’s consensus on how it might best be used. “We are taught different technologies as they emerge,” says Lalitha Vasudevan, a professor of technology and education at Teachers College at Columbia University. “But we actually have no idea how they’re going to play out.”
The race to weed out cheaters—generative AI or not—continues. Turnitin, the popular plagiarism checker, has developed an AI detection tool that highlights which portions of a piece of writing may have been generated by AI. Between April and July, Turnitin reviewed more than 65 million submissions and found that 10.3 percent of them contained potential AI writing in more than 20 percent of the text; about 3.3 percent were flagged as potentially 80 percent AI-generated. But such systems are not foolproof: Turnitin says its detector has roughly a 4 percent false-positive rate when determining whether an individual sentence was written by AI.
Because of those false positives, Turnitin also recommends educators have conversations with students rather than failing them or accusing them of cheating. “It’s just supposed to be information for the educator to decide what they want to do with it,” says Annie Chechitelli, Turnitin’s chief product officer. “It is not perfect.”
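To see why Turnitin urges caution, it helps to consider how even a small per-sentence error rate can compound across a whole essay. The sketch below is purely illustrative and rests on assumptions the article does not make: that the roughly 4 percent false-positive rate applies independently to each sentence, and that a typical essay runs about 25 sentences.

```python
# Back-of-the-envelope estimate: chance that an entirely human-written
# essay has at least one sentence falsely flagged as AI-generated.
# Assumes (hypothetically) that the ~4 percent per-sentence false-positive
# rate applies independently to every sentence.

def p_any_false_flag(per_sentence_fp: float, num_sentences: int) -> float:
    """Probability that at least one sentence is falsely flagged,
    under an independence assumption."""
    return 1 - (1 - per_sentence_fp) ** num_sentences

# A short essay of roughly 25 sentences:
print(round(p_any_false_flag(0.04, 25), 2))  # → 0.64
```

Under these simplified assumptions, nearly two-thirds of fully human-written essays would contain at least one falsely flagged sentence, which is one way to understand why the company frames its output as information rather than a verdict.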