In part one of this article, we laid out the basics of ChatGPT, the new AI technology that can mimic writing in many formats, including analytical essays, presentations, and even Shakespearean sonnets. The release of ChatGPT has been causing celebration among students and alarm among teachers because of its ability to facilitate plagiarism. Some educators have been quick to adopt technical solutions to this problem, which we touched on in the first half of this series, but others are thinking more critically about how ChatGPT can be incorporated into their syllabi with positive results. And still other teachers wonder whether ChatGPT is a problem in its own right or the symptom of deeper issues in education today.
Some educators have embraced a return to old-fashioned, in-person essays after years of take-home exams. In an article in the New York Times, educators revealed plans to rewrite academic integrity statements to ban the tech, to rely more on oral presentations, and to require students to write first drafts by hand. Notably, some universities, like Oxford and Cambridge, never departed from this low-tech model: students’ entire grades for the year are determined by a grueling series of in-person essays and exams taken at year’s end.
Few students or educators wish to return to the era of quill and ink. Most would rather find ways to incorporate the new technology where possible, confident in their ability to detect its use where it is not allowed. ChatGPT may produce plausible answers that could earn middling grades, but its responses often lack evidence of higher-order thinking. Each of the five educators we informally polled said they didn’t need software to detect ChatGPT because they were familiar with their students’ abilities and voices.
“ChatGPT will never replicate the charm and syntax of an undergraduate paper where the student has neither been to class nor read more than the abstract of a paper,” wrote columnist Vicky Mochama in a satirical tweet that has gone viral among educators. “There’s a magic nothingness that they’re able to conjure with sheer grit and desperation that a.i. could never,” she added.
Andrew Moroni, an English teacher at an independent high school in Manhattan, recently appeared in a video for Wired in which he graded a series of increasingly complex assignments turned in by ChatGPT, with surprising results. Essays were often perfunctory and poorly argued, he said, but he found a Shakespearean sonnet about Taco Bell to be extremely compelling.
“I also wrote a Shakespearean sonnet about Taco Bell once,” he admitted, “but I think this one is better.”
Still, Moroni came away from the experience feeling that while ChatGPT is proficient at some tasks, it has many weaknesses in internal cohesion, use of evidence, and connecting thesis to body paragraphs.
However, Moroni sees these shortcomings as pedagogical opportunities: many of his students are reluctant to revise their own work, in part because it feels like an innate expression of themselves.
“One thing that is very useful about ChatGPT is its potential to be used as a tool for teaching students how to revise,” he says.
In a recent lesson on Kafka’s Metamorphosis, Moroni projected a ChatGPT-generated reading response to the assigned section of the text and asked students how it could be improved. Many, he reported, were more willing to rework this AI-generated material than they would have been if they had written it themselves.
Likewise, in the field of computer science, where AI has been predicted to replace human coders, engineers and instructors are using ChatGPT to check their code against the machine’s. As Chris Stokel-Walker writes in a recent article for Lead Dev, developers shouldn’t yet depend on AI to write code for them, but they can use it to answer certain technical queries or act as a sounding board that helps eliminate wrong answers.
To combat the inappropriate use of ChatGPT, some educators argue that schools must confront why students are tempted to cheat in the first place by making clear the differences between what they write and what ChatGPT spits out.
“In my writing classes, we learn how to locate, evaluate, and understand sources of good information about any topic,” writes Jordan S. Carroll, assistant professor of English at the University of Puget Sound, in an article for The Nation. “Just as importantly,” he argues, “students come to see scholarly inquiry as a collective project dedicated to improving our shared understanding of the world.” For Carroll, if students recognize that learning is a shared endeavor toward building collective knowledge rather than a piece of busy work, they are much less likely to cheat, regardless of the means.
By the same token, educators should also provide tutoring and other academic support to students who are struggling, argues Rebecca Awdry in a recent column for The Guardian about the rise of outsourced writing assignments. Students who feel supported are more comfortable asking for help when they need it and less likely to resort to cheating. And just as they did before AI, schools must set clear consequences for cheating to deter the use of AI or any other kind of plagiarism. This helps students understand the seriousness of cheating and the damage it can do to their academic and professional lives.
For some educators, ChatGPT isn’t the problem so much as the symptom of how instrumentalized education has become in the 21st century. If students believe that writing (or completing any kind of assignment) is just a task they need to do in order to get a grade, and ultimately a job, then they will be more tempted to cheat. As author Malcolm Harris argues in Kids These Days, college students are increasingly seeing their degrees as their only tickets to well-paying careers. Harris believes this way of thinking is a product of increasing financial pressure (the huge debt many students take on, the extra jobs they work while attending school), as well as the ideology of many university administrations, who see students as customers and have lost sight of the notion that education is a public good.
ChatGPT is not the first piece of tech to upend the way teachers assess writing, nor will it be the last. OpenAI is already developing a new, improved model, GPT-4. Teachers are unlikely to win an arms race against a technology backed by Silicon Valley fortunes. Instead, universities and high schools need to create an environment that fosters not just academic integrity, but a genuine belief in the value of education as an end in itself.
By Brad Hoffman and Faya Hoffman, Board Certified Educational Planners, in collaboration with My Learning Springboard faculty members