The New York Times published a piece regarding university policies and strategies for students using ChatGPT or similar tools to prepare coursework. For each strategy, I’ll assess how realistic it is and what the consequences might be.
Articles like this one, as well as the policies they describe, are shortsighted, because AI text generation is continually improving. Putting on my analyst hat, I believe the following:
- ChatGPT has flaws, but some of them will soon be remedied. Expect versions in the next 12 months that use up-to-date web information, not just a static corpus of data from 2021, and can cite sources with links. The same applies to limits on ChatGPT due to capacity and demand; they’ll solve this problem, especially if they can charge for it. Any teaching strategy that depends on these flaws will soon be ineffective.
- ChatGPT competitors and variants will arise. Some variants will inevitably be better at some applications, such as specialized scientific knowledge or writing business strategy documents. I also expect ChatGPT or competitors to expand capability through integration with other tools, such as search engines, language translation tools, audio comprehension and feedback, and AI drawing tools. Consider ChatGPT text generation as you would a capability like spelling and grammar check: a feature that many applications will include moving forward.
- AI text detectors will be of limited use. A Princeton student created a tool that detects computer-generated text by assessing “perplexity” and “burstiness.” AI developers (if not ChatGPT, then competitors) will continually improve their products to evade such tests. For example, the burstiness test depends on ChatGPT’s relative uniformity of sentence length, but obviously an AI developer could program AI text generation to intentionally vary sentence length. The detectors will continue to get better, but whatever method they use, AI text generator vendors will quickly evade it. Any quality of AI text that computerized tools can detect, computerized tools can evade.
- Some ChatGPT limitations will remain. I don’t think AI will develop a sense of humor. It will not easily be able to tell facts from lies. It will continue to combine ideas from existing source material online, but will not come up with new ideas. These problems seem qualitatively harder than, say, varying sentence length or working from a corpus of up-to-date information.
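To see why the burstiness test is so easy to evade, consider a minimal sketch of the idea. This is my own simplification, not GPTZero’s actual method: it approximates “burstiness” as nothing more than the standard deviation of sentence lengths, which is exactly the kind of surface statistic a generator could deliberately manipulate.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Approximate 'burstiness' as the standard deviation of sentence
    lengths in words. Under this (simplified) heuristic, low variance
    is read as a sign of machine-generated text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform sentence lengths score near zero; varied lengths score high.
uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("No. The cat sat quietly on the warm windowsill all afternoon. "
          "Then it left.")
print(burstiness(uniform) < burstiness(varied))  # prints True
```

A generator that simply alternates short and long sentences defeats this measure entirely, which is the arms-race point: any statistic simple enough to compute as a detector is simple enough to optimize against as a generator.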
Analyzing academics’ suggested strategies for dealing with generative AI
Here’s what the Times article said about academics’ strategies and my analysis of whether they will work.
Some public school systems, including in New York City and Seattle, have since banned the tool on school Wi-Fi networks and devices to prevent cheating, though students can easily find workarounds to access ChatGPT.
I expect ChatGPT and similar tools to surface in any number of applications from an unlimited number of suppliers. No blacklist will be able to keep up.
At many universities, ChatGPT has now vaulted to the top of the agenda. Administrators are establishing task forces and hosting universitywide discussions to respond to the tool, with much of the guidance being to adapt to the technology.
Hmm. Which will go faster: generative AI text tools or academic committees of professors and administrators developing policies? If you can’t answer that, you’ve never been on an academic committee.
At schools including George Washington University in Washington, D.C., Rutgers University in New Brunswick, N.J., and Appalachian State University in Boone, N.C., professors are phasing out take-home, open-book assignments — which became a dominant method of assessment in the pandemic but now seem vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.
Take-home, open-book assignments are closer to how students will have to work in the real world. Phasing them out is a serious step back in effective pedagogy, depriving students of an experience they will need as future workers and entrepreneurs. In-class assignments are fine, but penalize students with learning disabilities who process information more slowly. Requiring handwriting prioritizes an archaic skill that’s been obsolete for decades and ignores students’ need to learn to write and revise their work. Let’s be real: students who can generate text with AI can copy it out in handwriting. Group work and oral exams are effective techniques, but they’re no substitute for requiring students to develop the skill of writing alone.
Gone are prompts like “write five pages about this or that.” Some professors are instead crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.
This is a promising direction. Assignments that require original, creative thinking will prod students to develop thinking skills. Writing about current events may be an effective technique now, but generative AI will catch up to it within six months.
Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he planned to teach newer or more niche texts that ChatGPT might have less information about, such as William Shakespeare’s early sonnets instead of “A Midsummer Night’s Dream.”
The chatbot may motivate “people who lean into canonical, primary texts to actually reach beyond their comfort zones for things that are not online,” he said.
This might work, although teaching obscure material just to evade an AI tool twists the curriculum in irrelevant directions for the sake of easier grading.
In case the changes fall short of preventing plagiarism, Mr. Aldama and other professors said they planned to institute stricter standards for what they expect from students and how they grade. It is now not enough for an essay to have just a thesis, introduction, supporting paragraphs and a conclusion.
“We need to up our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we usually deem an A paper needs to be trickling down into the B-range papers.”
Requiring imagination, creativity, and innovation of analysis? Excellent idea. But a paper devoid of original ideas should be an F, not a B-minus.
Universities are also aiming to educate students about the new A.I. tools. The University at Buffalo in New York and Furman University in Greenville, S.C., said they planned to embed a discussion of A.I. tools into required courses that teach entering or freshman students about concepts such as academic integrity.
“We have to add a scenario about this, so students can see a concrete example,” said Kelly Ahuna, who directs the academic integrity office at the University at Buffalo. “We want to prevent things from happening instead of catch them when they happen.”
It’s also a good idea to teach students about AI tools (although many will know more than their teachers do). Let’s go further and teach students about the strengths and weaknesses of such tools and how to use them effectively.
Other universities are trying to draw boundaries for A.I. Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so their plagiarism definitions include generative A.I.
John Dyer, vice president for enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code felt “a little archaic anyway.” He plans to update its plagiarism definition to include: “using text written by a generation system as one’s own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”
That’s clearly necessary. But redefining cheating doesn’t necessarily stop cheating. Dishonest students are already paying people to write papers for them (and I can virtually guarantee that the services that provide such papers will attempt to add comments to this post linking to their websites).
The misuse of A.I. tools will most likely not end, so some professors and universities said they planned to use detectors to root out that activity. The plagiarism detection service Turnitin said it would incorporate more features for identifying A.I., including ChatGPT, this year.
More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect A.I.-generated text, said Edward Tian, its creator and a senior at Princeton University.
The threat of detection will be more effective than actual detection. In this arms race, I’d bet on the creation tools to outrun the detectors.
Some students see value in embracing A.I. tools to learn. Lizzie Shackney, 27, a student at the University of Pennsylvania’s law school and design school, has started using ChatGPT to brainstorm for papers and debug coding problem sets.
“There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is understanding what the code means.”
Lizzie’s the smartest person in this article. The rest of academia should learn from her example and integrate ChatGPT into their coursework.
One video shows a student copying a multiple choice exam and pasting it into the tool with the caption saying: “I don’t know about y’all but ima just have Chat GPT take my finals. Have fun studying.”
If your exam is this easy to game, it’s not a very good exam.
Generative AI will eventually define the future of education
There will be a year or two of cat-and-mouse between academics and AI text generation tools.
In the meantime, some professors will develop techniques for teaching that integrate such tools.
More inventive professors will find ways to require creativity in their assignments. That will be the way forward. And it will be painful.
Whole universities will differentiate based on their attitude about technology: embrace and incorporate, or defend and prevent. The former will succeed. The latter will slowly and steadily become obsolete.