Universities like Georgetown are attempting to detect cheating with ChatGPT. They will fail.
ChatGPT is a crisis in the making for professors who teach writing. Their administrations are attempting to cope. Their pathetic efforts will fail.
Here’s what Georgetown is telling professors — and why it won’t work
Here’s the text of a message from Georgetown University to professors there (posted by a professional friend of mine who received it). The quoted text is from that email; the text that follows each quote is my opinion.
Dear Colleagues,
We are writing to address concerns that have been voiced about the latest chatbot technology and its potential to undermine our assessments of student writing.
OpenAI, a company attempting to produce what is called artificial general intelligence, has released programs that make it fast and easy to generate texts of various genres and lengths. ChatGPT is the latest and most powerful. In seconds it will generate a plausible-seeming essay on virtually any topic one might wish. Students are already using this technology to write their essays.
While this is a serious worry, there are things that can be done to mitigate it; they address the product as well as the process.
Notice how this is worded. The “serious worry” is about grading, not about student learning.
1. These programs are very good at producing readable English, and that is a challenge. They are less good, for now, at using concepts in convincing ways. By all reports that we have so far received, their analytical skills are so poor that anyone who knows the material at all can see that the papers are hopelessly confused. That’s good news for folks teaching advanced classes, but not so much for those teaching beginners.
If the objective is to produce readable English, and ChatGPT produces readable English, what is the challenge? The challenge is grading.
More crucially, it’s true that the analytical skills are poor — but not as poor as this makes it sound. I’d rate ChatGPT as creating work that’s at least a C in an introductory course. And it will get better.
2. We understand that ChatGPT and other chatbots typically avoid taking a stand, and produce papers that are more “on the one hand, on the other hand”. So assignments that require a clear thesis that is argued for and defended over the course of the paper are less liable to spoofing. Again, good news for those whose assignments are of that sort, less good for those who are interested in exploratory essays.
Hopelessly naive. It is true that by default, ChatGPT tends to show both sides. But already, you can convince it to show only one side with the right prompt. And within months, someone will code a version of this technology that takes any stand you want.
3. Apparently these programs are not good at citing sources. And so requiring that students refer to, quote, and appropriately cite sources is another helpful tool in ensuring that papers are written by humans rather than bots.
True. For the next few weeks, that is. Adding sources is far easier than creating an AI that can write — expect this problem to get solved in short order. In any case, even now, finding a source is as easy as googling.
4. There is, finally, a demo of a bot-detector that is, for now, freely accessible. We are looking into what is necessary to ensure that we have ongoing access to the technology, but the demo can be accessed here. Of course this is only useful if there is already reason to suspect that a paper was artificially generated.
Also a short-term fix. Whatever bot-detectors detect, the next competing AI will elude. There is no “bot fingerprint.”
Ok, that addresses detecting the products of ChatGPT after they are produced. Here is a suggestion for a process that will discourage students from attempting to use this in the first place.
5. We suspect that the following will not work forever, but for now, there is a pretty straightforward option: Require students to use editing programs that record their entire editing history, such as Google Docs, and require them to submit that history. If you can follow the generation of the paper, it is much less likely that it could have been generated by a bot. The rare student who drafts using pen and paper will also have a record of the paper-writing process.
Forcing students to write using the tools you require is a mistake. People should write using the tools they find most helpful. If they want to write in Word, let them write in Word. If they want to write in Notepad, let them write in Notepad.
If this happens, expect a Google Docs extension that takes text from one doc and types it into another, character by character, complete with convincing-looking deletions, insertions, and the like. I bet a talented undergraduate could code that in an afternoon.
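To make the point concrete, here is a rough sketch in Google Apps Script (TypeScript flavor). The document IDs are placeholders, and I’m waving away real obstacles like Apps Script’s execution-time limits and however Docs batches its revision history; the point is just how little code the core trick needs:

```typescript
// Hypothetical sketch only: retype one doc into another with human-looking
// pauses and occasional fake typos, using the DocumentApp and Utilities
// services that Apps Script provides.
function retypePaper(): void {
  const source = DocumentApp.openById('SOURCE_DOC_ID'); // placeholder
  const target = DocumentApp.openById('TARGET_DOC_ID'); // placeholder
  const text = source.getBody().getText();
  const out = target.getBody().editAsText();

  for (const ch of text) {
    out.appendText(ch);
    // Pause 100–400 ms per character to mimic human typing speed.
    Utilities.sleep(100 + Math.random() * 300);
    // Occasionally "mistype" a character and backspace over it, so the
    // revision history contains human-looking mistakes.
    if (Math.random() < 0.02) {
      out.appendText('e');
      Utilities.sleep(300);
      const len = out.getText().length;
      out.deleteText(len - 1, len - 1);
    }
  }
}
```

The fake-typo-and-backspace step is the whole game: an editing history full of mistakes and corrections is exactly what a human leaves behind.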
And Lord save the poor teaching assistant whose job is to watch students type papers.
What a terrible idea.
Good luck, and if you suspect dishonest work, report it to your school’s honor council, so they can help sort out the matter.
University Honor Council
Some students handing in work in December of 2022 will get caught by these methods. By 2023, none of these techniques will work at all.
Embrace AI writers or die
There is only one solution here.
Teach students how to use AI to write, what it’s weak at, and how to improve it.
Outlawing tools is dumb. We are not teaching how to write without AI. We are teaching how to think and write with all available tools.
It’s a tough lesson. But all educational institutions are about to learn it in the next few months.
Schools could evaluate learning by assigning pairs of students to discuss a topic while an instructor or teaching assistant listens. Make it open-book if necessary, but take away the expectation of writing skill and focus on comprehension of the subject. Then the staff can evaluate writing assignments in the context of what knowledge the students have shown. This will reward honest scholarship and remove incentives to practice bullshit. Think of the time instructors will save being able to provide feedback immediately instead of staying up night after night searching for evidence of cheating. If the effect of ChatGPT is to force educational institutions to reassess how they measure learning, it will be the first chatbot ever to benefit humanity.
This issue really resonates with me as an ex-college professor. Of all the suggestions I’ve heard so far, the best is yours: Make students show how they improved what “CheatGPT” generates.
This high school English teacher has an interesting take. The gist is a challenge to inspire the motivation to write, alongside the skills required to do it well.
ChatGPT may doom high school English classes like mine. Maybe that’s not so bad: https://www.wbur.org/cognoscenti/2022/12/22/chatgpt-high-school-english-class-ai-ben-berman
Great story!
“…make sure that I am providing students with authentic learning experiences, to focus on teaching what is meaningful rather than what is easily measurable.” Hear, hear!
The sad part is that Georgetown used ChatGPT to write its letter…
In one interview, one of the OpenAI execs stated that they plan to embed secret patterns in ChatGPT’s responses, so that their checker could spot those patterns in submitted writing. How workable that would be long-term, and how it would work from a cost standpoint, is unclear.
Even if that happens, others will create tools as good as or better than OpenAI’s. No one company will control this technology.
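For what it’s worth, such a watermark would almost certainly be statistical rather than a literal hidden message. Here is a toy sketch of the detection side, assuming a hypothetical secret key and a crude “green word” bias; it is illustrative only, not anything OpenAI has described in detail:

```typescript
// Toy illustration of a keyed statistical watermark; NOT OpenAI's scheme.
// Idea: a watermarking generator prefers words whose keyed hash is "green".
// Unwatermarked prose lands near 50% green; watermarked prose sits well above.
import { createHmac } from 'node:crypto';

const SECRET_KEY = 'hypothetical-watermark-key'; // placeholder secret

function isGreen(word: string): boolean {
  const digest = createHmac('sha256', SECRET_KEY)
    .update(word.toLowerCase())
    .digest();
  return digest[0] % 2 === 0; // splits the vocabulary roughly in half
}

function greenFraction(text: string): number {
  const words = text.toLowerCase().match(/[a-z']+/g) ?? [];
  if (words.length === 0) return 0;
  return words.filter(isGreen).length / words.length;
}

// A checker would flag papers whose fraction is improbably far above 0.5.
console.log(greenFraction('Some submitted essay text goes here.'));
```

Which only confirms the point above: text from any model that never used the key sails through at roughly 50%, so the detector proves nothing about rival AIs.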