Two Perspectives on ChatGPT

May 5, 2023

By Cynthia Rutz, Director of Faculty Development, CITAL

This past winter there was a veritable explosion of news stories about ChatGPT, a technology that lets users enter prompts to generate text that reads as if it were written by a human. Some decried it as the end of the college essay, while others saw it as an aid for struggling writers. This semester, some of your fellow faculty have participated in a Faculty Learning Community (FLC) on the topic of “AI and Writing Pedagogy.” Below, two members of that FLC give their thoughts about the impact of ChatGPT on our teaching. Salena Anderson (English) addresses some of the ethical issues with AI, and Martin Buinicki (English) talks about how he has modified his teaching and syllabus to address AI issues.

Salena Anderson: Ethical Issues with ChatGPT

Due to her background in linguistics, Salena has been following the news stories about ChatGPT with interest. She has played with the technology (the free version) to test its limits by setting it various writing tasks, and she has also input some of her own writing prompts to see how the program handles them. She suggests that we all spend some time with ChatGPT so that we, too, can begin to recognize how it works.

Salena cautions faculty against putting any student writing into the program. To do so would violate the student’s intellectual property rights, and the program could use the student’s writing for its own purposes. Instead, she suggests that if you wish to check whether a piece of writing was generated by ChatGPT, you should reverse-engineer it. Ask yourself what kind of prompt would produce this essay, then put that prompt into ChatGPT. You can then check the student essay against what ChatGPT produces to see whether words and ideas turn up in both.
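For faculty comfortable with a little scripting, that comparison can even be automated. The sketch below is a minimal illustration in Python, assuming the openai library as it existed in spring 2023 and an API key of your own; the model name, sample prompt, file name, and word-overlap measure are illustrative assumptions, not part of Salena’s actual workflow. Consistent with her caution above, only your reconstructed prompt is sent to the model; the student’s essay is compared locally and never uploaded.

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: your own OpenAI API key

def generate_candidate(reconstructed_prompt):
    """Ask the model to answer the prompt you suspect produced the essay."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": reconstructed_prompt}],
    )
    return response["choices"][0]["message"]["content"]

def word_overlap(text_a, text_b):
    """Crude Jaccard similarity over lowercased word sets."""
    a = set(text_a.lower().split())
    b = set(text_b.lower().split())
    return len(a & b) / len(a | b)

# The student essay stays on your machine; only the prompt is sent.
with open("student_essay.txt") as f:  # hypothetical file name
    student_essay = f.read()

candidate = generate_candidate(
    "Write a 500-word essay on the role of nature in Walt Whitman's poetry."
)
print(f"Shared-vocabulary score: {word_overlap(student_essay, candidate):.2f}")

A high score is, of course, only a reason to look more closely and talk with the student, not proof of anything.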

Salena notes that ChatGPT does a better job on topics or texts that have been written about a lot, because it draws on sources within its corpus of actual human writing. However, it struggles to provide citations and sources. In fact, ChatGPT will fabricate citations. The insidious thing is that it might cite real authors and real journals but provide a fake article title.

Salena views ChatGPT’s lack of transparency about sources and citations as an ethical dilemma. She teaches her students that attribution and citation are the bedrock of academic discourse: we must always give credit where credit is due. But when people copy from ChatGPT, they often do not even know whose ideas they are taking, which makes it difficult to cite sources. She appreciates an observation from her former graduate advisor, AI researcher Michael Covington: “If it’s wrong to do something without a computer (or AI), then it is still wrong to do it with a computer (or AI), even if that makes it much easier to do.”

I asked Salena how she will respond to the potential for AI-assisted writing in her classroom. In her classes she finds it important to assess not just the final essay but also the writing process, so she will begin to include at least one writing assignment that must be handwritten in class. She also plans to integrate responsiveness to peer ideas and in-class discussions into more writing assignments. This strategy both elevates engagement with peer ideas as part of intellectual inquiry and complicates any attempted use of ChatGPT, since the program has no data about specific class discussions to draw on.

She asks students to reflect on their writing and revisions as well. But Salena notes that ChatGPT can also create terrible first drafts on demand, then write a better draft and provide a “reflection” on the differences. So you can no longer assume that an essay that has gone through several drafts was written without ChatGPT.

Finally, we discussed the possible upsides of ChatGPT. It could certainly enrich in-class discussion, peer review, revision plans, and the incorporation of ideas from peers. However, if you currently ask students only to hand in a final essay, that will probably need to change. Salena does not suggest designing prompts or activities merely to avoid ChatGPT. Instead, look at your learning objectives and figure out how to assess them in ways that will engage your students in the writing process.

NOTE: Salena has a forthcoming article on this topic: “‘Places to Stand’: Multiple Metaphors for Framing ChatGPT’s Corpus” in Computers and Composition.

Martin Buinicki: Adding Friction to the Writing Process

Like most of us, Martin began reading the explosion of coverage on ChatGPT last December. He remembers particularly vividly an Atlantic piece with the provocative title “The College Essay Is Dead.” More recently, he cited New York Times columnist Thomas Friedman, who calls the latest version of ChatGPT “Our New Promethean Moment.” Like Salena, Martin immediately began playing with the program. It made him laugh when, at his prompting, it generated a student recommendation letter in the voice of a supervillain. The fact that an automated text could elicit such a human response unnerved him greatly. He also gave it some of his own writing prompts and was stunned to see it create a credible essay in ten seconds.

In response to what he had learned, Martin began to include some new language in his syllabus about the use of AI. Here is the statement he developed: 

On Plagiarism and Academic Integrity: In the broadest sense, plagiarism is the passing off of someone else’s writing or ideas as your own. With this in mind, plagiarism now includes presenting “AI” or computer-generated text as your own work. Writing is more than a matter of the final product, a “piece” of writing. Writing is a process of discovery, creation, analysis, and synthesis that empowers us to think clearly and creatively. As it advances, AI will likely be part of that process, but it must not replace the work we do. For now, use of AI or other generative software when not explicitly allowed as part of the assignment, or at any time without citation, will be considered a violation of the Honor Code.

Martin also began experimenting with 20-minute in-class writing sessions in response to a text. He may add a handwritten first draft as a way to add “friction” to the process, letting students’ thoughts evolve slowly. In the future, Martin might use a product such as Rocketbook as a way for his students to easily turn their handwritten notes into text. This idea of the positive effect of friction is expressed in the recent book Futureproof: 9 Rules for Humans in the Age of Automation by Kevin Roose.

Martin used some chapters from the Roose book in his “ENGL 180: Gateways to Interpretation” class this semester. The class discussed Roose’s contention that, since AI will replace many tasks, students who want to be “futureproof” need to work on becoming “surprising, social, and scarce.” In other words, a liberal arts education will help students think on their feet and find unexpected solutions (surprising), work with people from different fields (social), and mix elements from different disciplines (scarce).

At the college level, Martin decided that Valpo really needed to get ahead of this technology, so in January he convened a faculty learning community on the topic of “AI and Writing Pedagogy.” The group read and discussed the Roose book as well as several other articles on AI. They also covered such topics as how AI will affect the employability of Valpo graduates, the effect of AI interactions on the mental health of vulnerable students, and the ethical issues raised by AI. Martin thinks that the next step for Valpo should be the creation of a university-level task force on AI.
