September 2023
GENERATION GPT: NURTURING RESPONSIBLE AI USAGE IN COLLEGE CURRICULA

D. Ellery Boatright, Georgetown University Law Center, Washington, DC, USA

In late 2022, evidence of a new form of perceived academic dishonesty began appearing in college lecture halls, and some faculty quickly took notice. Student work might appear conspicuously flat in tone, or demonstrate a familiarity with the subject matter inconsistent with the student’s prior work. Some faculty escalated the matter to a confrontation with the student; in one case at Northern Michigan University in January 2023, the student confessed to writing their paper with ChatGPT, an artificial intelligence (AI) chatbot that can supply instant, authentic-sounding answers to nearly any prompt (Huang, 2023).

Universities now face difficult decisions as they adjust their policies and curricula to account for the emergence of tools like ChatGPT. It is crucial that faculty and administrators remember that students can only follow expectations that are clearly communicated to them; policy should therefore be instituted proactively. This article briefly outlines the emergence and technological background of these tools, cites examples of efforts to prevent and detect their use, and then recommends a faculty approach designed to accommodate their responsible use.

ChatGPT: A Generative AI Chatbot

ChatGPT is a commercial iteration of an AI chatbot, developed and maintained by a research group called OpenAI. ChatGPT uses a Generative Pre-trained Transformer (GPT) language model to receive prompts from users and supply appropriate answers (Lock, 2022). Users interact with ChatGPT through a website or mobile application, prompting it with questions that may range in scope and complexity from “What is the capital city of France?” to “My flight was cancelled with no notice, which made me miss an important business function. Can you draft a formal complaint, provide examples of previous legal cases involving this subject, and refer me to a local lawyer who has handled these types of cases?”

Questions can be asked in plain English, and answers are returned in a matter of seconds. Users can also iterate within the same conversation, supplying further clarifying prompts (e.g., “Was Paris always the capital of France?” or “Just show me previously settled legal cases on this subject from the states of Texas or Arkansas.”).
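
For readers curious about the mechanics, the sketch below shows roughly how such an exchange might be reproduced programmatically through OpenAI’s developer interface rather than the website. It is a minimal illustration, assuming the OpenAI Python client as it existed in 2023; the model name, placeholder key, and prompts are illustrative, not drawn from the sources cited here.

    # Minimal sketch: a two-turn conversation via the OpenAI Python client
    # (pip install openai). The API key is a placeholder.
    import openai

    openai.api_key = "YOUR_API_KEY"

    # An initial question, asked in plain English.
    conversation = [{"role": "user", "content": "What is the capital city of France?"}]
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=conversation)
    answer = reply["choices"][0]["message"]["content"]
    print(answer)

    # Follow-up prompts carry the earlier exchange with them, which is what
    # lets the model treat them as one continuing conversation.
    conversation.append({"role": "assistant", "content": answer})
    conversation.append({"role": "user", "content": "Was Paris always the capital of France?"})
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=conversation)
    print(reply["choices"][0]["message"]["content"])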

Most critically for educational contexts, ChatGPT will also readily supply a student with an apparently credible academic product on any subject matter, distinct from any existing writings. Conventional means of detecting potential academic dishonesty – namely, matching products to existing sources – are easily thwarted.

For decades, internet chatbots were something of a novelty, supplying only canned responses in reply to certain keywords; generative AI tools such as ChatGPT are significantly more sophisticated. GPT models are trained on a massive volume of existing data – essentially, they are taught by rote how humans communicate – and then use this training to provide organic replies to users. The designation “Artificial Intelligence” may seem premature given that these models cannot think or act for themselves, but chatbots have now grown well beyond their original scope, and college faculty find themselves asking not if they should reconsider their curricula in light of these tools, but when and how.
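
To make the contrast concrete, the toy sketch below (a hypothetical example, not any real product) shows the keyword-matching design of those older chatbots: a fixed table of canned responses and a fallback for everything else. A generative model, by contrast, composes a novel reply to any prompt.

    # Toy illustration of a keyword-based chatbot: every reply is canned,
    # and anything outside the lookup table gets a generic fallback.
    CANNED_RESPONSES = {
        "hours": "Our office is open 9 a.m. to 5 p.m., Monday through Friday.",
        "price": "Plans start at $10 per month.",
    }

    def keyword_bot(message: str) -> str:
        for keyword, response in CANNED_RESPONSES.items():
            if keyword in message.lower():
                return response
        return "Sorry, I don't understand. Please contact support."

    print(keyword_bot("What are your hours?"))               # canned reply
    print(keyword_bot("Can you draft a complaint letter?"))  # fallback only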

Faculty Approaches to Generative AI Tools

There are two threads of thought regarding the appropriate and ethical response to the proliferation of these tools in higher education. The first is preventative: answering the questions, “How do we prevent students from using these tools?” and “How can we know if they do?”

Preventive Measures

To address the question of prevention, faculty can revise their approach to assignments. For example, the faculty member at Northern Michigan University enacted several revisions: moving away from take-home assignments, requiring students to write first drafts in class under the supervision of proctors, and requiring them to explain subsequent revisions (Huang, 2023).

Beyond preventing students from using these tools, how can faculty know whether students have used them? Criteria that might be used to determine whether a product was produced by generative AI are vague and intangible. For example, considerations such as a paper displaying a conspicuously flat tone or demonstrating an inconsistent familiarity with the subject matter are unquantifiable and unreliable. Outlining the specific traits that indicate a student has produced work using ChatGPT may be impossible because (1) ChatGPT was conceived and fine-tuned to communicate in a convincingly human manner, and (2) its viability relies on its ability to do so.

Furthermore, the training process for these language models is often geared specifically toward iterating the model’s output until the model itself can no longer distinguish it from human writing, which inherently hamstrings efforts to develop detection software. In short, the only product capable of detecting ChatGPT is likely to be ChatGPT itself, and in that regard it is still woefully ill-equipped. In one instance, this shortcoming resulted in a class of graduating seniors having their degrees wrongfully withheld after ChatGPT confidently, and incorrectly, affirmed a faculty member’s suspicion that students had used it to write their final papers (Verma, 2023).

Several tools are available that attempt to detect the use of AI chatbots, but their efficacy is still nascent. One such tool is GPTZero, which purports to accurately detect the presence of AI-generated content (Bowman, 2023). Turnitin, a popular plagiarism detection platform, has introduced an early-stage ChatGPT detection feature (Fowler, 2023).
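
GPTZero has been publicly described as relying on statistical signals such as “perplexity” and “burstiness,” or variation in sentence length (Bowman, 2023). The toy sketch below illustrates that general approach, not any vendor’s actual algorithm; the threshold is invented, and the heuristic’s obvious weakness (a human who writes evenly, or a model told to vary its style, defeats it) is part of the point.

    # Toy "burstiness" heuristic: uniform sentence lengths are treated as
    # weak evidence of machine generation. Illustrative only.
    import re
    import statistics

    def burstiness(text: str) -> float:
        """Population standard deviation of sentence lengths, in words."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        lengths = [len(s.split()) for s in sentences]
        return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    def looks_ai_generated(text: str, threshold: float = 4.0) -> bool:
        # Threshold is invented for illustration; real detectors combine
        # many such signals and are still unreliable.
        return burstiness(text) < threshold

    print(looks_ai_generated("Short. Uniform. Sentences. Here."))  # True (toy)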

In at least one instance, however, Turnitin’s detection toolset has wrongly flagged verified, authentic student submissions as AI-generated (Fowler, 2023). Nor are AI chatbots in their current form likely to be the last development of the technology that university policies will need to contend with. Given the rapid progression of the technology, faculty should consider whether attempting to police something so widely available is a productive use of administrative time.

Pedagogical Shifts in Light of Generative AI

The second thread of thought is more pedagogical: rather than asking how ChatGPT usage can be prevented and detected, faculty and administrators consider how their curricula can shift to acknowledge that some students will inevitably use these tools.

On a practical level, this can involve simple measures that are quick to implement, chiefly the careful crafting of classroom assignment prompts. Since the GPT-3.5 model behind ChatGPT was trained only on sources available as of November 2021 (Ortiz, 2023), prompts that relate to more recent events will produce obviously incorrect answers. If the model is prompted to name currently elected politicians or the leading businesses in a given industry, its replies will be more than a year out of date.
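
As a hypothetical illustration of this tactic, a faculty member could send candidate prompts to the model before assigning them and skim the replies for stale answers. The sketch below assumes the same OpenAI Python client as the earlier example; the prompts are illustrative.

    # Hypothetical helper: vet candidate assignment prompts by checking
    # whether the model's replies reflect its 2021 training cutoff.
    import openai

    openai.api_key = "YOUR_API_KEY"

    CANDIDATE_PROMPTS = [
        "Name the current Speaker of the U.S. House of Representatives.",
        "Which companies currently lead the electric-vehicle market?",
    ]

    for prompt in CANDIDATE_PROMPTS:
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        # Visibly out-of-date replies mark a prompt as harder to outsource.
        print(prompt, "->", reply["choices"][0]["message"]["content"])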

In addition to revising examinations to focus on post-November 2021 events, other approaches may be more holistic. Some faculty are opting to retire written exams in favor of more traditional oral examinations, or asking students to evaluate an example of ChatGPT output for its accuracy. While it may seem counterintuitive to interrogate ChatGPT output in this way, unearned confidence is a critical weakness in ChatGPT’s handling of more nuanced prompts.

A salient example is the case in which a lawyer representing a plaintiff in an aviation lawsuit used ChatGPT to prepare crucial court documents, including references to legal precedents set by cases that turned out to be entirely fictional (Cerullo, 2023). The lawyer had even asked several clarifying questions, prompting ChatGPT to verify that the cases existed; ChatGPT replied affirmatively, even specifying that it had located them in the Westlaw and LexisNexis legal databases. The lawyer admitted to being unaware that ChatGPT could provide false information (Cerullo, 2023).

Armed with the knowledge that ChatGPT can produce inaccuracies, faculty can revise their prompts and activities to encourage students both to employ generative AI tools without relying on them to source information and to review AI-generated products with a critical eye.

Conclusion

An openness toward shifting pedagogy (rather than the continuous deployment of preventative measures) lends itself to a less adversarial view of students. It also gives students an opportunity to integrate an emerging technology into their studies rather than deliberately avoid a tool that could improve their output. Students taught under a holistic pedagogical stance, rather than an adversarial and preventative one, are likely to be better equipped to function professionally using all the legitimate tools at their disposal; students denied the opportunity to learn to integrate the tool into their work may instead become overly trusting of a perceived shortcut. What exactly these approaches look like will depend on a number of factors unique to each institution. An effective first step might be collaborating with colleagues to draft an approach collectively rather than accepting an existing solution.

References

Bowman, E. (2023, January 9). A college student created an app that can tell whether AI wrote an essay. NPR. https://www.npr.org/2023/01/09/1147549845/gptzero-ai-chatgpt-edward-tian-plagiarism

Cerullo, M. (2023, May 29). A lawyer used ChatGPT to prepare a court filing. It went horribly awry. CBS News. https://www.cbsnews.com/news/lawyer-chatgpt-court-filing-avianca/

Fowler, G. (2023, April 3). We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Huang, C. (2023, January 16). Alarmed by A.I. chatbots, universities start revamping how they teach. The New York Times. https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html

Lock, S. (2022, December 5). What is AI chatbot phenomenon ChatGPT and could it replace humans? The Guardian. https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans

Ortiz, S. (2023, June 26). What is ChatGPT and why does it matter? Here's what you need to know. ZDNet. https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/

Verma, P. (2023, May 18). A professor accused his class of using ChatGPT, putting diplomas in jeopardy. The Washington Post. https://www.washingtonpost.com/technology/2023/05/18/texas-professor-threatened-fail-class-chatgpt-cheating/


Dr. D. Ellery Boatright (she/her) leads the department of Instructional and Academic Technologies at Georgetown Law, where her classroom experience and technical expertise inform a compassionate view of pedagogy.