Countless legitimate concerns surround ChatGPT and other generative artificial intelligence (AI) services in connection with education and with law: accuracy, misinformation, plagiarism, environmental impact, exploitation of users, atrophy of our collective writing ability, and destruction of people’s jobs, not to mention the potential for ushering in a dystopian world à la The Terminator or various other sci-fi efforts to grapple with the subject (e.g., Caplan, 2023a, b, c).
And yet, with all of these concerns in mind, I still found ChatGPT too easy and appealing not to use when creating content for an online legal English course focused on building vocabulary knowledge. The way I’ve been using it fits well into the historical framework described by MIT labor economist David Autor (2023), who suggests that generative AI, with its ability to quickly perform rote writing tasks, may destroy the livelihoods of some, but not of professionals such as lawyers and doctors; for them, it will simply enable faster work so they can focus on the substance of their expertise.
I hope my experiment and experience will help other teachers who want to find ways to create curriculum more quickly and creatively.
My ChatGPT Experiment Story
My goal was to create a self-guided, online legal English course focused on the vocabulary of tax law for one of Georgetown University Law Center’s LLM programs.
This article outlines three of the major steps I took to design self-study modules and discusses the ways in which I used ChatGPT for each step in my curriculum design. These steps are (1) creating tax-relevant sample sentences with the target vocabulary in order to provide multiple opportunities for contextualized exposure to target vocabulary terms, (2) generating sample sentences that demonstrate notable vocabulary knowledge beyond the definition of the target word (e.g., collocations), and (3) generating practice and assessment activities for each term.
Step 1: Could ChatGPT Quickly Create Sample Sentences with the Target Vocabulary Item?
Yes! Super easy. Well, sort of.
It was easy enough for ChatGPT to create sample sentences for a term like “adjusted gross income,” which really only has a Tax English meaning and no Plain English meaning.
But, for example, when I asked ChatGPT to give me ten sample sentences using “realize” in a tax-related way, it generated ten sentences related to tax, but I realized (pun intended) that some of the sentences used “realize” in a Plain English way (e.g., “The taxpayer failed to realize the potential tax consequences of….”) rather than the Tax English way I was looking for (e.g., “to realize capital gains.”) This led me to ask for ten more sample sentences (see Figure 1). Drawing from the first and second requests, I was able to glean the desired number of sentences from ChatGPT’s output.
Figure 1: Prompt and Output from ChatGPT to Create Sentences

Step 2: Could ChatGPT Identify Important Vocabulary Knowledge Beyond Definitions?
Conveniently, using ChatGPT to generate sample sentences with target vocabulary also helped me with the process of identifying important vocabulary knowledge for learners.
In the case of “realize” (see Step 1), I turned a problem into an opportunity. My initial frustration with getting the “wrong” kind of “realize” sentences helped me recognize that learners might face confusion with the word as well. By studying the twenty sample sentences ChatGPT generated, I was able to notice certain language features that could help a reader appropriately use the Tax English or Plain English version of “realize.” For example, Plain English “realize” is often followed by “that” (e.g., “He realized that he was late.”), whereas Tax English “realize” is always followed by a direct object, usually a money-related noun (e.g., “profit,” “loss”), and is never followed by the word “that.” Students can then be asked to use these grammatical patterns in practice activities with the target term.
ChatGPT was also able to generate a list of collocations for any term. (Fortunately, ChatGPT does seem to have a sense of what collocations are.) Since it was able to conjure a bigger list of collocations than I could off the top of my head, it was a helpful and time-saving tool.
For example, I asked it for collocations for the term “like-kind.” ChatGPT listed ten items, but based on my expertise as an experienced teacher, I suspected that only one or two of those were frequently used. I asked ChatGPT to tell me which of the collocations appear most frequently (see Figure 2); the output confirmed my suspicion that “like-kind exchange” and “like-kind property” were probably the most commonly used ones.
Figure 2: Prompt and Output from ChatGPT for Frequency of Collocations

Step 3: Could ChatGPT Generate Practice and Assessment Activities?
Piece of cake, as it turns out. Based on what I’d read about ChatGPT, I thought I should be able to literally ask it to create and format a quiz or create a practice activity, and it would do it. This turned out to be true, though it took a little tweaking.
Example 1: Using ChatGPT to Create Fill-in-the-Blank Vocabulary Quizzes
I input the following prompt: “Make a quiz with 10 sentences that use either deprecate or a form of the word depreciate. Students must fill in the blank with the correct word.” ChatGPT created a full quiz, as seen in Figure 3, including blanks and a choice between words in parentheses or words at the end of each sentence. I also discovered I could ask ChatGPT to create a version of the quiz both with and without correct answers, depending on my needs. Again, this was a huge time-saver despite needing to review ChatGPT’s output in order to correct or remove an occasional not-quite-right answer or sentence.
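For teachers who would rather script this step than retype the prompt for each new word pair, the request can be assembled programmatically before pasting it into ChatGPT (or sending it via an API). The sketch below is my own illustration, not part of my original workflow; the helper name `build_quiz_prompt` is invented, and actually submitting the prompt to a model is a separate step not shown here.

```python
# Build the Step 3 fill-in-the-blank quiz prompt for any pair of
# easily confused words. (Illustrative sketch; the function name and
# default wording are my own, modeled on the prompt quoted above.)

def build_quiz_prompt(word_a: str, word_b: str, n_sentences: int = 10) -> str:
    """Return a ChatGPT prompt requesting a fill-in-the-blank vocabulary quiz."""
    return (
        f"Make a quiz with {n_sentences} sentences that use either "
        f"{word_a} or a form of the word {word_b}. "
        "Students must fill in the blank with the correct word."
    )

# Reproduce the prompt I used, then a variant for another confusable pair.
prompt = build_quiz_prompt("deprecate", "depreciate")
print(prompt)
print(build_quiz_prompt("principal", "principle", n_sentences=8))
```

The same helper could be extended with a line requesting an answer key, matching the with-answers and without-answers versions described above.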
Figure 3: Output from Asking ChatGPT to Create a Quiz

Example 2: Using ChatGPT to Create Vocabulary Practice Exposure
ChatGPT can create a variety of practice exposures for students, including short texts, such as a news article, containing some subset of the targeted vocabulary (e.g., ten words students were studying). Figure 4 shows how I prompted ChatGPT to generate a “pretend” news article that met my specifications.
Figure 4: Prompt for ChatGPT to Create a Fictitious News Article

And poof! ChatGPT produced a text that specifically uses – and provides additional exposures to – the exact vocabulary the students were studying (see Figure 5).
Figure 5: Output from Asking ChatGPT to Create a Fictitious News Article

Actually, not quite. Upon review I noticed that ChatGPT didn’t use every key term I requested. I’m not sure why ChatGPT struggled with this so mightily. However, I tried two workarounds: (1) I revised the prompt to specifically request use of the missing terms, and (2) I combined ChatGPT outputs into a final product.
Though ChatGPT still required my expertise as a language educator to create the final product, as in Example 1, using it saved me significant time. Most notably, a practice text that includes all the key terms would have been very challenging (though not impossible) to create on my own. If I wrote the text myself, my sentences might have felt forced. Plus, it would have taken me a fair amount of time to concoct appropriate scenarios for incorporating those terms since I’m not a tax lawyer. (Though, in my defense, I have done my own taxes.) In addition, it is time-consuming (if even possible) to find an authentic text in the wild that, in a compact and efficient way, uses the exact ten vocabulary items I wanted to incorporate.
Takeaway
The takeaway from this experiment is that ChatGPT can help us, as teachers, be more productive and creative in devising materials to help our students learn. My goal was limited primarily to providing multiple natural-sounding exposures to field-specific vocabulary. The text produced didn’t need to be highly accurate, nor did it need to be outstanding writing. It just needed to be comprehensible and make sense both grammatically and substantively.
Additionally, ChatGPT output is unlikely to raise any copyright tripwires as its sentences are fairly bland and generic. They’re not expressing any unique opinions or ideas that may have come from others. For my purposes, the sentences are merely describing tax concepts in an unremarkable way. And I’m fairly confident that no one else has written a fictional news article about a celebrity that incorporates the ten key terms that I listed. (If they have, I really want to meet them!) To be overly cautious, activities generated can include an attribution that the text is adapted from ChatGPT output.
As for one fear raised about using generative AI in education, I don’t think it undermines our roles as teachers or usurps our students’ opportunities to learn to think critically. On the other hand, I can’t escape the fact that simply by using ChatGPT, I am feeding the beast. I served as an uncompensated trainer by providing data to train it in the future. I contributed to environmental harm each time I submitted a prompt. And to some extent, I’ve jumped on the bandwagon and potentially fueled greater interest in an impactful technology whose ethical, societal, and economic implications we still don’t fully understand.
As we move forward, we must grapple with how to balance the potential negatives with the excitement of having ChatGPT help us do our work creatively and efficiently.
References
Autor, D. (Host). (2023, May 25). Is AI coming for our jobs? [Audio podcast episode]. In Stay tuned with Preet. Cafe. https://cafe.com/stay-tuned/is-ai-coming-for-our-jobs-with-david-autor/
Caplan, N. (2023a, April 30). Why I’m not excited by (or even using) generative AI. Nigel Caplan. https://nigelcaplan.com/2023/04/30/why-im-not-excited-by-or-even-using-generative-ai/
Caplan, N. (2023b, May 20). Previously on “Resisting Generative AI”…. Nigel Caplan. https://nigelcaplan.com/2023/05/20/previously-on-resisting-generative-ai/
Caplan, N. (2023c, May 30). Questions to ask before using AI in education. Nigel Caplan. https://nigelcaplan.com/2023/05/30/questions-to-ask-before-using-ai-in-education/
Stephen Horowitz is a Professor of Legal English at Georgetown Law. He runs the Georgetown Legal English Blog and co-hosts the USLawEssentials Law & Language podcast. Connect: LinkedIn or Twitter @gtlegalenglish.