ChatGPT and other chatbots like Claude and Gemini have seen stunningly rapid adoption in the last year or so. You probably remember all the AI-related words in our word-of-the-year episode a few months ago. A January survey by Muck Rack found that 64% of PR professionals were already using AI at work, and a more recent February survey by Pew Research Center found that even among all employed Americans, a huge and diverse group that includes people who do almost no writing or editing, 20% say they’ve used ChatGPT at work.
I absolutely believe it’s going to be one of the most important developments in my lifetime — good and bad — affecting writers, editors, teachers, and other people I’d loosely call knowledge workers or creative workers. And I’ll have some upcoming interviews with people about AI. But for today, instead of philosophy, I have small, nit-picky bits of advice about how to cite AI.
The editors of The Chicago Manual of Style, the MLA Handbook, and the Publication Manual of the American Psychological Association have all published blog posts on how they want writers to format this kind of citation.
And note that these guidelines are not for creating whole papers from AI, which is something you clearly should not do, but for when you want to cite a piece of information you got from a tool like ChatGPT.
Chicago
Chicago says it’s often enough to cite the tool in the text by writing something such as “The following limerick was generated by ChatGPT.” But if you need a more formal citation, it says to treat the tool (such as ChatGPT or Claude) as the author and the company (such as OpenAI or Anthropic) as the publisher. For the date, you use the date you generated the material, and then you put the URL at the end.
An important thing to consider is that depending on which tool you use, you may or may not be able to give readers a URL that lets them see the exact output you received. If you can get that link, include it in your citation, and get into the habit of saving it as you do your research, just as you would when gathering information from a website.
I don’t think that’s something most people think of doing yet as they’re chatting with a bot, so if you think you’ll be using your output in any kind of publication, remind yourself before you start that you’ll need to save that URL. Some tools, like ChatGPT Plus, also let you save and name your conversations, so you may be able to go back and retrieve the URL later, again depending on what tool you’re using.
The simplest Chicago-style citation, if you’re just putting a footnote or endnote on that line I mentioned earlier about a limerick being generated by ChatGPT, would read:
- 1. Text generated by ChatGPT, OpenAI, April 4, 2024, https://chat.openai.com/share/c8ae8128-145c-417d-915b-96ade7821581
Chicago also gives a second option if you want to include the prompt, which can be a good idea. Sometimes prompts get really long, though, so I’m not sure how workable that will be in every case. But if you do want to include it, Chicago recommends formatting the citation to say:
- 1. ChatGPT, response to “Write a limerick about the Chicago Manual of Style,” OpenAI, April 4, 2024, https://chat.openai.com/share/c8ae8128-145c-417d-915b-96ade7821581
MLA
The MLA’s blog post on the topic has some useful advice at the beginning, noting that you should cite generative AI for anything you include that was created by it — text, image, data, or something else.
They also recommend including a citation when you take that material and quote it, paraphrase it, or incorporate it into your own work. Further, they say to acknowledge all functional uses of the tools in a note, including using AI to edit your text or using it for translation.
And finally, they say to vet the secondary sources you get from AI. For example, sometimes AI will give you sources with links for the information it gives you. The MLA says if you want to use that information, click through on the link, and then use the information in that source and cite that source the same way you would if you had found it through a Google search. ChatGPT was just the conduit to the information in that case, not the source of the information.
When you get to actually creating the citations, MLA points you to their core citation elements, which they adapt to all different kinds of citations. These are fields such as title of the source, title of the container, date, version, and so on.
They say to include your prompt as the title of the source, and to treat the tool itself as the title of the container.
Date is tricky: they say to use the date you generated the response, but in their examples they also use the date of the ChatGPT version for the version element, so their examples actually have two dates in the citation. Here’s an example. You start with the prompt in quotation marks and then write “prompt” after it:
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
And they are flexible. They basically say that as long as you can make an argument for how the elements fit into their standard set of citation elements, you’re fine.
APA
Moving on to APA style: it has the simplest citation format but some interesting additional advice.
First, the citations they show simply read:
OpenAI. (2024). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
They don’t ask for the prompt in the citations, but say you should include the prompt in your text or in your methods section. They note you could also include your prompts and the full text of the responses in an appendix or in supplementary material. They are the only one of the three style guides I’ve seen that provides guidance acknowledging that both the prompts and the responses can get really long sometimes.
And like the MLA, the APA also recommends going to the original sources if you can, treating chatbots more like a search engine than the source of answers.
Cautions
And on that note, I wouldn’t feel right about this post without cautioning you that AI chatbots often provide inaccurate information, and they do it in a convincing and plausible-sounding way. For example, when I asked ChatGPT to write a bio for me, it said I have a degree from UC Santa Cruz in linguistics. Well, I did take classes at UC Santa Cruz, and I do kind of work in linguistics, but I do not actually have any degree from UC Santa Cruz, nor do I have a degree in linguistics. But given my background, if you didn’t know the details, you could easily believe that I did. So you have to be really careful when using information you get from ChatGPT, Gemini, Claude, and so on. You can’t just believe them no matter how plausible the answers sound. You have to confirm everything.
Finally, there’s more detail in the blog posts from each of the style guides about specific situations and requirements, so you should definitely check those out using the links above in each section if you need to get your citations exactly right. But I hope this gives you at least a good sense of how to go about crediting your source when that source is a chatbot.
A limerick
If you’re curious about the limerick ChatGPT generated about The Chicago Manual of Style, here you go:
In Chicago, where styles are a mile,
They crafted a manual with guile.
With commas and quotes,
[And] Footnotes in boats,
Their rules make the editors smile.
That was its first try; I added one word to make the meter work better. It also seemed to bring in the idea that Chicago has the Magnificent Mile and boat tours, which I thought was pretty cool.
But lest you get too impressed, I’ll also tell you that I first tried over and over to get it to make an example with a fun name for the cardboard tube that’s left over when you’ve used all the paper towels, and after probably 10 tries I still didn’t have anything remotely usable, so I gave up and switched to the limerick idea. Sometimes AI is great. And sometimes it’s not.