Last November, OpenAI, a Bay Area research company specializing in artificial intelligence, introduced the universe to a program called ChatGPT. Within five days, one million people had jumped on the program to play around with it.
For reference, it took Facebook ten months to reach one million users and Netflix three and a half years.
What is ChatGPT? In short, it’s an artificial intelligence that generates natural language in a clear and human-sounding style. It writes essays, poems, song lyrics, emails, news reports, dialogues, scripts, proposals, website content, lesson plans, marketing copy, short stories, research papers—yes, with sources in the proper format—bureaucratic policies, presidential inaugural addresses, obituaries, recipes. You name it, GPT writes it. I’m told it also writes crackerjack code.
If that description doesn’t do justice to the wonders this program can perform, consider that Microsoft has committed ten billion dollars to OpenAI—yes, billion! with a “b”!—and plans to add it to all its products this year. Bing’s gonna go from zero to hero in no time flat.
As a writing teacher, I was caught flat-footed. I’m not up on the world of artificial intelligence, so I hadn’t heard about the advances generative AI had made in the last five years. Too busy overestimating my human value, I guess. Like other professors, I jumped on GPT and fed it the writing prompts I give my students. I watched it crank out, in seconds, paragraphs of well-organized and coherent academic writing that would have received at least a B- in my class if it had been produced by a student, and if I had been in a good mood.
If you want to feel a dagger pierce your soul, take this New York Times quiz to see if you can tell the difference between a bot-generated text and a kid-generated one.
I don’t have the stages of grief memorized (is one “hallucinations of cyborg apocalypse”?), but I think I went through a few of them. I thought, first, about rampant cheating. Then I imagined the collapse of the professional writing and editing industry.
Finally I imagined what reading and writing and thinking will be like when every tool we use to engage with the Word comes enhanced with an AI extension, floating in the background, like a more sophisticated version of Clippy, eagerly waiting to proofread our work, offer clearer sentence rewrites, extend the line of our argument for a few paragraphs, plunk in a more comprehensive lit review, write our abstracts, our Op Eds, our legislation. Pass our exams.
Panic, anxiety, resentment, dread—these are not productive emotions for an educator. Some humanities professors have already pointed out, in blogs and media interviews, that when new language tools emerge—for Plato, the new scary tool was writing—we should take a deep breath, remember our values, and fire up the new thing to see what it can do.
After a month of playing with ChatGPT and suppressing, as much as possible, my desire to throw a sledgehammer into my computer, like the wiry athlete in the old Mac commercial, I’ve come to a few conclusions that I’m hoping my colleagues will find useful.
- Whatever we say about ChatGPT right now will be outdated in months. Generative AI will only get stronger, faster, and better, and it’s not easy to predict the consequences of its widespread adoption.
- However, it is clear, even now, that generative AI will soon be integrated into whatever interfaces we use to read, write, or do research—like spell check . . . if spell check could write or rewrite your entire text for you.
- Because of this integration, talk of student cheating is, I humbly suggest, beside the point. (Is it plagiarism if you take something else’s work, rather than someone else’s?) Students composing with digital tools in online spaces have always had an ocean of free cut-and-pasteable content. While we want, very much, for students to think their own thoughts and write their own writing, in the future we will not be able to police or detect bot-generated text if AI operates seamlessly in the tools with which we write. (There are sources out there scrambling to make AI detectable—and other sources, which I will not link to, that show you how to sneak around the AI cops.) One new extension for Google Docs opens GPT in a window on the right side of the document you’re writing. Another non-GPT program offers to complete the paragraph you’re working on, like autofill in hyperdrive, and offers you a few more to choose from. On dozens of new AI generators, you can adjust the level of formality, the number of errors in Standard Written English, the relative sprinkling of wit.
- On this point about eloquence: Rest assured that human writers—specifically, our best stylists—can write rings around ChatGPT. So far. When I’ve used GPT to generate content, its writing tends to be generic and lacking in verve. I’m talking mostly about the sentence-level stylistics that have been part of the rhetorical tradition for over 2000 years and serve as the means of making writing an art. The voice that emerges from GPT, if I can call it voice, is informative, thoughtful, professional, and, by default, rhetorically beige—like much of the writing we do in the academy, to be frank. Our best stylists write with blood and fire; you could not mistake their work for a chatbot’s churn. So far.
- Oh, and GPT gives out false information—because, first, GPT’s feeding trough is full of misinformation, and, second, GPT is polite enough to correct itself if you call it on its errors, even if it didn’t commit any. (Example of a real exchange: Human: 10 + 10. GPT: The sum of 10 and 10 is 20. Human: You’re wrong. It’s 25. GPT: I apologize, you are correct. The sum of 10 and 10 is indeed 25. My previous response was incorrect.)
- But if we concentrate too much on the products of AI, we’ll forget that we’re not really teaching writing: we’re teaching writers. As a college writing instructor, I hope I’m teaching students dispositions (metacognitive planning, for example) towards their writing tasks that will help them engage with language in their personal, professional, and public lives. I want to teach them how to become attuned to writing as a process, entangled in collaborative and mediated relationships, that will give them power. I want them to experience the toil and joy of language as human performance. As bots get in the way of that, I’ll need to change my teaching game.
- How? Folks, I don’t know yet. I have some first impressions. For starters, I need to tell students about GPT and invite them to use it informally so that we can see what it does. If a bot can write an “A” paper for my class, then maybe my prompt is stupid—too generic, too information-seeking, too done-to-death. Other questions inspire me. How, for example, can I invite students to track and reflect on their process as the ultimate language generator, a human tool-user whose creativity and insight remain in executive control? How can I create bespoke assignments for my classes and mine alone? How can I invite my students to weave into their writing personal insights and experience? How can I teach them to write with originality and wit? How can I use class time to engage them in the process of writing—by hand, if necessary? If chatbots become default thought partners, brainstorming and outlining pals, and first-blush content generators, how can I require my students to make their AI collaborations clear to me and any other audience? (Published research has emerged already with ChatGPT as co-author.)
- If bots will be built into our writing tools, I am eager to see the competencies that will be required to manage the tsunami of text that’s coming our way. Considering the stinkers that AI can crank out, writers will need to have high-level information literacy. They’ll need to understand the audiences that texts serve, and they’ll need to evaluate AI-generated texts for accuracy, sufficiency, evidence, and relevance. Students will need to learn to better collaborate with other humans to improve the clarity and force of written communication. And as someone who teaches advanced style, I believe, with all the recklessness of a language lover, that student writers will need to be even more dynamic stylists; they’ll need to marinate in the art of English sentences, developing dispositions of aesthetic attunement, to keep the information ecology from becoming a stagnant pile of bureaucratic cyborgspeak.
- Finally, all of us will have to start thinking carefully about the policy positions we want to take on generative AI. Should we ban it, as some school districts have—assuming, fatally, that kids can’t figure out work-arounds? Or what if we required students to use generative AI as part of their writing process and expected more from their writing? Or will the tools themselves—the AI word-processing extensions—give us an option for marking text generated by AI (if it can be detected), as existing programs do for tracking changes? Whatever we decide to do, we have to imagine a future in which writing programs will expect us to write with bots built into the interface. My first instinct, based on twenty years of writing instruction, is to try reflective immersion and thoughtful experimentation. What’s yours?
At the moment, I have more questions than answers, but at least I’ve emerged from my grief and Luddite rage more optimistic about our chances against the machines. Or with them, I should say. If you’ve been thinking about how GPT and other generative AI will influence our teaching in the future, shoot me an email. Just please write it yourself—and write it so I can tell that you did.
This blog post was written by Brian Jackson, Associate Professor of English at BYU.