A complicated piece of machinery with turning circular cogs and a chain

The Work of Humanities in an Age of AI Production

This essay was written by Brian Croxall, a BYU Humanities Center faculty fellow.

As someone who has been—by one measure or another—very online for about 35 years, it’s fascinating to see technologies come and, sometimes, go. Back in February 2021, the venerable art auction house Christie’s sold its first-ever “purely digital work of art.” The artist was Michael Joseph Winkelmann, who is known professionally as Beeple, and the work in question—Everydays: The First 5000 Days—was a non-fungible token or NFT. Fetching more than $69 million, the sale brought attention to an unexpected use of blockchain technology. Suddenly, crypto technologies were for more than speculative investments and money laundering; you could now do both of those while simultaneously “collecting” art.

A building-sized replica of a retro Gameboy being constructed, as a crowd of people gathers beneath it
Beeple’s Everyday from July 21, 2020

Since I teach digital culture as part of my work in digital humanities, during the Fall 2021 semester, I reworked the syllabus so we had a day to talk about NFTs. The subject was still on the syllabus in Fall 2022, but at that point it already felt cringe, as the kids say. The following year, I quietly dropped the subject altogether.

On November 30, 2022, a mere twenty days after my last discussion with students about NFTs, OpenAI introduced its generative AI chatbot, ChatGPT, to the world. It seemed plausible that this application of the large language model (LLM) would vanish from the zeitgeist as quickly as the Bored Ape Yacht Club. Of course, we know that very much didn’t happen.

An ape with blue fur wearing a black-and-white striped shirt and a fez stares forward boredly, also sporting a pair of red and blue 3D glasses
Bored Ape #7743

Instead, generative AI (genAI) has been the central topic of conversation for the last three years in my academic circles. Annually, at both the Modern Language Association Convention and the international Digital Humanities conference, it’s been possible to move continuously from one AI session to the next. At least one full book on the impact of generative AI in the college classroom appeared less than nine months after ChatGPT’s launch, and I can attest that plenty more continue to appear. The number of think pieces published by The Atlantic alone would outstrip any syllabus’s attempt to contain them.

Portrait of Walter Benjamin, a well-dressed man with glasses and a small mustache
Portrait of Walter Benjamin

I had been aware of some of the developments in GPTs prior to this November 2022 launch, having played with AIDungeon (which was developed by Nick Walton, a BYU computer science major) as well as DALL·E, OpenAI’s art generator that preceded ChatGPT by several months. In fact, if you look at my 2022 syllabus, you’ll notice that the final appearance of NFTs coincided with the first appearance of genAI, as well as Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction,” for good measure. By the beginning of the Fall 2023 semester, I had not only added readings about LLMs to my syllabus, but I had also reworked the course’s central writing assignment to require the students to start writing with genAI. I suspected that some of them wouldn’t like the idea, if only because new assignments are never appealing to students. (This isn’t a lack of curiosity on their part or indicative of laziness, by the way; instead, it’s a rational response: they already know how to do assignments that they have done before, and they have a strong sense of how they will be rewarded in connection to how much effort they put in.) But since, in the words of Annette Vee, Tim Laquintano, and Carly Schnitzler, “generative AI is the most influential technology in writing in decades,” it seemed ridiculous not to use it in our writing.

Fast forward to the present, skipping over several iterations on that assignment, and I find myself teaching a new class called “AI for the Humanities” this semester. This is a broad collaboration across the College of Humanities, as it is listed not only in Digital Humanities, but also Editing and Publishing, English, and Interdisciplinary Humanities. The course is aimed at juniors and seniors who are majoring in the humanities and will in short order be moving from the university into the workforce where, I assume, they will have to argue for why company X or Y should hire them, a real live human who needs a wage, rather than outsourcing particular tasks to genAI. To this end, the motto we’ve adopted for the class comes from 1 Peter 3:15: “and be ready always to give an answer to every man that asketh you a reason of the hope that is in you with meekness and fear.” But we are also not assuming that genAI is good for nothing; we are using it as much as possible so we can test the limits of what it can and cannot profitably do.

Carved sculpture of Aristotle
Sculpture of Aristotle

In the first month of the class, we have been reading from philosophy, psychology, education, and scripture (Abraham 3:18-22, D&C 93:29-37, D&C 130:18-19) to try to get a handle on what intelligence actually is. If we know that, my reasoning goes, we can then try to figure out whether machines actually have it or not. A lot of this reading is unfamiliar to my students, as well as to me. After struggling my way through Book 6 of Aristotle’s Nicomachean Ethics and the related Wikipedia entries, I asked a genAI to give me a summary of the book’s different sections. Triangulating between what I’d read in all three locations, I ensured that I did, in fact, know the differences between nous, phronesis, and technē.

My students got a bit more creative with their use of the LLMs. One of them asked Anthropic’s Claude to stage a dialogue between Aristotle and Joseph Smith in the afterlife about their contrasting conceptions of intelligence. Another asked for an explanation of phronesis, or prudence, in the voice of a TikTok influencer:

Hey bestie! Let’s talk about prudence—but not the boring kind your grandma mentions. Aristotle actually had THE framework for living your best life, and I’m obsessed.

So here’s the tea: prudent people aren’t just book-smart. They know how to deliberate well about what’s actually good for THEM. Not just “should I meal prep?” but “what choices lead to me thriving overall?” It’s that main character energy where you’re making decisions aligned with your highest good.

A few days later, a student got the genAI to create a Taylor Swift song about Howard Gardner’s theory of multiple intelligences. GenAI excels at this type of textual mashup; Ted Chiang describes the ability as “a ‘blur’ tool for paragraphs instead of photos.” Part of what is stunning about these outputs is the speed at which they are generated; the other part is how close they get to something that you didn’t know you wanted. But for all their facility at smearing voices across and into one another, genAI never comes up with these ideas on its own. It might be generative in one sense, but this generation only happens in response to the interesting sparks that my students have been feeding it. The LLMs can produce text—and they are very good at this—but they do not know what they are saying, behaving more like stochastic parrots than anything that approaches human intelligence.

In the coming weeks, my students will continue to write with—and against—LLMs. But we will also learn more about how genAI is trained, the legal landscape that they operate within (or in contradiction to, as mass plagiarism devices), and how they are reshaping the work of libraries and archives. We will even take a crack at fine-tuning a model and using it to vibecode for us, without my providing them with any instruction in programming. I am very much making this up as I go, a pedagogical high-wire act that is equal parts terrifying and thrilling.

Leonardo da Vinci's painting, Mona Lisa

Looking back, it’s easy to tell that, while they still exist, NFTs were a million-dollar flash in the pan. GenAI seems to be different by an order of magnitude, whether we are looking at dollars or years spent. What I don’t know yet is whether my “AI for the Humanities” class will be successful or whether there will be any call to teach it again a year from now. Perhaps I won’t know until some of my current students graduate and start looking for jobs. And perhaps I will never know. The uncertainty in this prospect—both of what I’m doing in my next class session and of the endeavor as a whole—might be one of the most deeply human things about it. Humans are messy and complicated, whereas I have never had an LLM respond to me in anything less than full-throated certitude. [1] Why is the Mona Lisa smiling? We don’t know, but I suspect that Leonardo didn’t know either, and that’s why we continue to want to talk about it. Once we reach the end of the semester, I am convinced that my students will still be uncertain about genAI and their relation to it. And I wouldn’t have it any other way.


References

[1] Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? suggests that the difference between humans and androids is the capacity for empathy that the former possess. But the novel is at great pains to show that all of the humans within its pages are anything but empathetic. Instead, I think it is the lack of certainty that Dick’s humans face that marks them as real.
