Humanists as Activists: Rewriting the Narratives that Lurk Beneath Technological Tools

Artificial Intelligence has been a topic of special interest for the BYU Humanities Center during the Winter 2023 semester. With masterful blog posts and colloquium presentations by Brian Jackson from the English Department, Earl Brown from Linguistics, and Steve Richardson from Computer Science, we have been enlightened, challenged, and (somewhat) reassured about our ability to embrace and cope with the challenges associated with ChatGPT, the artificial intelligence chatbot launched by OpenAI in November 2022. Our College has also assembled an AI Task Force that has produced an especially useful handout for navigating the challenges of writing instruction at the dawn of the ChatGPT era, suggesting that “If we value what writing does for us and our students, we will need to change how we teach writing as generative AI programs become entangled in the writing process” (ChatGPT, 2023, p. 1). One of its expectations is that we teach our students to know the difference between the “rhetorically beige output of a bot and the creative power of our best writers” (p. 1).

Though this issue is certainly relevant, it seems odd that we continue to be caught off guard by the advent of new technologies in the twenty-first century. It is as though we are constantly living in a stereotypical sci-fi drama that pits seemingly helpless humans against machines. Andrea Guzman explains that “Science fiction across the decades and genres portrays intelligent machines as helpful if they are kept in check, but when they gain control—the most likely scenario—the consequences are dire” (2017, p. 6). I see our current struggle with AI as part of that narrative of an enduring, unending conflict between human and artificial intelligences.

Building on a concept promoted by communications theorist James W. Carey, I concur that technology should be seen “less as a physical contrivance than as a cultural performance: more on the model of a theatre that contains and shapes our interaction than a natural force acting upon us from the outside” (1990, p. 247). As legitimate actors in this cultural drama, we, as humanists, do not have to resign ourselves to the apparent inevitability of technological breakthroughs, scrambling to make whatever accommodations we can to brave the unforeseen consequences of programs such as ChatGPT. And I am not writing this as a Luddite who is summarily dismissing all new technologies. Instead, I am suggesting that we have an opportunity and responsibility to band together as teachers, question aspects of new technologies that impede many of our essential pedagogical goals, and demand that companies such as OpenAI respond to our concerns.

In addition to requiring accountability for some of the effects of AI technologies in the classroom, I am also advocating for a more aggressive interrogation of the hidden messages that lurk beneath the surface of technological tools, messages that have remained unquestioned and have therefore become normalized. Safiya Umoja Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism[1] clearly explicates how the technological underpinnings of search engines (a technology that we might assume is value free and transcends social norms) instead reveal an insidious racism. A professor at UCLA, co-founder of the UCLA Center for Critical Internet Inquiry, and an African American, Noble fills a vital role in addressing algorithmic discrimination, work for which she won a MacArthur Foundation Fellowship in 2022. Taking on what many of us might consider a topic for research in the humanities, Noble is especially well poised to address both technology and issues of social relevance. Reading her book allows us to see hidden aspects of racism deeply embedded within a search engine, a place where we might not expect racism to be lurking.

And there are other troubling narratives firmly entrenched in AI. For example, I have noted that many popular AI chatbots embody some of our deepest fears and concerns about female voices. According to Leslie Dunn and Nancy Jones, “As a material link between body and culture, self and other, the voice has been endlessly fascinating to artists and critics. Yet it is the voices of women that have inspired the greatest fascination, as well as the deepest ambivalence, because the female voice signifies sexual otherness as well as a source of sexual and cultural power” (Dunn and Jones, 1994, front matter).[2] Indeed, female AI singing voices amplify underlying cultural narratives about singing women to a highly charged emotional level that leaves no doubt as to their meanings. And the most prevalent and enduring computer voices today are those of Siri, Alexa, and Google Assistant, which are typically gendered female, particularly in the United States (Potter, 2011; Griggs, 2011).

I came to the study of gendered AI voices after teaching a course on the performance of gender in music and theater. After that experience, I wondered whether any of these female-gendered vocal assistants could sing. After listening to a variety of YouTube performances of singing chatbots, I discovered that they can. In fact, female singing voices in AI seem to engender even more potent and unfiltered manifestations of the sexual otherness and power described by Dunn and Jones than their human counterparts do.

For example, one would be hard pressed to find an icon more intimately associated with the power of the female singing voice than the siren. Millennia of Western siren lore remind us that “the siren and her sisters may therefore be creatures whose vocal beauty obscures the perils and dangers of embodied union, serving as a metaphor for trusting the ear above the eye” (Austern and Naroditskaya, 2006, p. 5). I suggest that some of the same fascination for and fear of sirens’ voices since the dawn of Western antiquity continue in modern YouTube performances, especially evident in a 2019 duet featuring the humanoid robot Sophia and the American talk show host Jimmy Fallon. The video has garnered more than 23 million views, and many viewers reported feeling chills when listening to her sing. Her performance clearly startled many viewers, such as the one who said, “this petrifies me honestly . . . Sophia is deathly scary because you don’t know what she can do,” or the one who exclaimed, “Sophia’s emotions are on point! I’m getting chills yet exhilarated at the same time!” (Lawson, 2023, pp. 24-25). Many of the comments demonstrate both the fascination and the terror associated with listening to sirens, even robotic ones.

In the end, I am arguing that technological advances in AI are not value free and should not be considered independent of the cultural contexts from which they emerge, particularly when they promote profoundly racist and misogynist perspectives. Further, the clarion sound of the female singing voice in AI has been amplified to a point that it is no longer possible to ignore, especially given the abundant examples of raw and unfiltered responses to robotic female voices on social media platforms like YouTube (Lawson, 2023).

If, as John Colapinto suggests, the goal of enhancing connectivity between humans and robots will come to fruition in the twenty-first century (Colapinto, 2021, pp. 99-101), I suggest that we call into question the underlying historical blueprints for racial and female misrepresentation. If algorithms can be a blueprint for replicating racism, and the performance of a robotic siren can be a blueprint for reproducing misogyny, the time has certainly come for humanistic scholars to act. We must interrogate the underlying cultural narratives that lurk beneath technological tools and demand their rewriting.

This blog post was written by Francesca R. Sborgi Lawson, Humanities Professor of Ethnomusicology and Professor of Comparative Arts and Letters at Brigham Young University.

[1] Many thanks to Rex Nielson for recommending this text.

[2] For examples of the ways in which female singing voices have been feared and challenged throughout history, from castrating young males to create the treble sound of the female voice to disparaging the personal and professional lives of prima donne, see Freitas, 2009; Heller, 2003; Austern, 2006; Rutherford, 2006; Clément, 1988; and Dunn and Jones, 1994.

Works Cited

Austern, L (2006) ‘Teach me to heare mermaids singing’: embodiments of (acoustic) pleasure and danger in the modern West, In: Austern, L, Naroditskaya, I (eds) Music of the sirens. Indiana University Press, Bloomington, pp. 52-104.

Austern, L and Naroditskaya, I (2006) Introduction. In: Austern, L, Naroditskaya, I (eds) Music of the sirens. Indiana University Press, Bloomington, pp. 1-15.

Carey, J (1990) Technology as a totem for culture and a defense of the oral tradition. American Journalism 7(4): 242-251. https://doi.org/10.1080/08821127.1990.10731305, Accessed on 2/14/2023.

ChatGPT and the Near Future of Writing Instruction (2023) Brigham Young University College of Humanities AI Task Force, February 2023 (beta).

Clément, C (1988) Opera, or the undoing of women. Foreword by Susan McClary. University of Minnesota Press, Minneapolis.

Colapinto, J (2021) This is the Voice. Simon and Schuster, New York.

Dunn, L, Jones, N (1994) Introduction. In: Dunn, L, Jones, N (eds) Embodied voices: representing female vocality in western culture. Cambridge University Press, Cambridge, pp. 1-13.

Freitas, R (2009) Portrait of a castrato: politics, patronage, and music in the life of Atto Melani. Cambridge University Press, Cambridge.

Griggs, B (2011) Why computer voices are mostly female. CNN.com, retrieved from LexisNexis Academic. https://www.cnn.com/2011/10/21/tech/innovation/female-computer, Accessed on 2/21/2023.

Guzman, A (2017) Making AI safe for humans: a conversation with Siri. In: Gehl, RW, Bakardjieva, M (eds) Socialbots and their friends: digital media and the automation of sociality. Routledge, New York, pp. 69-85.

Heller, W (2003) Emblems of eloquence: opera and women’s voices in seventeenth-century Venice. University of California Press, Berkeley.

Lawson, F (2023) Why can’t Siri sing? Cultural narratives that constrain female singing voices in AI. Unpublished manuscript.

Noble, S (2018) Algorithms of oppression: How search engines reinforce racism. New York University Press, New York.

Potter, N (2011) Why are computer voices female? Ask Siri. ABCNews online. http://abcnews.go.com, Accessed on 2/21/2023.

Rutherford, S (2006) The prima donna and opera, 1815-1930. Cambridge University Press, Cambridge.
