This post was written by Carlee Schmidt, HC Undergraduate Fellow
I didn’t know the Creature had a voice.
I also grew up thinking the towering green figure was named Frankenstein, when actually that’s the name of the doctor who created it. My visual memory is encapsulated in a large plate my mom used around Halloween when I was a kid: square head, scars on the green smiling face, and the classic bolts in the neck. He was a festive Halloween decoration, nothing more.
Until I saw Benedict Cumberbatch portray him in a filmed stage production of Frankenstein, shown for one night only as a Halloween special at the local Provo Cinemark just a couple of weeks ago. This production was purposefully truer to Mary Shelley’s original story – in particular, the directors wanted to revive the voice of the intelligent, philosophical Creature, a voice Hollywood stripped away when it reduced him to a silent, dumb, lumbering figure.
The Creature struggles to discover himself, to find goodness in humanity, to be seen truly; he is never accepted by society, never loved by his creator, and never receives the companionship for which he fervently pleads.
He asks troubling questions, but even more, his very existence asks questions. What constitutes creation? What are the moral limits (and obligations) of man meddling with the science of life? At what point do we consider some other living thing as sentient as a human? And when we do, how do we treat it?
These questions feel especially poignant because of what I’ve learned recently about artificial intelligence (hereafter AI). Even though we don’t have humanoid robots as house maids, AI is entering the realm of our daily lives: Spotify uses machine learning to build my Discover Weekly playlist from what I have listened to in the past. With the industrial revolution, we began automating physical labor; now, we are starting to automate thinking.
Right now, those common kinds of AI are limited to the commands humans give them. However, with people investing tens of billions of dollars in the development of neural networks, the stuff of science fiction is on the horizon: artificially created intelligence that surpasses humanity’s thinking capacities. When the likes of Stephen Hawking and Bill Gates publicly express deep concern about AI safety, it starts feeling more real. (Read more at 80,000 Hours, Open Philanthropy Project, and Global Priorities Project if you need a bit more to help this sink in.)
This was numbingly scary to me, until I learned that the way these systems learn parallels the way we experience, develop, and process emotions. After interacting with information – guessing that a picture has a 5 in it, then being told how correct that guess was – the AI guesses more accurately, but in a manner that reflects a “gut feeling” not unlike our own. Information goes in, is condensed into the essence of remembered experience, and then new information incites a quick reaction not based on sequential logic.
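That guess-and-feedback loop can be sketched in a few lines of code. What follows is a toy miniature of my own devising – a single artificial “neuron” learning to tell bright tiny images from dark ones – not how Spotify or a real digit recognizer actually works, but the same basic rhythm: guess, get feedback, adjust.

```python
import random

# Toy guess-and-feedback loop (a hypothetical miniature, not a real system):
# a single artificial neuron learns to label 2x2 "images" as bright (1) or dark (0).

# Each example is a flattened 2x2 pixel grid plus its correct label.
examples = [
    ([1, 1, 1, 1], 1),  # fully lit        -> bright
    ([1, 1, 1, 0], 1),  # mostly lit       -> bright
    ([0, 1, 1, 1], 1),  # mostly lit       -> bright
    ([1, 0, 0, 0], 0),  # one pixel lit    -> dark
    ([0, 0, 0, 1], 0),  # one pixel lit    -> dark
    ([0, 0, 0, 0], 0),  # nothing lit      -> dark
]

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(4)]
bias = 0.0
lr = 0.5  # learning rate: how strongly each piece of feedback nudges the guesses

for _ in range(100):  # repeat the guess/feedback cycle many times
    for pixels, label in examples:
        score = sum(w * p for w, p in zip(weights, pixels)) + bias
        prediction = 1 if score > 0 else 0
        error = label - prediction  # the feedback: 0 if right, ±1 if wrong
        # nudge each weight so the next guess on this image is a little better
        weights = [w + lr * error * p for w, p in zip(weights, pixels)]
        bias += lr * error

# After training, the neuron's quick "gut" score separates bright from dark
# without any step-by-step reasoning about individual pixels.
```

Nothing here understands brightness; the weights simply condense the remembered feedback until a new image provokes the right snap reaction – which is the parallel that made the whole idea feel less alien to me.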
But isn’t our capacity for emotion what makes us human? If AI develops emotion, what does that classify them as? Should we refer to AI as “it” or “them” or what?
I have so many questions and no satisfying answers. Neither did the Creature, nor the production that gave him his voice back. But I know that we’re not the only ones who have asked these questions.
I recently heard someone explain the word “dialectic” as “exploring the possibilities within the space of an idea.” I think this is a huge benefit of the humanities – I can’t quite wrap my head around AI, but I can immerse myself in the world of Frankenstein. We can explore these questions in an engaging, graspable way because of a story, because we can talk in terms of Dr. Frankenstein and the Creature instead of abstract possibilities.
So the next time someone questions your major or your career, show them this: