The threat of robots overtaking the workforce and making humans superfluous would have seemed ludicrous decades ago. Now, while still seemingly a stretch, it no longer carries quite the same air of outlandishness. For example, J.P. Wright discussed how his job as a locomotive engineer has shifted: work that was once done by seven people is now controlled by one person pushing buttons on a box. Cole Stangler, the article’s author, shows how the workforce will shift as the world becomes more technologically advanced, yet he acknowledges the long-running debate behind the roboticizing of work and notes that we still do not live in a world run by robots. He also points out that one reason robots have not replaced us is the set of barriers to eliminating human jobs, including unions and work contracts. However, the article ends with these ominous words: “In spite of barriers like these, Martin Ford believes the robot age is inevitable. He acknowledges ‘there have been a lot of false alarms’ but likens his camp to the little boy who cried wolf. ‘Eventually,’ Ford says, ‘the wolf does show up.’”
This is not the only ominous tale of a robot takeover. In May 2015, Kathleen Elkins of Business Insider reported that “Experts predict robots will take over 30% of our jobs by 2025—and white-collar jobs aren’t immune.” Harry J. Holzer suggests that with an increase of the minimum wage to $15, “fast-food workers might be more easily replaced by robots.”
How have advocates of the humanities responded to these robot crises? By emphasizing the inability of robots to replace human emotion, experience, and intelligence—in short, their humanity. Jonathan Malesic goes so far as to envision a world without work: “If universities are to prepare people for a world without work, then . . . they will have to shift focus away from training students for dying professions and toward building their knowledge in fields that a postwork society will actually need: arts, literature, politics, religion, and ethics.” Such a take on the future may have some merit, but it seems a bit apocalyptic. However, Malesic does raise an interesting point: robots will not be able to create a humanistic experience. For that, we need humans.
There are simply some things that robots or digital computation will not be able to replace. One example is the conveyance and understanding of language. In an attempt to dissect human language, IBM’s Watson Developer Cloud team has created a Tone Analyzer, which “uses linguistic analysis to detect and interpret emotional, social, and writing cues that are located within the text.” The Tone Analyzer purports to do the work of social interaction by eliminating the need to think ahead when writing. Analyzing emotion tone, social tone, and writing tone, it displays percentages for each of these categories and offers a thesaurus of words to help you create the type of persona you wish to convey to your readers.
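For readers curious about what querying such a service actually looks like, here is a minimal sketch using the later ibm-watson Python SDK (ToneAnalyzerV3). The API key, the service URL, and the sample sentence are placeholders of my own, and the tone categories returned by this version of the service differ somewhat from those of the experimental demo described above.

```python
# Minimal sketch: sending a piece of text to IBM Watson's Tone Analyzer.
# "YOUR_API_KEY" and the service URL are placeholders, not real credentials.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",        # a published version date of the service
    authenticator=authenticator,
)
tone_analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com"
)

text = "I am extremely disappointed that my grade was not changed."
result = tone_analyzer.tone(
    {"text": text}, content_type="application/json"
).get_result()

# Print each document-level tone the service detected, as a percentage.
for tone in result["document_tone"]["tones"]:
    print(f'{tone["tone_name"]}: {tone["score"] * 100:.0f}%')
```

The point of the sketch is simply that the tool reduces a piece of writing to a short list of labeled scores, which is exactly the kind of flattening the next paragraph takes issue with.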
The simplicity of such a device reduces the deep nuances of language to an almost laughable degree. Analyzing generic words and phrases for their agreeableness, conscientiousness, or openness does not take into account the syntactical significance of sentences, nor does it capture the deeper nuances of language. Understanding those nuances is a lifelong pursuit. Learning the basics of a language allows people to communicate only on a superficial level, which is why it is difficult to fully express oneself in a language that has not been mastered. Words are more than their denotations; connotation creates an effect that elevates the sophistication of speech. As people develop their linguistic abilities, they become more apt to pick up on subtle features such as metaphor, simile, and double entendre. Language is powerful and can elicit strong emotions, beliefs, and ideologies. A machine simply cannot pick up on that. Such a debate is not new to those who study languages. David Bellos writes about the importance of human translators over Google Translate, arguing that Google Translate “is not conceived or programmed to take into account the purpose, real-world context or style of any utterance.” The nuances of language, according to Bellos, are hard even for a human to interpret. Thus, machines are not equipped to analyze how language comes across to an audience because they lack the social contexts that enhance language and create understanding.
If you find yourself skeptical about such an assertion, you can test the Tone Analyzer for yourself. I tested it out with a belligerent email I had received from a first-year writing student. According to the Tone Analyzer, his email was 85% social tone, including 25 words of Agreeableness, 20 words of Conscientiousness, 40 words of Openness, 7 Analytical words, 5 Tentative words, 2 Cheerful words, 1 Negative word, and 0 Angry words. With this assessment, the Tone Analyzer missed the mark even more than the student, which isn’t really that surprising since it only analyzes generic words and phrases. When the Tone Analyzer can assess crucial rhetorical tools such as context and audience, and, for that matter, tone, perhaps I will consider using it as a resource in constructing written communication. Until then, I think I’ll stick with studying the nuances of language for myself.
By Brittany Bruner, Humanities Center Intern
Photo: IBM Watson by Clockready