Alexa, I’m lonely
Writer Judith Shulevitz talks about her recent Atlantic cover story exploring the appeal of voice assistants and the risks they pose.
Speaker 1: Major funding for BackStory is provided by an anonymous donor, The National Endowment for the Humanities, and the Joseph and Robert Cornell Memorial Foundation.
Brian Balogh: From Virginia Humanities, this is BackStory.
Brian Balogh: Welcome to BackStory, the show that explains the history behind today’s headlines. I’m Brian Balogh.
Nathan Connolly: I’m Nathan Connolly.
Ed Ayers: I’m Ed Ayers.
Nathan Connolly: If you’re new to the podcast, we’re all historians, and each week, along with our colleague Joanne Freeman, we explore a different aspect of American history.
Ed Ayers: If you’re one of the millions of Americans who owns a smart speaker, you already know how it can make your daily life a bit easier.
J Shulevitz: It does seem more convenient to have a thing in your home that can, for example, tell you the steps of a recipe. Whereas otherwise, if the recipe was online and you were cooking, you had to wipe your hands and then type into your computer, or punch in the code on your phone, or clean your thumb so you could, you know… all of that took a lot of time.
Ed Ayers: That’s journalist Judith Shulevitz. She recently wrote about the rise of smart speakers and voice assistants in The Atlantic. In her own life, she’s not only found her Google Assistant convenient, but she’s noticed she’s started developing a kind of personal relationship with it.
J Shulevitz: The voice sort of enters us more deeply, and more physically, and we form relationships with voices. Evolutionarily speaking, for hundreds of thousands of years, if we heard a voice, it meant that a person was nearby. Only with the advent of the recorded voice did the voice become detached from a body, from a fellow presence. So we are evolutionarily designed to respond in this kind of physical way to voices.
J Shulevitz: So, it’s very hard for our brains not to process a voice, even a computer voice, as a sort of appeal from another human and react to some degree emotionally and physically. So they have a greater presence. So even I have found myself saying to my Google Assistant, you know, I’m lonely and it will say, “I wish I could give you a hug, but for now, let me play you a song.” So, you know, it’s a kind of simulation of companionship and it can kind of do the job.
Ed Ayers: Today we probably still laugh when we momentarily catch ourselves talking to our virtual assistants as if they were somehow real. But technology is currently being developed to deepen our emotional attachment to these very devices.
J Shulevitz: There is a very hot new field in artificial intelligence that deals with artificial emotional intelligence, and there’s a lot of research being done on what’s called emotion detection: how, through machine learning, computers can learn to analyze your body language, your voice intonations, and your facial expressions to figure out what you’re feeling. And they can do this with a very high degree of precision, as well as we can and in some cases better. And that’s already happening.
J Shulevitz: And pretty soon these researchers are going to figure out how to create simulations in artificially intelligent devices and produce emotionally appropriate responses, so you’ll have a kind of back and forth. Right now, Alexa and Google Assistant cannot read your emotions and cannot respond at the emotional level. Once they learn to do that, I think it’s gonna be unbelievably hard not to react to them as if they were really human, and form real emotional bonds.
Ed Ayers: On one hand, an emotionally intelligent voice assistant will certainly make our lives simpler, easier, and, as they say in Silicon Valley, frictionless. But it’s also, well, kinda creepy.
J Shulevitz: If you have a wish, and your assistant can almost anticipate that wish and fulfill it immediately, wouldn’t that be kinda dangerous? If you have an emotional bond with an entity that is actually there to sell you stuff, wouldn’t that be dangerous? If you had an emotional bond with an entity that was somehow related to the government and had power of persuasion over you, wouldn’t that be dangerous? So this frictionlessness, I think, has a downside. It also has an upside; I mean, it is, frankly, easier just to talk to an artificially intelligent entity than to tap on your computer. But I think the downside outweighs the upside.
Technophobia Lesson Set
This lesson set uses the Inquiry Design Model (IDM), a distinctive approach to creating curriculum and instructional materials that honors teachers’ knowledge and expertise, avoids overprescription, and focuses on the main elements of the instructional design process as envisioned in the Inquiry Arc of the College, Career, and Civic Life (C3) Framework for State Social Studies Standards (2013). Unique to the IDM is the blueprint, a one-page representation of the questions, tasks, and sources that define a curricular inquiry.
This lesson asks the compelling question “How do people react to rapid technological and economic change?” and instructs students to write, using specific historical evidence, a response to the following questions: How did Americans respond to the rapid changes of the Market Revolution? What changed, and what stayed the same?
In addition to the C3 Framework, it uses both AP US Thematic Standards and AP US Content Standards.