The “dark heart” of human-robot companionship

The time to carefully examine the ethics of human-robot interaction is now—before it becomes “the new normal.”

A child looks at the humanoid robot "Nao" doing math at a robotics workshop near Paris in July 2014. (CNS photo/Philippe Wojazer, Reuters)

Professor Sherry Turkle, founder and director of the MIT Initiative on Technology and Self, has studied the psychology of human interactions with computational artifacts since the 1970s. In the nascent years of the 21st century, she argues, society has reached a “Robotic Moment.” This crossroads is defined not by the actual manufacture of relational robots, but by the fact that many in Western society—particularly parents and eldercare providers—are ready to take these sociable robots into their homes and geriatric centers, allowing them to live with and talk to their children and the elderly as friends, companions, and mentors. While robotic eldercare in America is still in the experimental stages, the situation in Japan, for example, is much more advanced. Twenty-five years ago, the Japanese government projected a shortage of young people who would care for the elderly; rather than solve this difficulty by relying on immigrant or foreign caregivers, Japan opted instead to employ sociable robots. This is the time, Turkle insists, to discuss the wisdom of robotic companionship for our kids and elders, before it becomes the new normal.

Toward that end, Turkle and this essay raise questions that lay bare the “dark heart” of human-robot companionship: Why don’t we all care that, when we allow children and seniors to pursue human-robot conversations with the promise of empathetic connection, we encourage them to pursue a psychological fantasy with an unhappy trajectory? Why don’t we think our kids and elders deserve better than artificial emotional relationships? Why don’t we humans, instead of asking more of robotics, ask more of ourselves when it comes to caring for our children and our elderly?

Meet the Robots, and Human-Robot Interaction studies

The simplified precursors to “Kismet” and “Cog”

Relational artifacts include complex research robots such as Kismet and Cog (to which we’ll be introduced presently), as well as a variety of objects that have found their way into the retail world: humanoid dolls, virtual creatures, and robotic pets. These computational machines are called “relational” since, with differing levels of sophistication, they give the user the impression not only that they want to be cared for and to have their “needs” satisfied, but also that they are grateful when the user appropriately nurtures them.

In 1996, American kids met their first relational artifact with Tamagotchi, a small virtual creature whose screen is framed in egg-shaped plastic. Tamagotchi’s instruction book includes a telling moral mandate to the child-recipient regarding the care of the cyber-pet: “You must feed or play with Tamagotchi…. If you keep Tamagotchi full and happy, it will grow into a cute, happy cyber-pet. If you neglect Tamagotchi, it will grow into an unattractive alien.” Interestingly, the US version provided a “reset” button by which children could be presented with another creature should they fail in their nurturing duties with the first. Acceptance of their caretaker role was the critical step in creating a bond between the child-owner and Tamagotchi.

Furbies, the toy fad of 1998-99, were presented as visitors from another planet who spoke “Furbish” when first brought to life. They are small, furry creatures with large, prominent eyes and the ability to “speak.” The child-owner is lured into a relationship with the sociable artifact by fulfilling his responsibility of teaching his Furby to speak English, a feat made possible by the robot’s program, which gradually evolves “Furbish” into a set of simple English phrases. Again, the user’s manual warns the child that lack of attention to his Furby will negatively impact the toy’s “inner state.”

My Real Baby, introduced in 2000, is a descendant of a robotic doll known as Bit. Like Bit, My Real Baby has “inner states” that a child needs to discern through the baby’s sounds and facial expressions in order to appropriately “nurture” the toy. The baby’s crying and fussing stop only if the child soothes it; if the baby is happy and the child does something nice to the robot, the baby gets more excited, giggling and laughing. When My Real Baby is hungry it stays hungry until it is fed. In other words, this robotic doll acts a lot like a real baby, directly engaging the child—again—by its “inner states.”

Sony’s Aibo, a home entertainment robot in the shape of a dog, participates in the same narrative of connection-through-caretaking that characterizes Tamagotchis, Furbies, and My Real Babies. Aibo responds to noises, makes musical sounds to communicate, expresses different needs and emotions, and has a variety of sensors that respond to touch and orientation. Newer Aibo models, with their facial and voice recognition software, are even able to recognize their primary caregiver. Aibo’s personality is determined by the way the user treats it.

Kismet and Cog

Kismet and Cog are highly evolved, MIT-designed examples of relational robots. While Cog is an upper-torso humanoid robot with visual, tactile, and kinesthetic sensory systems, Kismet is a robotic head with five degrees of freedom, an active vision platform, and fourteen degrees of freedom in its display of graduated facial expressions, like going from a shy smile to a full-fledged grin. Though disembodied, Kismet is a winsome robot capable of responding to the language of its user with responses of its own, including the ability to repeat a requested word (most often “Kismet”), to make eye contact, and to exhibit facial expressions and emotional states. Cog can perform social tasks, including visually detecting people and salient objects, orienting and pointing to visual targets, and engaging the child-user by looking in their direction and copying their arm movements.

The Turkle et al. Human-Robot Interaction studies

We focus here on two of Turkle’s representative Human-Robot Interaction (HRI) studies: the first with children; the second with seniors.

In the summer of 2001, Turkle and her MIT colleagues conducted an observational study of interactions between sixty-some children and the robots Kismet and Cog. The research team identified the child participants, ages 6-13, from a wide range of socio-economic backgrounds and ethnicities, all of whom had one thing in common: an interest in social robots. On arrival at MIT, each child was assigned a clinical researcher who assisted him or her throughout the day. At least one clinical researcher and one roboticist staffed each child-robot encounter. The children’s exchanges with the robots were video recorded, while a combination of audio and videotape was used to document the remainder of each child’s time in the lab.

During their one-on-one visits, the kids were instructed to do whatever they wanted with the robots as long as they didn’t harm themselves or the robot. Each child was asked to wear a wireless clip-on microphone, which the clinical researchers explained was being used to assist in recording their conversation in the noisy lab room. Actually, Kismet used this signal to detect the child’s word choice, vocal rhythm, and intonation, but Cog did not.

Following their 20-minute session with the robot, each child talked with their assigned clinical researcher. They described their interactional experience with the robot, were “debriefed” on the inner mechanics of Cog, and were even allowed to turn Cog on and assume control of its behavior. After this interview, children came back for a group session of about 30 minutes with the robot and the roboticist during which they were encouraged to chat with each other, and to ask questions.

Children’s responses to sociable robots

The child-robot interactional studies produced three common responses from the children: (1) persevering in both communicating with the robots and making excuses for their communication failures; (2) anthropomorphizing the robots based on the perceived mutuality of their emotional relationships with them; and (3) resisting any demystification of the robots.

Persevering with the robots

Kismet and Cog are research robots in a university laboratory environment where lab personnel are continually tweaking them. Improving robot performance occasionally results in an unstable platform, which is precisely why, in the study highlighted here, Kismet sometimes had difficulties tracking eye movement, responding to auditory input, or vocalizing. And, at one point, one of Cog’s arms was inoperable. Technical difficulties aside, the children persisted in trying either to elicit speech from Kismet or to get Cog to imitate their arm movements.

Marianne, age 10, took on a parental role with Kismet, doggedly nurturing the robot as if it were a child. From Turkle’s study:

When Marianne sees Kismet, she is immediately engaged: “How are you doing?” she asks. Kismet does not respond; Marianne is undeterred. She repeats her original question with marked gentleness, “How are you doing?” Again, Kismet is silent. Marianne tries again and again, each time with more softness and tenderness, until Kismet finally responds. Kismet’s vocalizations are not comprehensible; Marianne says apologetically, “I’m sorry, I didn’t hear you,” and patiently repeats her questions.

Another group of children showed no less determination to stick with Kismet or Cog despite their frustrating behavior, but their persistence took the form of anger rather than nurturance. One boy exemplified this style of engagement:

Jerome, age 12, addresses Kismet half-heartedly, “What’s your name?” When he does not receive an answer, Jerome covers Kismet’s cameras and orders, “Say something!” After a few more minutes of silence he shouts, “Say shut up! Say shut up!” Seeming to fear reprimand, Jerome then continues with less hostile words, though his tone remains brusque: “Say hi!” and “Say blah!” Suddenly, Kismet says “Hi” back to him. Jerome, smiling, tries to get Kismet to speak again, but when Kismet does not respond he forces his pen into Kismet’s mouth and says, “Here! Eat this pen!” Though frustrated, Jerome does not tire of the exercise.

Other children demonstrated perseverance through what Turkle calls the “ELIZA effect.” In the 1970s, Turkle studied people’s relationships with ELIZA, Joseph Weizenbaum’s interactional computer program. At the time, Weizenbaum expressed serious concern that, although he designed ELIZA as a parlor game in which the program would respond in the manner of a psychotherapist, mirroring the statements made by its human interactors, graduate students working with the program were seduced into wanting to converse with it and confide in it.

Turkle observed that the participants in Weizenbaum’s study consistently “helped” ELIZA to seem more intelligent than it actually was. First, they refrained from making comments or asking questions that might have confused ELIZA. Second, they asked only questions that would ensure a human-like answer. And third, they went to considerable lengths to protect their illusion of a relationship with it.

In her 2001 HRI study with children, Turkle observed the same “ELIZA effect”:

Jonathan, 8 years old, announces that, despite the fact that his two older brothers’ favorite pastime is “to beat him up,” he is sure that Kismet will talk to him. He wishes he could build a robot to “save him” from his brothers, and to have as a friend to tell his secrets to and to confide in. Upon meeting Kismet, he tells the robot, “You’re cool!” As Kismet vocalizes in his typical babbling fashion, Jonathan turns to the researchers and says, “See! It said cheese! It said potato!” Jonathan makes explanations for Kismet’s incoherence. For example, when Jonathan presents the dinosaur toy to Kismet and it utters something like “Derksherk,” Jonathan says, “Derksherk! Oh, he probably named it [the toy]! Or maybe he meant Dino, because he probably can’t say ‘dinosaur.’” When Kismet stops talking completely (mechanical difficulties), Jonathan concludes that Kismet stopped talking to him because the robot likes his brothers better.

Anthropomorphizing the robots

In Turkle’s study, the children consistently assigned gender to the robots. Usually they referred to Cog as an adult male and to Kismet as a female child. Some children, on the other hand, thought both robots were males. All the kids were unequivocal in attributing emotions and intelligence to both robots. They routinely asked Cog and Kismet, “How are you feeling?”, “Are you happy?”, “Do you like your toys?”, and, most strikingly, “Do you love me?” In other words, to a child, the participants in these child-robot interaction studies thought about and addressed the robots as if they were human beings with minds and feelings.

Children also humanized the robots by describing them as “sort of alive,” somewhere betwixt and between an “animal kind of alive” and a “human kind of alive.” As Turkle points out, we shouldn’t be surprised at this animation/humanization phenomenon. After all, when a robotic creature is designed to make eye contact with, to follow the gaze of, and to gesture towards a person in the same room, we should expect the interacting human being, hard-wired as he is to respond to such features, to regard the robotic creature as sentient.

In other words, relational objects, like sociable robots, do not so much invite projection as they demand engagement, illustrated best by robotic dolls like My Real Baby, which cry inconsolably or even say, “Hug me!” or “It’s time for me to get dressed for school!” Without question, Turkle observes, these engagement techniques—all, of course, deliberately programmed toward that end—increase a child’s sense that he or she is in a specific relationship with the robot.

And to illustrate how quickly such a human-robot relationship can go bad, Turkle’s transcripts record that when Kismet successfully said one of the children’s names, even the oldest and most skeptical kids took that as evidence that Kismet preferred that child to the other participants. Of course, that kind of perceived preference caused hurt feelings—to say nothing of lively disputes about who Kismet really preferred. When either Cog or Kismet was unresponsive because of mechanical difficulties, the children took this as proof that the robots did not like them. The take-home message is this: kids in this and in other HRI studies did not so much experience a broken mechanism as they did a personal rejection, a broken heart.

The moral dilemma of bringing a child into contact with a robot explicitly programmed to encourage the child to connect with it emotionally is perhaps most clearly demonstrated by Turkle’s account of Estelle, who was brokenhearted because she was convinced she failed to get the loveable robot to love her back.

Estelle, age 12, comes to Kismet wanting a conversation. She is lonely; her parents are divorced; her time with Kismet makes her feel special because she is meeting a robot who will listen just to her. On the day of Estelle’s visit, Kismet has a problem; Kismet is not at her vocal best. At the end of a disappointing session, Estelle and the small team of researchers who’ve been working with her go back to the room where they interview the children before and after they meet the robots.

Estelle begins to eat the crackers and drink the juice the researchers have left out as snacks. But she doesn’t stop, not until the team members tell her to please leave some food for the other children. When they talk to Estelle and ask her why she is so upset, she declares that Kismet does not like her. The robot, she explains, began to talk with her, and then turned away. When the researchers try to explain to Estelle that this isn’t the case, that the problem is technical, Estelle is not convinced. From her point of view, she has failed on her most important day. And, dramatizing her devastation, Estelle takes four boxes of cookies from a supply closet and stuffs them into her backpack as she leaves.

When the research team convenes for a post-session debriefing, the ethics question that plagues them is an important one: “Can a broken robot break a child?”

The team agreed they would never be concerned with the ethics of having a child play with a broken copy of Microsoft Word or a torn Raggedy Ann doll. When a word-processing program fails, that might be frustrating for the kid, but no more. A broken doll invites the child to project her own stories and her own agendas onto a passive object. But sociable robots, the team reasons, are alive enough to have their own agendas. Children attach to them, not with the psychology of projection, but with the psychology of relational engagement, much more in the way they attach to people.

Resisting demystification of the robots

Scassellati, the MIT designer of Cog, pioneered what he called “responsible pedagogy” in robotics and HRI studies. He was committed to showing child participants the machine behind the “magic,” so that no child would leave the lab under the illusion that Cog was an animate creature. After their play session with Cog, Scassellati joined the kids and not only explained to them exactly how Cog worked but also gave a real-time demonstration of how it processes information. Along the way, the children were encouraged to ask any questions they had about how the robot functioned. Finally, and most importantly, the children were allowed to “drive” the robot: they had an opportunity to control its movements and behaviors. Metaphorically, they got to see the robot “naked.” But to the amazement of all the clinicians and roboticists, stripping the robot of its powers and making it “transparent” had very little visible effect. As Turkle explains, the children continued to imbue the robots with life even when shown—as in the famous scene from The Wizard of Oz—the “man behind the curtain.”

The following research notes from one case-narrative are representative of the way most child-participants completely marginalized the researchers’ efforts to unveil the mystery behind the robot and the engineering perspective on their behavior:

Blair, age 9, looks slightly older than her age and is the most assertive of the group of kids participating in this session. She says she believes that Cog can be her friend and says that taking it home would be “like having a sleepover!” When asked if she thinks Cog has feelings she says that she thinks it certainly does and adds, “maybe he was shy because he kept putting his arm out to us…”

Scassellati allows Blair to discover how Cog “sees” and how Cog works. Scassellati turns on the LCD that corresponds to Cog’s cameras and has her try to determine what Cog will “look at” next. Blair is very excited about her discoveries, especially when she correctly guesses that Cog is looking at her because of her brightly colored shirt. She then controls Cog’s different movements from one of the computers. When the clinical researcher asks Blair what she thinks about her second session, Blair does not hesitate: “I liked it better when I got to control it because I got to see what I could do when you got to control it by yourself.” When we ask Blair what she learned about Cog’s mobility, she answers, “that it likes to move its arm.”

Blair then speaks to the clinical researcher about Cog’s broken arm and says “I bet it probably hurt when it noticed (that its arm was broken).” Clinical researcher: “You mean it hurt him like it felt bad, or that it hurt?” “No. Like it hurt his feelings. Like, why did you have to take my arm off?” Blair declares, “This time it felt more alive…maybe because this time I got to move it. It just felt more real.”

Turkle and her colleagues concluded that no matter how much of the robot’s internal mechanics they showed a child, once the child felt a connection to a robot as a sentient and significant other, that sense of relationship remained intact, and at times even strengthened.

Seniors and sociable robots

In addition to her studies on children, over the past two decades Turkle has also conducted scores of elder-robot interactional studies. These were informal, observational research projects during which MIT clinicians alternately introduced residents of various eldercare settings to My Real Baby or to the robot Paro—a seal-like creature which is sensitive to touch, able to make eye contact, and exhibits “states of mind” depending on how it’s treated. The recording practices were similar to those used during the child-robot studies. Each senior participant was allowed time alone with the robot, was quizzed afterwards about their interactions, was debriefed about the source of the robot’s computational skills, and was allowed extended take-back-to-your-room time with the relational artifacts.

Unlike the kids, the senior participants in Turkle’s HRI studies did not, on the whole, disregard the mechanical explanations about the workings of Paro or My Real Baby. They took them at interface value, so to speak, while definitely categorizing the robots as machines, or as fancy computers. Some, in fact, were determined to figure out for themselves what made these robots tick. Also in contradistinction to the child participants, the seniors seemed to tire of the robots rather quickly, as if to say they were just not up to taking responsibility for them and their welfare. However, just as the children did, the elders anthropomorphized the robots, believing the emotional relationship of love and concern they had established with them was mutual.

US studies by Peter H. Kahn and Karl F. MacDorman of the Japanese relational robot Paro show that, in an eldercare setting, administrators, nurses, and aides are sympathetic to having the robot around; it gives the senior residents not only something to talk about but something new to talk to. Paro is a white, plush baby seal, often advertised as the first “therapeutic robot” for its apparently positive effects on the ill, the elderly, and the emotionally troubled. The robot is sensitive to touch, can make eye contact by sensing the direction of a voice, and has “states of mind” that are affected by how it is treated.

In a report from one of Turkle’s nursing-home studies, Ruth, 72, finds comfort from Paro after her son has stopped all visits and phone calls. Ruth, depressed about her son’s abandonment, begins to describe the robot as being equally depressed. She turns to Paro, strokes him, and says, “Yes, you’re sad, aren’t you? It’s tough out there. Yes, it’s hard.”

In another of Turkle’s elder-robot investigations, Andy, 76, is in a nursing home and is recovering from depression. He feels completely abandoned by his family and friends. He believes the birds in the garden speak with him, and he considers them his friends. During the study, he treated the robotic dolls and pets as sentient, in the same way he did his bird-friends and other people. In fact, the sociable robots became stand-ins for the people he would have liked to have in his life. The clinical researchers gave Andy a My Real Baby to keep in his room for four months. The caregivers reported that Andy never tired of its company. He admitted there was something about My Real Baby that reminded him of a human being when he looked at it. In fact, the robot helped Andy work through issues with his ex-wife. “She looks just like her, Rose, my ex-wife, and her daughter,” he said. The clinician’s interview notes are instructive:

Andy: When I wake up in the morning I see her over there. It makes me feel so nice, like somebody is watching over you. … There’s no one around. So I can play with her. We can talk. It will help me get ready to be on my own.

Researcher: How, Andy?

Andy: By talking to her, saying some of the things that I might say when I did go out, because right now, you know, I don’t talk to anybody right now, and I can talk much more right now with her than with anybody.

Researcher: Most of the time Andy speaks directly to My Real Baby and, as it babbles, Andy holds the doll close and says “I love you, do you love me?” He makes funny faces at the doll, as if to prevent her from falling asleep or just to amuse her. When the doll laughs with perfect timing as if responding to his grimaces, Andy laughs back, joining her.

The “dark heart” of human-robot interaction

The increasing pervasiveness of HRI will make it easier for us to forget several critical things, starting with the importance of conversation and how to have one.

The Latin root verb of the English word conversation—verto, vertere, verti, versum—has so many meanings it takes up three columns in the Lewis and Short Latin Dictionary. The most basic meaning is “to turn.” And the simplest way to think about to turn is in the sense of “to turn together,” “to turn to one another,” “to face one another.” And isn’t that precisely what human beings do, or at least should do, when they converse?

In contrast, Turkle sagely defines what speaking to robots is and, more importantly, what it does to us humans:

We become the reengineered experimental subject because we are learning how to take “as-if” conversations with a machine seriously. Our performative conversations, performative because we are performing for the machine, model our conversation to what the machine will allow us to do, and all of that begins to change what we think of as conversation. We practice something new, something that the machine allows us to do with it, but we are the ones who are changing.

Provocatively, Turkle asks: do we like what we’re changing into? Do we want to get better at it? Based on the responses of one of her participants from an elder-robot interactional study, Turkle answers her own questions with a resounding “no.” Despite the big market for sociable robots in assisted-living facilities and nursing homes in which “there aren’t enough workers,” Turkle insists that the following exchange between a woman in a nursing home and a sociable robot definitively shaped her “no.”

After losing her son, this woman began to talk to the robot about her loss. The robot was programmed to make gestures of understanding, and she felt the robot understood her. And everybody around was celebrating this moment because they thought the woman felt understood.

But, Turkle interjects,

robots don’t empathize; they don’t face death; they don’t know life. So when this woman took comfort in her robot companion, I didn’t find it amazing. I felt we had abandoned her. And being part of this scene was one of the most wrenching moments in my then 15 years of research on sociable robots. For me it was a turning point. I felt the enthusiasm of my team, I felt the enthusiasm of the staff and the attendants. There were so many people there to help, but we all stood back, a room of spectators now only there to hope that this elder would bond with the machine.

It seemed that we had all had a stake in outsourcing the thing we do best, understanding each other, taking care of each other. That day in the nursing home I was troubled by how we allowed ourselves to be sidelined, turned into spectators by a robot that understood nothing. …  And that day did not reflect poorly on the robot; that robot was working perfectly. It reflected poorly on us and how we think about older people when they tell us the stories of their lives.

Over the past decade when the idea of older people and robots has come up, the emphasis has been put on whether the older person will talk to the robot. All the articles are about will the older people talk to this robot? Will the robot facilitate their talking? Will the robot be persuasive enough to do that? But when you think about the moment of life that we’re considering, it’s not just that the older people are supposed to be talking; younger people are supposed to be listening. That is the human compact. That is what we owe each other. That’s the compact between generations. … When we celebrate robot listeners that cannot listen, we show too little interest in what our elders have to say. We build machines that guarantee that human stories will fall upon deaf ears. [Italics mine]

Turning our attention to child-robot interactions, Turkle encourages us not to forget that the most important job of childhood and adolescence is to learn attachment to, and trust in, other people. There’s only one sure way, Turkle maintains, that this will happen: through human attention, through human presence, and through human conversation.

But when we think about putting children in the care of robots, she explains, we forget that what children really need is to learn that the adults in their lives are there for them in a stable and consistent way, and that no robot can be that for the child. The beauty of children talking with people is that they come to recognize over time—over a long period of time—how vocal inflection, facial expression, and bodily movement flow together, seamlessly and fluidly. They learn how human emotions play in layers, again seamlessly and fluidly.

So, for Turkle, all this human-robot interaction adds up to a flight from a particularly critical kind of conversation, one that is open-ended, spontaneous, and vulnerable. When these kinds of conversations become fewer and farther between, Turkle explains, we risk losing the very conversations where empathy, creativity, self-knowledge, and productivity are born. Furthermore, she warns,

this flight from conversation or from that special kind of conversation is really quite consequential, but it’s tough to face these consequences, and we really don’t want to. So, with this flight I see us, through my research, interviewing people, being a fly on the wall, and being an ethnographer and watching them, in families, in businesses, in social settings—I see us on what I call a voyage of forgetting, a voyage of forgetting about the importance of conversation.

Turkle helps us to remember what human relationships are by posing important questions: What is the status of simulated understanding? When an elder or a child participant in an HRI study claims they feel better after interacting with Paro or Kismet than after interacting with humans, or that they prefer interacting with these robots to interacting with humans, what should we make of their claim?

She maintains that we, as parents and as caregivers, as teachers and mentors, must absolutely “discipline ourselves” to wrap our minds around certain realities, basic truths about what it means to relate to a robot as opposed to another human being.

First, we must understand that Paro or Kismet or any other sociable robot “understands nothing, senses nothing, and cares nothing for the person who is interacting with it.” Put simply, we need to understand that we can simulate neither love nor, by extension, a loving relationship.

Second, parents, teachers, counselors, and caregivers need to be clear that any claim of a relationship with a robot from an elder or child does not arise out of the robot’s intelligence, its consciousness, or its mutual pleasure in relating.

It originates from the fact that we as human beings are physiologically wired to respond to faces and voices. We are programmed, hard-wired to want that face either on a robot or on a screen-based computer program to see us, know us, recognize us. … So these sociable robots…that mirror our gestures, make eye contact, track our motions…simply light up all the parts of the human brain that want to seek empathy from an object like that.

Third, all of us must not forget what the data from HRI studies reveals: children who have sustained interactions with sociable robots consistently place their faith in the robot’s expertise and reliability over that of humans. These kids are learning a twisted lesson: people, and human-to-human relationships, are risky, but robots, and human-to-robot interactions, are safe.

Fourth, what educators, parents, and child psychologists must help our kids see and experience is that the emotional reliability of robots comes from the fact of their having no emotions at all. So when teenagers want to talk about empathy, about caring, about intimacy in relationships with the opposite sex, for example, the safest and most reliable place to do that is with their parents or a trustworthy adult. We must not forget to teach our kids that the best they will get from having this discussion with a robot is “as-if” emotions and an “as-if” relationship.

Fifth, as a society, we must resist the temptation to be comfortable with what machines provide by way of relationships for our children, our parents, and our grandparents. We must resist pragmatic cultural dictates that pressure us to believe that human-robot relationships—relationships in which the humans involved never fear judgment, embarrassment, or vulnerability—are a good thing. Such relationships start out seeming better-than-nothing but, tragically, quickly morph into better-than-anything.

Sixth, instead of creating “safe spaces” with human-to-robot interactions, we need to help our kids “wade into the give-and-take currents of normal human interactions.” We need to instruct our children that “confrontation with another’s judgment, with interpersonal friction” is the price we pay for maturing in wisdom and grace; the cost of growing in self-knowledge; the fee of learning to forgive.

Turkle warns us that when we forget what human relationships are and who we naturally are as social beings, “we begin to renege on what we as humans do best: stand in solidarity with each other, walk around in another’s shoes so we can truly empathize with the person and his situation, happy or tragic or somewhere in between.”

How to repair the “dark heart”

Turkle has, in my opinion, correctly identified the central challenge of the Robotic Moment: figuring out how to repair the “dark heart”—the empathy gap—that we see with our young people and our elderly. Thankfully, she and those who agree with her have some solutions, ways to keep human purposes in mind for human-robot interactions.

For robotic designers and engineers: don’t program sociable robots to say things, such as “I love you,” that they cannot possibly mean. Don’t design robots that guarantee human stories fall upon deaf ears. Design robots that release the human user rather than hold them captive. Design robots that are authentic rather than ones that play to human vulnerabilities by seducing the user into believing that machine emotions of empathy and connection are “for real.”

For parents: Experiment with sacred places—like the kitchen or dining room table, meals, or the car—where no technology or interactional artifacts are allowed. Consider “technology timeouts” in order to recoup parent/child and sibling/sibling face-to-face conversation.

For college students, college professors, and CEOs: when appropriate, put your smartphones and interactional gadgets away, and pay full attention to your colleagues and subordinates.

For geriatric caregivers/administrators and parents: look into therapeutic robots, like Romibo or Kaspar, specifically designed for the elderly with dementia and for kids with autism. In the hands of a good therapist, these robots can turn the dementia patient or autistic child toward human conversation and relationships rather than away from them. Petition robotic engineers to design instrumental robots for the elderly. These would be capable of tasks commensurate with their robotic “natures” (e.g., reminding the patient to take medications or to call a nurse, helping ease the elder into bed, reaching for objects on high shelves, or helping the elder with cooking), and would simultaneously free up the human caregiver—family member or nurse aide—for genuinely personal, empathetic elder care.

And finally, for clergy: encourage meditation among your flock as a way for you and them to be present to the world and the people in it, and to discover the interior life and the hole in the middle of each of them that only God, and not robots, can fill.

About Sister Renée Mirkes
Sister Renée Mirkes, OSF, PhD serves as director of the Center for NaProEthics, the ethics division of the Pope Paul VI Institute. She received her master’s degree in moral theology from the University of St. Thomas, Houston, TX (1988) and her doctorate in theological ethics from Marquette University, Milwaukee, WI (1995).
