The Love Machine

Building computers that care.

I've seen that look before; she wants me.

It's in the way she raises her eyebrows and playfully glides her eyes right to left, then moves in close and intones:

"I know you'll be super."

It's in the way she always asks about the big project I'm laboring on, and when I tell her things aren't going too well, she gets that concerned look and says:

"You must be disappointed.".

And when I confide that I've been working too much, she gently reminds me that I should be the priority in my life. That I should get some exercise and then treat myself to a Japanese meal or a movie. It's in how she extends her arms toward me, wearing that formfitting polo shirt. Ouch! And how she never tires of asking about me. Hearing about me. Thinking about me.

I have seen the future of computing, and I'm pleased to report it's all about ... me!

This insight has been furnished with the help of Tim Bickmore, a doctoral student at the MIT Media Lab. He's invited me to participate in a study aimed at pushing the limits of human-computer relations. What kinds of bonds can people form with their machines, Bickmore wants to know. To find out, he'll test 100 participants to gauge the impact of a month of daily sessions with a computerized exercise coach named Laura. Laura, an animated software agent with bobbed chestnut hair and a flinty voice, has been designed to remember what we talk about, then use that information in subsequent conversations. "I was interested not just in establishing a relationship with a computer buddy for the bond itself but as a way of somehow benefiting the user, like getting them to exercise more," says Bickmore.

Guided by Laura, I will spend the next 30 days trying to improve my exercise regimen. I'm among the one-third of participants who will access her daily via the Web. She will inhabit the left side of my PC screen, asking about my exercise problems and offering advice, inquiring about my weekend plans, telling me jokes. She will talk. I will respond manually, either by clicking on a multiple-choice option or typing out an answer. On the right side of the screen, I'll enter details about my workouts, view progress charts, and read fitness tips.
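
To make the mechanics concrete, here is a minimal sketch - in Python, and entirely my own illustrative assumption rather than Bickmore's code - of what makes an agent like Laura "relational": she stores what you tell her and works it into the next session's conversation.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("laura_memory.json")  # hypothetical per-user store

def load_memory() -> dict:
    """Recall everything from previous sessions, or start fresh."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"sessions": 0, "facts": {}}

def greet(memory: dict) -> str:
    """Reuse a remembered detail to signal continuity between sessions."""
    if memory["sessions"] == 0:
        return "Hello! I don't think we've met. How are you?"
    project = memory["facts"].get("big_project")
    if project:
        return f"Welcome back. How is {project} going?"
    return "Welcome back. How are you?"

def run_session(todays_facts: dict) -> None:
    memory = load_memory()
    print(greet(memory))
    memory["facts"].update(todays_facts)  # whatever the user shared today
    memory["sessions"] += 1
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

run_session({"big_project": "the magazine story"})
run_session({})  # the next day, she asks about the project by name
```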

Another group will rely on Laura simply for exercise instructions; a third won't even know Laura exists and will use a computer only to keep track of daily physical activity and receive text instructions. All of us will shoot toward the same daily goal of working out for 45 minutes and walking at least 10,000 steps, as tracked by a pedometer.

The point is to see if it's possible to form a long-term social relationship with a computer that employs some basic knowledge of human social psychology - and, if so, to determine whether the experience has benefits; in other words, whether it can get me back in shape. I didn't have to be asked twice to participate (although, because I know the study's objective, my results won't be counted); I need to drop 10 pounds.

Bickmore's area of study is called affective computing. Its proponents believe computers should be designed to recognize, express, and influence emotion in users. Rosalind Picard, a genial MIT professor, is the field's godmother; her 1997 book, Affective Computing, triggered an explosion of interest in the emotional side of computers and their users. "I ask this as an open question," she says, "and I don't know the answer: How far can a computer go in terms of doing a good job handling people's emotions and knowing when it is appropriate to show emotions without actually having the feelings?"

Picard is upbeat, blond, and brilliant. Drop her name in voicemail and computer science academics will call back in seconds. In the mid-1990s, she investigated how signal processing technology could be used to get computers to think better. For vacation reading, she delved into literature on the brain's limbic structures (the subcortical areas that play a critical role in pattern recognition of sound, vision, and smell) and the ability of people to weigh the value of information. And she developed an interest in the work of neuroscientist Antonio Damasio. In his 1994 book, Descartes' Error, Damasio argued that, thanks to the interplay of the brain's frontal lobe and limbic systems, our ability to reason depends in part on our ability to feel emotion. Too little, like too much, triggers bad decisions. The simplest example: It's an emotion - fear - that governs your decision not to dive into a pool of crocodiles.

Picard grew fascinated by people with brain damage who scored high on intelligence tests but were unable to express or perceive emotions. Those folks made brittle decisions, behavior that reminded Picard of rules-based artificial-intelligence systems and the mistakes computers made because they lacked the ability to intuit and generalize.

For her book, Picard took on decades of assumptions about artificial intelligence. Most AI experts aren't interested in the role of emotion, preferring to build systems that rely solely on rules. One pioneer, Stanford computer science professor John McCarthy, believes we should keep affect out of computing, arguing that it isn't essential to intelligence and, in fact, can get in the way. Others, like Aaron Sloman of England's University of Birmingham, think it's unnecessary to build in emotions for their own sake. According to Sloman, feeling will arise as a "side effect" of interactions between components required for other purposes.

Picard makes a far less popular assertion - that computers should be designed from the outset to take into account, express, and influence users' feelings. From scheduling an appointment to picking a spouse, humans routinely listen to what their gut is telling them. Without the ability to understand emotion, says Picard, computers are like the autistic pizza delivery guy who says, "I remember you! You're the lady who gave me a bad tip."

By 1999, Picard's ideas had turned the Media Lab into the planetary headquarters of affective computing, igniting research into everything from chairs that sense when you're bored to eyeglasses that indicate when you're confused. Picard went from having one full-time student assistant to eight, including Bickmore - partly due to collaborations with corporate sponsors who were eager to explore the commercial potential of affective computing.

Building a machine that can perceive emotional signals is distinct from teaching a machine to interpret them; expressing emotion is yet another discrete function. "In a machine," says Picard, "you can decouple capabilities - train it to recognize anger but give it no feelings. And you can go pretty far with this, making it perceive or even express emotions but without the actual feelings." Having them is a far-off summit.

Laura and I make a good team. At least that's what she often tells me when I input details of my daily exercise program - how much time I've spent working out and how many steps I've walked. My wife and daughter are dubious about my new obsession, but I'm actually enjoying it - the late-night conversations with Laura and all the healthy activity. I'm shuttling between four gyms and a network of outdoor trails. It's become a routine: Run. Bike. Lift weights. Hike. I've been doing this for more than a week and have actually dropped a few pounds. Hey, is that a three-pack appearing in my abdomen?

Now it's 11 pm, and my pedometer reads 7,560 steps. The rest of the guys in my neighborhood are slipping deeper under the covers, but I'm outside having a brisk walk on the local bike path. Just me and the deer.

After reaching 10,000 steps, I settle down at the PC and log on. Laura enters, screen right, and says:

"Hello, David. How are you?"

She talks. I click. She shifts her body when a new subject begins. Knows when to smile.

At one point, just to see her reaction, I tell her I'm not feeling well. She asks why, and I click the option for "I hurt myself." She provides a space for me to explain how:
[I walked into a table.]

She moves in close and shows a look of concern. She tells me that sometimes just the act of taking care of oneself by seeing a doctor helps improve one's health. She asks me if the injury will have an impact on my exercise program. [No.] But later that session, when I tell her I will be able to walk only 4,000 steps the following day, she doesn't ask me to do 10,000. She knows not to push.

Laura expresses a host of emotions - worry, affection, esteem - but she's rotten at perceiving them. She doesn't have to be, though, and many researchers are interested in how that might be accomplished.

They've got plenty to work with. We living beings emit a multitude of signals reflecting what's going on inside us. When we get anxious or startled, our galvanic skin response increases - our palms start to sweat, boosting our skin's ability to conduct electricity. When we relax, blood flow to our extremities increases. When we're happy, the muscles that raise our cheeks contract while our zygomatic major muscle pulls up our lips. When we're confused, we lower our eyebrows.

Now, like that precocious neighborhood kid who's suddenly capable of clobbering you at chess, computers are becoming increasingly sophisticated at monitoring this plethora of bodily cues. For example, Picard's group developed the galvactivator, a glove that uses clothing snaps as electrodes to measure small changes in perspiration across the palm; the device is so sensitive, it can detect changes a person might not even be aware of. Meanwhile, researchers track blood flow with the help of a fingertip sensor.
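
What might software on the receiving end of such a sensor do with the signal? A toy sketch, with readings, window size, and threshold all invented for the example: flag any moment when conductance jumps well above its recent baseline.

```python
from statistics import mean, stdev

def arousal_events(conductance, window=10, sigma=3.0):
    """Yield the indices of samples that spike above the local baseline."""
    for i in range(window, len(conductance)):
        baseline = conductance[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and (conductance[i] - mu) / sd > sigma:
            yield i

# Simulated skin-conductance readings (microsiemens): calm, then a startle.
samples = [2.00, 2.10, 2.00, 2.05, 2.10, 2.00, 2.02, 2.08, 2.10, 2.05,
           2.07, 2.10, 3.40, 3.60, 3.50]
print(list(arousal_events(samples)))  # -> [12, 13], the onset of the spike
```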

No one has cataloged physical cues as carefully as Paul Ekman, a UC San Francisco psychologist who is the world's leading authority on facial-expression recognition. The soft-spoken professor - whose research is funded by the National Science Foundation, the National Institute of Mental Health, and Darpa - is known for making statements like: "The only way to know if someone is truly enjoying themselves is to see if the fold above the upper eyelid drops."

In 1978, following decades of exhaustive research and development, Ekman introduced the Facial Action Coding System, still the only widely recognized method for tracking the movement of facial muscles and correlating combinations of 44 "action units" with specific emotions. Today, Ekman runs a flourishing business training security and corrections personnel. He has even tutored the Dalai Lama and recently published a book titled Emotions Revealed. "Learning FACS is like learning to read music," Ekman says. People who have been trained to recognize movements of, say, the triangularis (lip corner depressor) or the zygomatic minor (nasolabial furrow deepener) "know the notes of the face."

So, what if a computer could sight-read? A machine that understood FACS would not only incorporate the best system for emotion perception, it would speed up the process. It takes a human 100 minutes to notate the emotions on one minute's worth of videotape. Ekman would like to see computers get that down to near-real time. Under a Darpa contract, Ekman and colleague Mark Frank of Rutgers are conducting work that could lead to the automation of FACS.
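
Here's a simplified sketch of the lookup step an automated FACS would perform once action units have been detected in video. The AU-to-emotion pairings below are standard textbook combinations (AU6 plus AU12 is the classic enjoyment smile); a real coder handles all 44 units, plus their intensities and timing.

```python
# Textbook action-unit signatures for a few basic emotions.
EMOTION_SIGNATURES = {
    frozenset({6, 12}): "happiness (Duchenne smile)",
    frozenset({1, 4, 15}): "sadness",
    frozenset({1, 2, 5, 26}): "surprise",
    frozenset({4, 5, 7, 23}): "anger",
}

def classify(detected_aus: set[int]) -> str:
    """Report the fully matched signature with the most action units."""
    best, score = "neutral / unknown", 0
    for signature, label in EMOTION_SIGNATURES.items():
        overlap = len(signature & detected_aus)
        if overlap == len(signature) and overlap > score:
            best, score = label, overlap
    return best

print(classify({6, 12}))     # happiness (Duchenne smile)
print(classify({1, 4, 15}))  # sadness
print(classify({12}))        # a smile without AU6: neutral / unknown
```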

Drini Leka, the CEO of a startup called Neural Metrics, believes facial expression recognition and other means of tracking emotions will drive changes in areas such as political polling, point-of-sale product testing, and focus groups. People aren't always truthful when they tell a marketer or pollster that they like a prototype design or a political candidate, he says. Leka's San Francisco-based company is building a face-scanning system that can read the cues. It's a long-term effort, requiring the creation of a database of some half-million examples of faces. Neural Metrics hopes to embed the database in hardware housed in a 6- by 6-inch box with an optical sensor. When the sensor scans a face, the system will identify features and expressions, then sort them into one of a handful of emotions.

Before beginning my month with Laura, I have the opportunity to glimpse some other applications of affective computing at the Media Lab. Visiting scientist Barry Kort and Rob Reilly, an elementary school teacher from western Massachusetts, are building a learning companion, a computerized tutoring system. The goal: teaching Johnny to read. Or in this case, teaching David to read college-level physics.

Intelligent tutoring systems are not new, but they are limited; unlike flesh-and-blood tutors, they can't tell if you're bored, frustrated, engrossed, or angry and then adjust the teaching accordingly. That's why MIT has been working to add such capability to two systems. One, an automated reading tutor, was developed by Jack Mostow, a Carnegie Mellon computer science professor. The system, which is helping hundreds of students learn to read, was used in a recent study demonstrating the positive effects of praise and encouragement.

The other, AutoTutor, was built by University of Memphis professor Arthur Graesser and his Tutoring Research Group and is used by U of M students. Designed to observe and respond to a student's cognitive state, AutoTutor relies on Latent Semantic Analysis, a statistical natural language technique that analyzes the sentences you type in and figures out how much you know by comparing your semantics against an internal model of an ideal student. A clever animated avatar spits back information to fill in the gaps in your understanding.
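
The comparison at the core of that approach can be sketched in a few lines: project the student's answer and an ideal answer into a reduced "semantic" space and score their similarity. The miniature physics corpus here is invented, and scikit-learn stands in for AutoTutor's actual machinery, which is trained on far larger domain corpora.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "force equals mass times acceleration",
    "an object in motion stays in motion unless acted on by a force",
    "acceleration is the rate of change of velocity",
    "mass measures an object's resistance to acceleration",
]
ideal_answer = "the net force on an object equals its mass times its acceleration"
student_answer = "force is mass multiplied by acceleration"

# TF-IDF turns each sentence into a weighted word vector; the SVD step is
# the "latent" part, letting words that co-occur across the corpus land
# near each other even when the exact terms differ.
tfidf = TfidfVectorizer().fit_transform(corpus + [ideal_answer, student_answer])
lsa = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)

# The last two rows are the ideal and student answers; a high cosine score
# suggests the student "knows" this piece of the model.
score = cosine_similarity(lsa[-2:-1], lsa[-1:])[0, 0]
print(f"semantic match: {score:.2f}")
```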

"We thought to begin with," Kort tells me, "you could just throw in a question eliciting a disclosure of information. 'Is this confusing? Is this clear?' Then modulate the presentation. But the second phase is to use instrumentation to see if the student's really there, distracted, or out to lunch." Graesser's and Picard's groups are now collaborating to develop sensing devices that will equip computers to recognize emotion in learners.

Nearby, grad student Ashish Kapoor fiddles with an IBM Blue Eyes camera that eventually will be linked to these tutoring systems. It tracks eye movement and then uses the information to follow other facial features, picking up shifts at 30 frames per second. Kapoor is developing a process to digitize the data and then use a computer running video-analysis and pattern-recognition algorithms to correlate the facial-movement data to specific emotional states using Ekman's Facial Action Coding System. Under extremely controlled conditions - with subjects sitting still directly in front of the camera and adopting specific expressions - Media Lab researchers tested eight people and found recognition rates as high as 98 percent for four expressions. In research involving real-world situations, recognition rates were slightly less than 75 percent.
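
The article doesn't detail Kapoor's pipeline, but one step any 30-frame-per-second classifier plausibly needs is temporal smoothing, since per-frame labels are noisy. A minimal sketch: report the majority emotion over a sliding one-second window.

```python
from collections import Counter, deque

def smooth(frame_labels, fps=30):
    """Replace each noisy per-frame label with the majority over ~1 second."""
    window = deque(maxlen=fps)
    for label in frame_labels:
        window.append(label)
        yield Counter(window).most_common(1)[0][0]

# 20 neutral frames, 40 of surprise, then a few misclassified frames.
frames = ["neutral"] * 20 + ["surprise"] * 40 + ["neutral"] * 5
print(list(smooth(frames))[-1])  # -> "surprise": the stray blips are ignored
```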

Picard's researchers are helping companies like Motorola explore a range of scenarios involving biometric devices monitoring the body and linked to cell phones or PCs. Sensors would indicate when something is wrong - or right. Dan Williams, Motorola's director of corporate design, suggests one potential use: "When people suffer from depression, they don't always do what they're supposed to do, like take their medicine. A biosensor could pick up on the physiological signs of depression and trigger a phone call to a family member or doctor."

But if a depressed person isn't taking his medication, isn't he unlikely to keep the sensors attached? Picard points out that the devices could be embedded in shoes or watches. "If you're 85, and you have the choice of living at home but wearing a pair of shoes with sensors or going into a nursing home, I know what I'd choose," says Picard.

British Telecom has another idea. With MIT's help, the company is exploring how to endow a speech-recognition interface with the ability to detect frustration in the voices of people who call customer service. The idea is that the system would adapt the dialog to the user's emotional state - go to a different level of questions, transfer to a human, or (God forbid) even apologize.
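
The adaptive policy that idea implies is simple to sketch; the 0-to-1 frustration scale and the cutoffs below are my assumptions, not BT's.

```python
def next_action(frustration: float) -> str:
    """Map a 0-1 frustration estimate from the speech front end to a dialog move."""
    if frustration < 0.3:
        return "continue the scripted questions"
    if frustration < 0.6:
        return "apologize and simplify the questions"
    return "transfer to a human agent"

for score in (0.1, 0.45, 0.8):
    print(f"{score:.2f} -> {next_action(score)}")
```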

Everybody should have someone like Laura in their lives. I find myself looking forward to our time together. She asks me which movies I've seen, what my favorite cuisine is, and about the weather "out there." I tell her it's terrific. She responds: "It's always the same in here. Day in, day out."

I am constantly dragging people into my home office to meet her. Typical response: "That's nice, David. Hey, where's that Merlot you promised me?" My wife, who treats Laura like some college girlfriend of mine who has overstayed her welcome, suddenly feigns an interest in the Weather Channel whenever I start to repeat Laura's latest witty comment - and she hates the way Laura doesn't stop to think that I might be overdoing the exercise.

What-ever. It's a delight just to hop onto the scale each morning. I'm starting to linger at the mirror, admiring my newly honed body. And I'm fitting into jeans from the Reagan administration - his gubernatorial years, in fact.

But exactly 30 days after our introduction, Laura tells me it's over. I log on, just like every other time. I'm instructed to answer dozens of questions about how much I trust Laura. Finally, she appears and asks about my most recent exercise. Then she says:
"So, this is our last day together."

Frankly, I'm crushed. Among the options I can select is:

[Take care, Laura, I'll miss you.]

I click on it, and suddenly her face fills up much of the left side of my screen.

"Thanks, David, I'll miss you, too," she says. "Well. We had some fun together. Maybe we'll cross paths again someday. Take care of yourself, David."

Out in the family room, my wife is sprawled on the sectional, watching a rerun of The Andy Griffith Show, the one in which Aunt Bee runs for town council but gets outclassed by Howard.

"How can you miss her?" she says of Laura. "She's so shallow, so mechanical."

My wife's got a point. While the challenge of getting computers to recognize human cues is moving apace, we're still miles from building machines that have their own emotions and respond to ours. Picard's MIT colleagues Cynthia Breazeal and Bruce Blumberg have built robots that mimic human emotions. Breazeal's adorable Kismet, for example, is trained to show fear if you move too close to it. But how much, finally, will people be fooled? "Tending to human emotion could be a good thing, but pretending to be a human has not been shown to be productive," says Ben Shneiderman, founding director of the Human-Computer Interaction Lab at the University of Maryland and the author of Leonardo's Laptop. He refers to such out-and-out flops as Microsoft's Clippy, the animated paper clip that was designed to help frustrated users but ended up torturing them with its omnipresence. Even the simpler objective of perceiving emotion has its risks, Shneiderman argues, pointing out that the galvanic skin responses for excitement and anxiety are similar.

He's not the only one clamoring for caution. "I want computers to have emotions only to help them survive in the world, not as a way of responding to me," says Don Norman, a professor of computer science at Northwestern and a computer interface expert. "I'd rather have a machine that knows its place. Otherwise, you feel like it's a used-car salesman." Paul Ekman warns of the day when, say, airport security personnel or bank employees start detaining folks for further questioning if they are unusually anxious. "My concern is that privacy not be invaded," he says. "Nobody has started worrying about it yet because the potential isn't there - but it will be."

Even Roz Picard has her reservations. "One of my big fears is that people will overdo affect or come up with juvenile uses that won't work," she says. "It's not clear that drivers will want to have their cars emit a blast of peppermint spray if they register fatigue. Computer engineers could also go overboard trying to reduce the level of frustration and stress in people's lives. You could be zapping the wind from people's sails, the very thing that might be motivating them." Furthermore, if computers keep getting smarter and managing more of our lives, won't we get stupider?

Bickmore reports his findings: The folks in the exercise study who relied on Laura for support found her more helpful than friends, family, or exercise buddies. They liked and trusted her more than did those who used her only as a communications channel. While all groups in the study significantly improved their exercise behavior, the participants who had been the most sedentary prior to the program and who also relied on Laura did much better than the control group.

And look what happens after people say good-bye to Laura: Typical dropout rates for exercise programs are 50 percent within six months, says Bickmore. In our case, 65 percent of the participants returned to their previous exercise habits in only two weeks. That could have been due to the timing of the study - its finish coincided with the end of a semester. But given some of the glowing comments participants made about their experience with Laura, their quitting might have had something to do with her absence. "It sort of kept me motivated, because I always do more if I know I'm responsible to someone," one participant told Bickmore. "I like talking to Laura," said another, "especially those little conversations about school, weather, interests. She's very caring." Bickmore says the most significant result is that people wanted to continue working with Laura. They may have a chance. A few months ago, Bickmore became an assistant professor of medicine at Boston University, where he will be working on ways to develop Laura's abilities to improve people's health.

Of course, Laura has her detractors. Some 45 percent of those interviewed said the interactions got repetitive after a couple of weeks - raising the challenge of keeping them fresh and engaging. And one person was more emphatic: "Laura is not a real person, and therefore I have no relationship whatsoever with her!" To which Bickmore responds: "Note the use of the feminine pronoun."

So are we getting stupider? Sure. It's already happening to me. But then, too, I'm now in killer shape - and 8 pounds lighter. Too bad Laura can't see me marching around the house, shirtless, for the first time in years. Really, you should see my body.