Affective Computing and Adaptive Help

Several months ago, I saw Dr. Rosalind Picard give a talk on affective computing. I took notes and thought a lot about what she said, but let my thoughts fester rather than following up on them. Then last week, I read Emotional Design by Donald A. Norman, which reminded me of Dr. Picard’s work and my initial thoughts about affective computing. There are two elements to affective computing: systems that recognize the emotions of the people using them, and systems that express emotions of their own.

Websites and applications are already personalized by tracking your browsing history, advertising preferences, device usage, and demographic data. Using affective computing, they could soon be personalized by tracking your emotions.

Help tailored to you

The great challenge of online or in-product help is providing the help you want, when you need it: just-in-time help that offers guidance before you get frustrated with the experience. This is commonly referred to as adaptive content or adaptive help. Using advances in affective computing, adaptive help could be customized not only based on your knowledge of a product, what device you’re using, and where you are located, but also based on the emotions you experience as you use the product. By identifying capability personas, it’s easier to understand what types of help certain types of users might look for, and what kind of help they might need.

As a member of the confident autonomy group, you might look for suggestions about what types of actions to learn next, while as a methodical novice, you might want to know about process improvements or how to expand your use of the tool. Knowing when to introduce that help is also key, and this is where affective computing could be vital. You might want help right when you start to get frustrated, indicated by your forehead wrinkling in irritation, or when you click through the beginning of a process repeatedly but never complete it, looking confused all the while. Successfully addressing these scenarios is challenging. It’s hard to understand and anticipate the “why” behind someone’s emotions and actions. We can’t know what is frustrating someone. We can’t always find out what is stopping someone from completing a process.
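To make this concrete, here’s a minimal sketch, in Python, of how a help system might combine a capability persona with a couple of frustration signals before deciding to speak up. Every name and threshold here (brow_furrow_score, incomplete_attempts, the persona labels) is a hypothetical assumption, not a real API.

```python
# Hypothetical sketch: deciding whether (and what) to offer, given a
# capability persona and estimated frustration signals. All names and
# thresholds are illustrative assumptions.

PERSONA_SUGGESTIONS = {
    "confident_autonomy": "Here are some advanced actions to learn next.",
    "methodical_novice": "Here are some process improvements to try.",
}

def should_offer_help(persona, brow_furrow_score, incomplete_attempts):
    """Return a help suggestion, or None if we should stay quiet."""
    frustrated = brow_furrow_score > 0.7   # forehead wrinkling in irritation
    stuck = incomplete_attempts >= 3       # starts the process, never finishes
    if not (frustrated or stuck):
        return None                        # no signal: don't interrupt
    return PERSONA_SUGGESTIONS.get(persona, "Would you like some guidance?")

print(should_offer_help("methodical_novice", 0.8, 1))
```

Even a toy gate like this exposes the hard part: the thresholds encode guesses about the “why” that the code cannot actually know.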

A second iteration of Clippy?

Microsoft tried to anticipate the “why” behind people’s actions in a rudimentary and now-notorious implementation named “Clippy,” designed to respond when it noticed certain patterns in Microsoft Office. Start a paragraph with “Dear Susan,” and Clippy would appear and ask if you were writing a letter, and if so, could it be of any help? The capability now exists to make a Clippy-like tool far smarter using machine learning and affective computing. Such a tool could take in a large amount of data before making suggestions.

No longer limited to recognizing the structure of “Dear Susan,” the tool could note how many times an account has written “Dear Susan” before. For people who write letters often, the tool could “learn” that they probably don’t need assistance writing a letter. For someone who has never written “Dear Susan” before, the tool could take that into account before offering advice.
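A rough sketch of that frequency-based learning follows; the storage model and threshold (letters_written, LETTER_TIP_THRESHOLD) are assumptions I’m inventing for illustration.

```python
# Hypothetical sketch: suppress the letter-writing tip once an account
# has demonstrated familiarity. Storage and threshold are assumptions.

from collections import defaultdict

LETTER_TIP_THRESHOLD = 5            # after this many letters, stay quiet
letters_written = defaultdict(int)  # account id -> count of letter openings

def maybe_offer_letter_help(account_id, text):
    if text.lstrip().lower().startswith("dear "):
        letters_written[account_id] += 1
        if letters_written[account_id] <= LETTER_TIP_THRESHOLD:
            return "It looks like you're writing a letter. Want a hand?"
    return None
```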

Using affective computing, the tool could monitor your facial expressions, or sync up with a wearable device, to assess your emotions as you engage with a product. If you’re bored, excited, or confused, the tool could offer varying suggestions. Truly adaptive help could be responsive to each of these signals, changing the help message accordingly. Rather than ask if you need help writing the letter, the tool could ask instead if you need help writing the letter faster, by pointing to some templates. It could also suggest help for sending a letter to multiple people with mail merge tips. If the name matches someone in your contact list, the tool could automatically suggest (a la Crystal) ways to better target your audience.
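Sketched out, the emotional branching might be no more than a lookup, though the labels a real affective model would emit are an assumption on my part.

```python
# Hypothetical sketch: varying one suggestion by estimated emotion.
# The emotion labels and messages are illustrative assumptions.

EMOTION_VARIANTS = {
    "bored": "Want to write this letter faster? Try a template.",
    "excited": "Sending this to several people? Try mail merge.",
    "confused": "Need a hand structuring this letter?",
}

def adapt_suggestion(emotion):
    # Prefer silence over a generic interruption when we can't tell.
    return EMOTION_VARIANTS.get(emotion)
```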

Surveilling you to help you

The obvious challenge is that the technology required to make this work demands near-pervasive tracking, a usually unwelcome act of surveillance. Software that observes and monitors your emotions, tracks the actions you perform in a product and on a device, and integrates with your contacts and friends lists would require you to place a lot of trust in a product or service and its ability to protect your privacy.

However, this isn’t too far off from what currently exists in many of our devices. As I draft this in Google Docs, my writing history is tracked. Google knows exactly how many times I have (or have not) written “Dear Susan” and similar formulations. Google also has access to my contact lists, data on how often I contact them, and a convenient way to set up a video feed of my face integrated into Google Hangouts and Google Mail. The new element in tracking you to provide a personalized experience is affective computing. Currently, the most effective form of affective computing requires a webcam or another way to track your facial expressions. Other types rely on voice-based analysis. But as wearable devices become more common, integrating heart rate monitors and tracking the movement of your wrists as you type, affective computing in a daily workflow becomes more realistic, and subtler than a constant video feed of your face.
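As a sketch of what that subtler wearable signal might look like, here’s a crude arousal estimate blended from heart rate and wrist movement. The baseline, weights, and parameter names are all assumptions for illustration, not how any real device computes this.

```python
# Hypothetical sketch: a crude arousal estimate from wearable signals,
# as a subtler alternative to a webcam feed. Weights are assumptions.

def arousal_score(heart_rate_bpm, resting_bpm, wrist_movement):
    """Blend elevated heart rate with erratic wrist movement (both 0..1)."""
    hr_component = max(0.0, min(1.0, (heart_rate_bpm - resting_bpm) / 40))
    return 0.6 * hr_component + 0.4 * wrist_movement

print(arousal_score(92, 64, 0.5))  # 0.62: elevated, maybe worth watching
```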

Adaptive help won’t stop butting in

In addition to the necessary surveillance, the biggest challenge to this entire concept is one the Nielsen Norman Group identifies in all adaptive help, and the reason why the original Clippy failed: “it appeared without the users’ direct request. Like pop-up windows, the animation surprised and annoyed users who hadn’t wanted help.” Clippy, or any potential replacement tool, would be sharing potentially helpful information in the least-receptive manner possible: unasked-for, with no understanding of your thought processes, much like your parents when you were a teenager. Another risk is also one that strikes parents of teenagers, as the Nielsen Norman Group makes clear: “If the system suggests topics that users aren’t interested in, they will quickly learn to disregard those suggestions.” These adaptive help systems would have to be finely attuned to their users’ help needs, and provide that information at just the right moment (or determine whether to provide it at all).
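One way to avoid training people to disregard suggestions is to make the tool learn from its own rejections. A minimal sketch, assuming an exponential quiet period that I’m inventing for illustration:

```python
# Hypothetical sketch: back off after dismissals so users don't learn
# to ignore the tool. The doubling quiet period is an assumed policy.

import time

class SuggestionGate:
    def __init__(self):
        self.dismissals = 0
        self.muted_until = 0.0

    def record_dismissal(self):
        self.dismissals += 1
        # Double the quiet period with each dismissal: 1 min, 2, 4, ...
        self.muted_until = time.time() + 60 * (2 ** (self.dismissals - 1))

    def may_suggest(self):
        return time.time() >= self.muted_until
```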

Challenges of “smarter” adaptive help

Scrunching up your face isn’t a request for help—it could be a sneeze (it’s allergy season) or it could be the frustration right before you make a triumphant leap in your knowledge and understanding. Even the best-designed adaptive help system, using machine learning and affective computing, faces a near-insurmountable challenge. As Donald A. Norman reminds us in Emotional Design, “The proper response to an emotion clearly depends upon the situation.” The proper response to frustration isn’t always immediate help, and understanding the cause of the frustration is essential. A few examples from Norman:

“If a student is frustrated because the information provided is not clear or intelligible, then knowing about the frustration is important to the instructor [in this case, the adaptive help interface/intelligence], who presumably can correct the problem through further explanation.”

Incorporating an adaptive help response at this point would also not fix the true issue. If a person has gotten far enough only to be frustrated by unintelligible information, there is likely a user experience or interface issue at fault. Offering help is not the solution for every problem.

“If the frustration is due to the complexity of the problem, then the proper response of a teacher might be to do nothing. It is normal and proper for students to become frustrated when attempting to solve problems slightly beyond their ability, or to do something that has never been done before.”

Providing a helping hand here could, in some cases, stunt your growth as you become more comfortable with a software product, as you begin to rely not on your own experience with the product but instead lean on the adaptive help to tell you what to do. Norman continues, “if students aren’t occasionally frustrated, it probably is a bad thing—it means they aren’t taking enough risks, they aren’t pushing themselves sufficiently.” For people who fit the proficient novice capability persona, this could be an opportunity for the adaptive help to push them toward new ideas, concepts, or functionality that they may not have tried before. Another point of Norman’s is relevant for this specific frustrated student too: “It probably is good to reassure frustrated students, to explain that some amount of frustration is appropriate and even necessary.” Not all frustration is a problem to be fixed. When providing adaptive help, understanding the cause of frustrations is essential. Affective computing could help here.

“if it goes on too long, however, the frustration can lead students to give up, to decide that the problem is above their ability. Here is where it is necessary to offer advice, tutorial explanations, or other guidance.”

Using available data to understand when a user might want help is the best use case for adaptive help. Norman’s explanations of the different appropriate reactions to frustration offer many cases where it may be inappropriate to offer help at all. The appearance of our Clippy replacement tool, whether it takes the form of a cleverly designed pop-up or slide-in modal, will be summarily rejected, along with its suggestions, if it appears at an inappropriate time. Unlike many software development goals in the industry, adaptive help doesn’t leave much room for iteration. User trust is not easily won back once lost.
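Norman’s “goes on too long” test suggests the simplest possible gate: intervene only when frustration has persisted. A sketch, with a 90-second window that is my assumption rather than a researched value:

```python
# Hypothetical sketch: only intervene when frustration has persisted
# long enough to risk giving up. The threshold is an assumed value.

GIVE_UP_RISK_SECONDS = 90

def should_intervene(frustration_started_at, now):
    """True only once frustration has lasted past the threshold."""
    if frustration_started_at is None:
        return False
    return (now - frustration_started_at) >= GIVE_UP_RISK_SECONDS
```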

Give adaptive help feelings

To get it right the first time, we need to give adaptive help some feelings. An essential element of emotional design, identified by Norman in Emotional Design, is that the system has to have its own emotions as well, and the right kind for the situation. Style guides emphasize voice and tone for writing, and the same is true for adaptive help. Our tool would need a personality. Clippy made us feel stupid and wasn’t very smart itself, yet it was easy to personify: it even had eyes and a mouth. An ideal replacement tool would have a personality deliberately attuned to the emotions of its users. Systems built without emotions are often personified anyway, so it is better to design for that inevitability than to ship an emotionless help system that feels cold to its users.

Let users ask for help

If you could predict when Clippy’s replacement would appear, or knew how to reliably summon it when you needed help, the tool could be more helpful. Chatbots already allow you to reach out and ask a question, though their subsequent helpfulness is debatable. You can summon Slackbot in chat with a keyword or phrase, or ask it a question directly. Occasionally, the help menu in a software application lets you ask for help. More often, asking for help in the help menu sends you to a forum, the homepage of the documentation site, or a list of not-quite-relevant yet “frequently” asked questions. Making the appearance of Clippy’s replacement predictable or summonable makes it a friendlier help experience: less of an unpredictable black box that walks into your house whenever it wants, and more of a good friend who shows up when you call, but also sometimes just when you need them.
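The summonable version is the easiest to sketch: a trigger phrase, Slackbot-style. The trigger word and the hand-off here are assumptions, not any real chatbot’s API.

```python
# Hypothetical sketch: a summonable helper. It stays silent unless
# explicitly called. Trigger and routing are illustrative assumptions.

TRIGGER = "@helper"

def handle_message(text):
    if not text.startswith(TRIGGER):
        return None                       # not summoned: stay quiet
    question = text[len(TRIGGER):].strip()
    return f"Looking into: {question!r}"  # hand off to a real help search

print(handle_message("@helper how do I set up a mail merge?"))
```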

Building a replacement Clippy

Adaptive help to this degree seems almost untenable. To build a help system this responsive and involved, you’d probably be better off investing the money in more customer support staff, designers, and technical writers. Building this type of tool would be technologically involved. You’d need several different elements: machine learning to anticipate what people are trying to do, affective computing (through a webcam, voice analysis, or wearables) to gauge how they feel while doing it, emotional design to give the tool a personality, and adaptive content to power what it actually says.

Building the actual help content that would power Clippy’s replacement would be even more challenging. Rather than writing content for the most common denominator, we would instead be writing for every possible denominator. While challenging, models exist for building adaptive content. And if done properly, adaptive content allows us to control the context that our content appears in. When we know the context, we can write content that understands its reader.
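One common model for adaptive content is variants keyed by context, with the most specific match winning. A sketch, where the context fields and the first-match rule are assumptions for illustration:

```python
# Hypothetical sketch: adaptive content as variants keyed by context.
# Context fields and first-match ordering are illustrative assumptions.

HELP_VARIANTS = [
    ({"device": "mobile", "persona": "methodical_novice"},
     "Tap the menu to start this letter from a template."),
    ({"device": "desktop", "persona": "confident_autonomy"},
     "Try mail merge to send this letter to your whole list."),
    ({}, "Want help writing this letter?"),  # fallback for any context
]

def pick_variant(context):
    """Return the first variant whose constraints all match the context."""
    for constraints, message in HELP_VARIANTS:
        if all(context.get(k) == v for k, v in constraints.items()):
            return message

print(pick_variant({"device": "mobile", "persona": "methodical_novice"}))
```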

The present state

An adaptive help tool like a replacement Clippy could be valuable, but it simply isn’t worth the investment. The risk of losing a person’s trust in an application or website, coupled with the complexity, surveillance, and gray areas involved in building such a tool, makes it impractical. However, each element that I outlined for a replacement Clippy exists in various application contexts. Machine learning is being used to drive anticipatory computing, tailoring messages and notifications to you when you need them using data like your location, device, and how you use an application. For more on anticipatory computing, see How A Computer Can Anticipate Users' Needs (Without Driving Them Crazy) in Fast Company, by Paul Montoy-Wilson. Kristen V. Brown sheds some light on other applications of machine learning in her piece for Fusion, I tried three apps that claim to make you more likable—and am now addicted to one of them.

Affective computing is primarily being used to gauge the effectiveness of advertising campaigns, ensuring that ads will continue to grow more targeted. See Raffi Khatchadourian’s piece in The New Yorker, We Know How You Feel. Andrew McStay discusses the wearable component of affective computing in The Conversation, Soon smartwatches will listen to your body to work out how you’re feeling, and much of Rosalind Picard’s groundbreaking work in affective computing involves wearables as well.

Nathan Collins discusses voice analysis and affective computing in his piece for Pacific Standard, You Sound Sad, Human. Emotional design has had an entire book devoted to it by Donald A. Norman. Beth Dean discusses Facebook’s efforts to incorporate Emotional Intelligence in Design on Medium, while Neil Savage discusses the personalities of robots in Artificial Emotions for Nautilus. Andrew Wilkinson compliments Slackbot in his piece Slack’s $2.8 Billion Dollar Secret Sauce on Medium.

Adaptive content is discussed on Firehead’s blog by Noz Urbina in What is adaptive content? and Kate Sherwin assesses the design and user experience of adaptive help interfaces for Nielsen Norman Group in Pop-ups and Adaptive Help Get a Refresh. Especially important to consider from a design and content perspective are Sara Wachter-Boettcher’s words on Everybody Hurts: Content for Kindness.