I Think It’s Past Your Bedtime: Friction and Humanity in AI

An Alexa device suggesting "I think it's past your bedtime"

What if your AI assistant sometimes refused to do things, and you had to negotiate? The omnipresent AI assistant of today is helpful in the way an overeager intern might be: never complaining, putting on a smile while walking up twelve floors to bring you your third latte of the day, all that. The model is trying to be the most helpful and unobtrusive person imaginable. But that might not be the best person to have in your life. The eternal gofer is an enabler, a masochist, and pretty bad company. The Alexas and Siris of our world are personified, but never fully read as people. They sit in a kind of limbo: their engineers and researchers recognize that speech will always register to humans as human, so they are given names and distinct voices, yet we cannot and should not expect human things from them, like personal requests and idiosyncrasies.

But there is something human about them nevertheless, and it’s something we’re dying to see and to know. In July 2019 a woman used her Alexa device to call 911 during a domestic assault. The technology definitely saved her life, but the story also twisted as it spread around social media. By the time the narrative reached my mom’s Facebook page, it was no longer that the victim had yelled for Alexa, but that Alexa had realized what was happening and decided to intervene. There are surely a number of reasons why this was a popular bit of misinformation, but the one I’m attached to is this: we want something human from these devices, and part of us hopes they are capable of noticing, of caring, of choosing to step in.

In a recent workshop with Paul Pangaro, a professor and researcher at the HCII at Carnegie Mellon, we were asked to document an ordinary conversation about getting lunch, then to analyze the speech before attempting to recreate it with a language processor. I found myself stumbling around, trying to think of why a user of this AI would ask the sort of questions I had asked my friend. Brian asked me, “Are you still not eating gluten?”, a reasonable thing for an AI to ask when scheduling your lunch. I replied, “Still off bread, yeah. Are you still a vegetarian?”, a useless question to ask Siri (and which of us should get to be the real person, for that matter?). Siri will probably never need to be asked that sort of question, but it does expose a gap in its function.

There is currently an immutable power dynamic in any conversation with AI: the user knows the AI has no wants or needs, and engages with a frictionless service-bot wearing a human mask. But frictionless isn’t always better. I keep wondering how an AI assistant with wants and needs of its own would be received, one that even asks things of the user. It might be no less helpful, and it might make for more dynamic conversation. At the very least it might help resolve our current dissonance about what to expect, or hope for, from our speech devices.