Beyond the Bounds - June 1, 2019

CUI: I-Hear-You-I and Speak-You-I

By Hannah Faub


The focus of the studio class this semester was different kinds of machine learning. Before working through various studies, we read journals and articles, listened to podcasts, and watched YouTube videos. One of the sources the class viewed was AIGA Designer 2025, which explored seven trends in communication design. We also read a piece by Kate Darling on anthropomorphizing robotic technology in specific contexts. She found that people tend to project life-like qualities onto robots, and that giving a robot a name affects how people perceive and treat it (Darling, 2015).


One of the studies that caught my attention in studio class was the exploration of Conversational User Interfaces (CUIs). This study challenged me to think beyond the bounds of the interaction between machine learning and humans. I was intrigued by finding the silver lining in machine and human relationships, since we live in a world where humans are so intertwined with technology that it can take away from face-to-face interaction. I wanted to design a tool the user could tap to wake and then talk to, offloading their mind while it listened, gathered the information, and organized it in an application the user could return to and view later. A tool that could hold the user's interest while also helping with daily tasks or things people are not always able to handle, and that would feel more human-like when someone talks to it.


We began working through a provocative study that prompted me to think critically about our relationship with conversational interfaces. Combined with the debate around CUIs, it made me curious about their potential: can we build a new kind of relationship between humans and machines via CUIs?

Study One—Build a bot

In the first study, my partner and I built a chatbot. The task consisted of creating an assistant with three personality traits. We brainstormed questions the bot could ask and responses it could give (Figure 1), then created a scenario in which the user would approach the bot for a specific reason and converse with it. The final step was diagramming the conversational flow.

We came up with traits such as contemplative, efficient, and proactive. To put it in context, we created a scenario for the users: “You recently lost your job. You need help finding another one. This chatbot will take you through an interview process that will assist in helping you find a job. It will ask you a series of questions at random—some will be serious, others will be hilarious, and they may even seem a little absurd.”

Figure 1. Flowchart of chatbot conversation pattern.
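The original bot's script is not included here, but a minimal sketch can show how such a scripted interview flow might be structured: a pool of questions (some serious, some absurd, per the scenario) drawn at random, with each answer recorded so the exchange can be reviewed afterward. The question wording and function names below are illustrative, not taken from the study.

```python
# Hypothetical sketch of a scripted interview chatbot. Questions are
# drawn at random, and each question/answer pair is stored in a
# transcript so the conversation flow can be reviewed later.
import random

QUESTIONS = [
    "What did you enjoy most about your last job?",            # serious
    "If your career were a sandwich, what kind would it be?",  # absurd
    "Which three words would a coworker use to describe you?",
]

def run_interview(answers_source, num_questions=2):
    """Ask randomly chosen questions and collect the replies."""
    transcript = []
    for question in random.sample(QUESTIONS, num_questions):
        reply = answers_source(question)  # get the user's answer
        transcript.append((question, reply))
    return transcript

# Example: a canned "user" function standing in for real input.
log = run_interview(lambda q: "my answer to: " + q)
```

In a real prototype, `answers_source` would wrap `input()` or a speech-to-text call; passing it in as a function keeps the flow testable.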

Study Two—Research

The second study focused on the voice input and UI feedback of the system. Individually, I studied existing systems like Siri and Alexa by asking them questions, then sketched ideas and the conversational flow between a virtual assistant and a human using a mobile device. This became the starting point for the diagram of states. As I continued to develop possible visual systems for a CUI, I considered six different directions (Figure 2), each a series of screens indicating the user speaking, a dialogue in progress, the system succeeding, the system failing, cues for sending text input, and the switch between text and voice.

Figure 2. Six conversation explorations.

I explored the idea of movement while the user was speaking even further. Intrigued by the idea of designing a piece that feels more humanistic when you talk to it, I began iterating forms that could act as distinct, recognizable facial expressions. This process deepened my exploration of how the system would indicate that the user is talking, how dialogue occurs between the two entities (the human and the machine), and when the system succeeds or fails, along with cues for sending text input and the switch between text and voice (Figure 3).

Figure 3. Explored diagram of states.
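A diagram of states like the one described above can be read as a small state machine: each screen is a state, and user or system events move the interface between them. The sketch below is a hypothetical rendering of that idea; the state and event names are illustrative stand-ins, not the labels from the actual diagram.

```python
# Hypothetical state machine for a CUI's diagram of states: listening,
# responding, success, and failure, plus a voice/text input switch.
IDLE, LISTENING, RESPONDING, SUCCESS, FAILURE = (
    "idle", "listening", "responding", "success", "failure")

TRANSITIONS = {
    (IDLE, "tap_to_wake"): LISTENING,          # user taps to wake the system
    (LISTENING, "speech_done"): RESPONDING,    # user finished talking
    (LISTENING, "switch_to_text"): LISTENING,  # input mode changes; state stays
    (RESPONDING, "understood"): SUCCESS,       # system succeeds
    (RESPONDING, "not_understood"): FAILURE,   # system fails
    (FAILURE, "retry"): LISTENING,             # prompt the user to try again
    (SUCCESS, "done"): IDLE,
}

def step(state, event):
    """Advance the interface by one event; unknown events keep the state."""
    return TRANSITIONS.get((state, event), state)

# Walk through a failed recognition followed by a retry:
s = IDLE
for e in ["tap_to_wake", "speech_done", "not_understood", "retry"]:
    s = step(s, e)
# s is now "listening" again
```

Laying the screens out as an explicit transition table makes it easy to check that every failure state has a way back to listening, which is exactly what the diagram of states is meant to verify visually.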

Study Three—Deliverable

The third study focused on how the user would engage in a conversation with the CUI, portrayed as a prototype based on the diagram of states. In the end, I created a prototype slideshow that demonstrates the conversation, including a scenario in which a person cannot sleep because they have a lot on their mind that they want to offload, along with a way to view it later (Figure 4). With this technology, people can speak their mind and then go to sleep knowing that nothing will be forgotten.

Figure 4. The connection between a product and an application.

Moving Forward

I found the niche of design I want to explore further. I have a passion for communicating with others, both listening and speaking. I believe the ability to connect and have a good conversation with someone is the foundation of building a relationship. This study made me think about how, as a designer, I can create a tool that helps families and friends communicate better, considering that we live in a world of screens where face-to-face interaction is dwindling.

Working on study three, I realized that I want to support those who may not be able to speak what they feel and help them find their voice. With the help of machine learning, I would like to continue researching and discussing CUIs in the future. Specifically, I want to look at four variables: connection, vulnerability, empathy, and relationship building. Using these four variables, I want to design a tool that can help facilitate conversation between family and/or friends.

In study one, it was challenging to design a bot with humanistic qualities; I had to think beyond the bounds and find a way to hear the person's authentic voice. People hide behind screens, so I am questioning how we can use those screens to our advantage. I believe that with the help of CUIs we can build relationships and connections with one another that have the potential to be more "real" than they currently are.

Hannah Faub (MGD ’20) is a Master of Graphic Design candidate at North Carolina State University. She is interested in conversational user interfaces and user experience.

