In one of my initial studies of gaze interface capabilities, I explored how users could see their emotions while experiencing an app, interface, or technology. For example, while reading, users could see through some kind of visual system how the sentences or individual words on a screen influence how they feel.
In my initial explorations I chose to study how a television screen could reflect an individual’s emotions while watching something controversial, such as a tragic event on a news channel. The screen could change colors to reflect a feeling, or, taking the study a step further, the television could raise or lower the volume depending on the user’s emotion. I then explored how a television could reflect emotions through gaze in a public setting, with more than one person gazing at the screen. The television could reflect how everyone looking at it feels, which I discovered could become invasive: people’s emotions would be exposed, and some people would prefer that their emotions remain private.
For my system, I chose to explore gaze and emotion recognition through an individual, private experience. I focused on how a user’s gaze could influence emotional awareness and control, specifically in the context of news media on a social media platform. News on social media, such as Twitter or Instagram, is a powerful tool with a particularly strong influence today because of political and social conflicts. In my system, the user is experiencing a Facebook argument about politics. The system reads the user’s gaze and recognizes the user’s emotions while the user reads the argument. As the user’s anger grows, the content of the argument changes, attempting to resolve the anger. When the user tries to comment on the argument, every letter typed reflects the user’s anger level. The goal is to increase the user’s emotional awareness. There are two short-term parts to this system, both occurring in the moment the user interacts with it: first, reading the post and comments, and second, typing a comment before posting it.
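To make this interaction loop concrete, here is a minimal sketch of how gaze samples might be turned into an anger estimate. Everything here, from the shape of a gaze sample to the pupil-dilation heuristic, is an illustrative assumption rather than a real eye-tracking API:

```typescript
// Hypothetical gaze sample; no real eye-tracking SDK is assumed.
type GazeSample = {
  x: number;              // gaze point on screen
  y: number;
  pupilDiameterMm: number;
  timestampMs: number;
};

// Illustrative placeholder: real emotion recognition would need a trained
// model over fixations, regressions, and dilation. Here, mean pupil
// diameter alone stands in for arousal.
function estimateAnger(window: GazeSample[]): number {
  if (window.length === 0) return 0;
  const mean =
    window.reduce((sum, g) => sum + g.pupilDiameterMm, 0) / window.length;
  // Map a typical 2–8 mm pupil range onto a 0–1 anger score.
  return Math.min(1, Math.max(0, (mean - 2) / 6));
}
```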
Here is what could happen as a user reads through a Facebook article post and the comment section below it: as the user reads the first comment, the user becomes angrier; the system recognizes that the user is reading while interpreting the user’s emotions from gaze; as the system tracks gaze and emotion, the text shifts from a darker red to a brighter red; and once the text reaches a mid-tone red, the system attempts to resolve the user’s anger by changing the content as the user continues to read (A).
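The reading step’s color response could be sketched as follows, assuming the anger estimate above; the color endpoints and the mid-tone threshold are assumed values, not part of the original design:

```typescript
// Darker red at low anger, brighter red as anger rises.
function angerToRed(anger: number): string {
  const red = Math.round(0x66 + (0xff - 0x66) * anger); // #660000 → #ff0000
  return `rgb(${red}, 0, 0)`;
}

// Past mid-tone red, the system begins swapping in calmer content.
const MID_TONE = 0.5; // assumed threshold
function shouldResolveContent(anger: number): boolean {
  return anger >= MID_TONE;
}
```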
Here is what could happen as a user types a comment in response to another comment on the post: the user begins to type words into the “Reply” box below the original comment; as the user types, each word enlarges and takes on the shade of red that reflects the user’s anger level; and if the user types particularly harsh words, those words are highlighted even more than the rest of the message by becoming all caps and a brighter shade of red (B).
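The typing step could be sketched similarly; the harsh-word lexicon, the color ramp, and the scaling factor below are placeholder assumptions:

```typescript
// Placeholder lexicon of harsh words; a real system would need a far
// richer classifier.
const HARSH_WORDS = new Set(["idiot", "liar", "moron"]);

function styleTypedWord(
  word: string,
  anger: number, // 0..1, from the gaze-based estimate
): { text: string; color: string; fontScale: number } {
  const harsh = HARSH_WORDS.has(word.toLowerCase());
  const red = Math.round(0x66 + (0xff - 0x66) * anger); // same dark-to-bright ramp
  return {
    text: harsh ? word.toUpperCase() : word,                // harsh words go all caps
    color: harsh ? "rgb(255, 0, 0)" : `rgb(${red}, 0, 0)`,  // harsh words get full red
    fontScale: 1 + anger,                                   // words enlarge with anger
  };
}
```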
I also explored how the system could store information over time from other users’ gazes on the same news post (C). I was thinking about how the “Like” system works now, and how it could change if Facebook were controlled entirely through gaze. I created a number next to each comment within the news post that shows the user how many other people felt the same way while reading the comment. There is no way to see who felt similarly, just a number that builds over time, justifying, or not justifying, the user’s emotions as recognized from gaze. This number depends entirely on other users’ gaze patterns and how the system interprets their emotions. Depending on the shared emotion, the number changes color. For now, the number remains red because the user felt angry while reading most of the comments, and other users felt angry, too, while reading them.
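One way the shared-emotion counter might be stored is a per-comment tally keyed by emotion; the emotion set and the color mapping below are assumptions for illustration, since the original only specifies red for anger:

```typescript
type Emotion = "anger" | "sadness" | "joy" | "calm";

// Assumed emotion-to-color mapping.
const EMOTION_COLORS: Record<Emotion, string> = {
  anger: "red",
  sadness: "blue",
  joy: "yellow",
  calm: "green",
};

// One tally per comment, built up from every reader's gaze-inferred emotion.
class CommentEmotionTally {
  private counts = new Map<Emotion, number>();

  record(emotion: Emotion): void {
    this.counts.set(emotion, (this.counts.get(emotion) ?? 0) + 1);
  }

  // The number shown next to the comment: how many prior readers the system
  // judged to have shared `emotion`, rendered in that emotion's color.
  sharedWith(emotion: Emotion): { count: number; color: string } {
    return { count: this.counts.get(emotion) ?? 0, color: EMOTION_COLORS[emotion] };
  }
}

// Example: after two angry readers, an angry reader sees "2" in red.
const tally = new CommentEmotionTally();
tally.record("anger");
tally.record("anger");
tally.sharedWith("anger"); // → { count: 2, color: "red" }
```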
With this project I learned to explore design through innovative technology and to strip away what is most familiar to us. Going further, this type of speculative thinking opens doors for brainstorming how to use technology with other body parts, like the tongue or ears, and other senses, like taste and sound. Maybe we could build an interface around a sense other than sight or touch.