Viewing | Yuval Noah Harari, Why Fascism is so Tempting—And How Your Data Could Power It.

Reading | Mimi Onuoha and Mother Cyborg’s A People’s Guide to AI

In Harari’s talk, he warns of a future in which data is controlled and sequestered by governments and the world devolves into dictatorship. This bleak outlook is echoed in Onuoha and Mother Cyborg’s guide, where they discuss the inequities that arise when people are left out of decisions about AI. Both narratives point to the role designers play in shaping how our everyday products and services are consumed by our users. Whether through transparent storytelling, the elimination of dark patterns, or the distribution of data, awareness is key to avoiding these damaging futures.

So now that we know the strategies and steps we must take to prevent the misuse of this technology, my question is: how can we as designers encourage the questioning of our products and services while they are in use? Can we find a way to encourage user engagement beyond the surface level and give users the power to ask the critical questions? Or is that our job alone?

Viewing | The Algorithms Behind Stitch Fix

Reading | Big Data by Helen Armstrong

Stitch Fix frames its use of algorithms in an interactive storytelling environment. This format provides a comprehensive view of the ways the company combines algorithms with rich user data to satisfy its customers’ needs. Its goal is to pair this technology with human sensibilities to create moments of delight.

The idea that a computer can monitor your behavior and predict what you might like before you even know it is an amazing feat; however, at what point do these decisions harm our own decision-making abilities? In her book, Armstrong gives an example of how algorithms are trained to personalize the data you see or to filter out certain information, like spam, from email. In this case, we may be saved from annoying emails, but at the same time we are losing our ability to distinguish good emails from bad ones. So in an attempt to satisfy the needs of a user, are we unintentionally causing them harm? Are we stifling a user’s intuition and ability to articulate their likes and dislikes, or to identify what is relevant and what isn’t? The question I pose is: how will we allow for choice and flexibility in the face of AI technology that promotes efficiency and delight?