Design as fool, Fool as design

In Reclaiming Conversation, Sherry Turkle states, “we declare computers as intelligent if they can fool us into thinking they are people.” Turkle is referring to the performative emotion machines act out when communicating with humans: every act is a meaningless simulation for the machine, yet the human can begin to feel an emotional bond with it. This concept of foolery is not confined to smart technology; it extends to designers as well. Is it really the technology that fools us, or the way the technology was designed? While reading Thomas Wendt’s Design for Dasein, I came across a passage on the history of the term “designer” and how Vilém Flusser traced its roots to deceit and trickery. Design is a relatively modern term that came into wider use as society moved away from craft-focused production. Broken down, it can be interpreted as de-signifying something. Flusser argues that design demystifies the gap between art and tools and technology.

Tim Ingold spoke of this idea, saying, “We are fooled into supposing that chairs afford the possibility to sit down, when it is the chair that dictates that we should sit rather than, say, squat…the designer is a trickster. Far from striving after perfection, his field is in the management of imperfection.” When thinking of foolery with data and technology, we need to consider not only the strict role technology plays in relation to humans, but also the role of design in that process. Designers present visuals and experiences to the world, through technology, that are not real. Many of our experiences are designed: formulated and sculpted to be what the designer wanted them to be. We should ask not only what technology’s role is in the human experience, but what the designer’s role is in the human experience. Are designers to remain the puppeteers who fool others with technology?

Designers frame technical problems as ethical matters where the technology is concerned, but rarely consider the designer in the same light. The questions we ask each other usually take the form, “Should a robot be able to pretend to provide therapeutic conversations with people?” and are rarely framed as, “Should we, as designers, allow our skillsets to be used to manipulate people into thinking they’re having a true connection with a machine?” It’s almost as though we place the blame on the technology’s shortcoming in “human-ness” even though we, as humans, designed it. What other meaningful ways are there for designers to build technology that does not try to fool a human into trusting a machine as they would another human? It is easy, and rather ironic, for a designer to twist their own failure into an anthropomorphized failure of the technology. “The technology isn’t human enough” is sometimes reframed as “our design of the technology isn’t human enough,” but rarely as “Should I, a designer, be contributing to falsely humanizing a machine?”

Designers are paid to do a job, and we usually do it without raising deeper concerns. Should a designer’s job security continue to rest on creating whatever is wanted in the most “user-friendly” way, or on raising genuine concerns about the ways a proposed technology could be problematic? That question reverses where the true value of a designer lies.