This is a crosspost from the 2022 New Information Environments course, taught by Helen Armstrong. In that course we post weekly, responding to two readings and attempting to raise questions about them. We were requested to port one of those blog posts over to &So, edited or unedited. With the exception of a couple spelling mistakes (it’s easy to get reckless when you have a limited audience) I’m leaving the body text unchanged. I don’t think this post is my most insightful but I’m choosing this one because it marked a little bit of a sea change for me in how I approach course blog posts, mostly in that I was trying to have fun with it. I think that writing, and design, should have a sense of humor. And when I’m taking everything seriously, and trying really hard to be correct, I write and design poorly.
This opens with a slight polemic on a Kate Darling chapter about relationships with AI. I never read the other chapters in this book, but I’m sure she does get to all the nuance I complain about. Her work is excellent, but when talking about AI futures I often feel like I’m dismantling a bomb. The Van Allen piece is great and you should take a look at all of his work — it makes me feel a little more optimistic.
Kate Darling – A New Category of Relationship
I always feel like a real big naysayer in these blog posts. I hope that isn’t because it’s easier to be deconstructive than constructive, and I also hope it isn’t because I’m deeply rattled by anxiety about what the future holds. But both are true. This article was filled with sentimental stories of people whose lives have been or could be changed for the better by AI-powered machines. And I was moved by them! That’s the thing: with AI and a lot of other emergent tech, I really do believe in its capacity to improve and change lives. But I don’t believe that a comparison to WW1 soldiers and their horses is apt. I understand Darling’s temptation to draw a comparison with our current military and their bomb-defusal bots, but the widespread use of horse technology in the home carried far fewer consequences.
This is not “the new cat.” This is something that is and will be programmed by private companies. If it’s the new cat, it’s a cat with Henry Ford’s ideology who is sending an interior map of your home to Jeffrey Bezos. It’s not that I don’t trust AI, it’s that I don’t trust other Jeffs. Particularly those among us Jeffs who are the richest in the world.
I love that those suffering from dementia can form a loving and healing parasocial bond with a stuffed seal. I don’t love that lonely 18-year-old men form parasocial relationships with chatbots trained on Reddit forums and get radicalized into bizarre and dangerous ideologies. I think often (and I’m sure I’ve mentioned this before, because I think of it often) of the research student who developed the technology the US military uses to track human targets for drone strikes. She had no idea that what she was developing would be used that way, but presumably accepted a few grants with extensive terms and conditions. In a disastrous Twitter thread she said she thought it might be used by filmmakers for tracking shots. When I feel like a naysayer, I remember her.
Phillip Van Allen – Animistic Design
This is definitely the direction I hope that AI takes. I think that our Siris and Alexas are working hard to be our primary source of information, and it worries me. I do use Siri a lot, and while I don’t necessarily need too many conflicting opinions on how long to broil a salmon, I understand that as AI plays a more prevalent role in our lives, we need ways to communicate that AI is fallible.
Having multiple AIs weigh in on a given scenario or question would communicate that well. I don’t think that having a “good” and a “bad” AI would be sufficient, and I’m sure the researchers ultimately don’t think that either. If it were me, I would only listen to the good one. But AIs that were nuanced, each familiar with different and randomized sources or topics (the way humans are), would create a sort of community. “You know, I don’t really like Siri. I don’t consult with her often,” I’d say, “but when I’m looking at making a large purchase she’s way better than Alexa at breaking down the ROI.” And my friend (played by Keanu Reeves) says, “That’s a great point, Jeff. I wouldn’t use her to get info on the moon landing, though; she’s got some weird ideas about that.”