After experiencing Helen Armstrong’s Machine Learning studio, I can’t stop seeing the potential of AI and ML in everything. What if ML could brew my morning cup of coffee, predicting the number of times I’ll snooze my alarm before I begrudgingly crawl out of bed on any given weekday? What if ML could tell me what kind of mattress I should buy based on my sleeping positions and sleep-cycle patterns? What if ML could prompt me to check on a friend after assessing our contact frequency or analyzing the tone of our text messages?
The crazy thing is, it can… and my musings are not novel. Someone has likely already attempted to create interfaces that let ML deliver these capabilities.
Understanding ML makes me feel like I’m living in a sci-fi movie, one where Helen has slipped a red pill into my Hydroflask, revealing the true aptitude of machine learning — all of its pearls and warts surfacing to reflect the human experience. We must polish its pearls and laser off its warts, but how? How do we rid ML of its (very human) biases and harness its (imperceptible-to-humans) pattern-detecting capabilities?
Humans, by design, are biased. We’ve programmed our own prejudices into algorithms and have failed to amend the resulting unfairness in ML. We’ve embedded ourselves in the AI.
So, when I ask myself, what’s in an AI? I’m inclined to say: we are.