I’ve never been one to use Alexa or Siri to their full capabilities. I remember when Siri was first released, I turned it off completely for a while. Now I keep it on for the specific tasks I’ve found it helpful for. After learning about conversational user interfaces over the past few weeks, I’ve been able to look back and analyze my dislike of and discomfort with these interfaces.
I think one of the biggest struggles with speech-based CUIs is the finicky game played between the human and the computer. While reciting a list, for example, it was common for me to get halfway through, stop for a few seconds, and then continue. That small pause would throw off the computer and usually create a separate list. Another problem space we’ve discussed is the wide range of responses the computer needs to account for. As machine learning improves, this will get better and more accurate, although for many platforms it remains an issue.
This game within a task truly limited my interaction with any sort of CUI. I’m interested to see what approaches are taken to combat a computer’s inability to truly replicate a human conversation. Maybe with more time and more human-to-computer conversations, the platforms will eventually adapt to our nuances. Or, counter to that, humans may simply get used to the way computers receive information, and we’ll adjust to them.