The rising popularity of Alexa, along with other voice-powered assistants like Apple's Siri and Microsoft's Cortana, shows us that human-computer interaction is no longer solely dependent on a bright screen. As a result, UX designers are shifting their focus to a more expansive approach, one that considers all of a user's senses and opens even brighter opportunities for delivering the perfect experience.
While exploring this expanding playground for UX designers, I recently published a conversation with Alexa on Medium. This conversation appears here along with my notes from a UX perspective. Enjoy!
It's a delight to have you around. I am still getting used to you though. I guess the feeling is mutual! "Remind me to grab a bag of tomatoes when I go grocery shopping tomorrow, will you?"
Alexa and future voice-powered virtual assistants will be designed to act much like a UX designer. They will listen to the user and anticipate her wants and needs, often before the user is even aware of what those desires actually are.
I like the way you read out my favorite book to me, but I was wondering how to get you to read another to me. Maybe you love the same books as I do? Perhaps. We'll have to figure that out soon. ...Well, we actually did figure it out! Wish you could recommend something from my list. I can't seem to remember all the books I have, and—believe me—it's a long list.
This conversation reflects a user's desire to establish and experience a deeper bond with technology. Future voice interfaces will likely evolve to take on distinct personas, with specific pitches, tones, vocabularies, and personalities tailored to each user's wants and needs across a variety of contexts and scenarios.
I wish you could sense from my tone what I might, or should, listen to. While you were happily playing all the short jazz samples one after the other, I was wondering how to let you know that I wanted you to stop and play the one I really liked.
Every user's needs are complex and vary based on the context. To deliver an amazing user experience, technology should be designed to appropriately respond to each of these scenarios.
Future voice interfaces will analyze the user's emotional cues, such as pitch and intonation, to deliver a desired experience at any given time.
"Future voice interfaces will use pitch and intonation to deliver a UX that meets a user's emotional needs."
According to a recent article in MIT Technology Review, Amazon is already developing natural language processing updates for Alexa. These changes will allow the virtual assistant to anticipate a user's feelings and then deliver a corresponding experience that aligns with those emotions.
Before I ask you for another joke today, I would really appreciate it if you could tell my 10-year-old a nice bedtime story. Am I asking for too much? Hmm. I need to get some of those smart lights to see how you go about that, of course. ...You are welcome to sit in that corner, but don't be so quiet; you can utter a few words off and on, and maybe we could become the best of friends...someday. Although you did fire up while I was chatting with my brother on the phone. I don't know what prompted that! You started telling a joke, of all things! LOL! BTW, thanks for reading out the recipe; it has turned out quite good!
While Alexa's functionality isn't perfect, her jokes are no accident. The jokes intentionally give Alexa a personality that is relatable and invites a deeper bond with the user, akin to a friendship. By designing a personality for Alexa, Amazon has delivered a strong branded experience. Future technology designed with a personality that engages multiple senses will establish a richer bond with users and win greater mindshare for a brand. With the overcrowding of technology in our lives, meeting the needs of our multiple senses won't be too much to ask. It will be expected.
Up for some chai?