Drivers typically use two or three apps while driving, but would use more if the use cases were relevant. Therefore, we must focus on personalization and the driving experience: the trip itself, and all the activities that happen on the road and are specific to that context, such as parking. We present a mobility experience: LetMePark for Alexa.
Voice replaces sight and touch
Inside the car, development has focused on controlling the car's functions and a few other services, such as navigation. On the other hand, there are also applications developed for mobile phones that can now be displayed in the car through Apple CarPlay or Android Auto, but they were not designed for use «in the car» and overlap with navigation, the radio, or the entertainment we already have on.
Let’s do an exercise. Take your mobile phone, place it on your lap with your hands in front of you, and, without looking at the phone, send me a WhatsApp message.
Well, it would be something like this: “OK Google, send a WhatsApp to Enrique from LetMePark.”
That is what happens to people when they are driving: the interaction is totally different.
So far, the dominant user interface (web and apps) has been visual. It involves visual navigation and response, and it requires your attention: the eyes are needed, and usually the hands.
Everybody is working on the UX: how the buttons are arranged on the screen, and so on. But now another sense can replace sight and touch for certain actions: voice enables a hands-free experience.
The user can pay attention to the voice interaction while their eyes and hands remain occupied with something else, for example, driving. In this way, road safety is improved, since distractions at the wheel cause 20% of traffic accidents.
In-Car LetMePark for Alexa
LetMePark will search for a parking space, reserve it if you prefer, and pay for the parking automatically. LetMePark uses your device’s GPS location to offer you the closest parking and navigate you to it.
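To make the “closest parking” step concrete, here is a minimal sketch of how a service like this could pick the nearest parking to the device’s GPS position. This is an illustrative assumption, not LetMePark’s actual implementation: the parking names, coordinates, and the `closest_parking` helper are all hypothetical, and real-world routing would use road distance rather than straight-line distance.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ≈ 6371 km

def closest_parking(device_location, parkings):
    """Return the parking entry nearest to the device's GPS location."""
    lat, lon = device_location
    return min(parkings, key=lambda p: haversine_km(lat, lon, p["lat"], p["lon"]))

# Hypothetical sample data: two parkings in central Madrid.
parkings = [
    {"name": "Parking Plaza Mayor", "lat": 40.4155, "lon": -3.7074},
    {"name": "Parking Atocha", "lat": 40.4065, "lon": -3.6895},
]

# Device near Puerta del Sol → Plaza Mayor is the closer option.
print(closest_parking((40.4168, -3.7038), parkings)["name"])
```

In a real voice skill, this lookup would run server-side when the user asks for parking, with the result read back by Alexa and handed off to navigation.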