The mobility experience: how to create Alexa Auto skills

Drivers typically use two or three apps while driving, but they would use more if the use cases were relevant. We therefore focus on personalization and on the driving experience itself: the trip, and the activities that are specific to being on the road, such as parking. Here we present a mobility experience: LetMePark for Alexa.

Voice replaces sight and touch

Inside the car, development so far has focused on controlling the car’s functions and a few other services such as navigation. There are also apps built for mobile phones that can now be projected into the car through Apple CarPlay or Android Auto, but they were not designed for use «in the car» and they overlap with navigation, the radio, or whatever entertainment we have on.

Let’s do an exercise. Take your phone, put it on your lap, keep your hands in front of you and, without looking at the screen, send me a WhatsApp message.

It would go something like this: “OK Google, send a WhatsApp to Enrique from LetMePark”.

That is the situation people are in when they drive: the interaction is completely different.

Until now, the dominant user interface (web and apps) has been visual. It involves visual navigation and response, and it demands your attention: you need your eyes and, usually, your hands.

Everybody works on the UX: how the buttons look on the screen, and so on. But now another sense replaces sight and touch for certain actions: voice can be a hands-free experience.

The user can pay attention to the voice interaction while their eyes and hands are occupied with something else, for example, driving. This improves road safety, since distractions at the wheel cause 20% of traffic accidents.

In-car LetMePark for Alexa

LetMePark searches for a parking space, reserves it if you prefer, and pays for the parking automatically. It uses the GPS location of your device to offer the parking closest to you and navigates you to it.
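To make this concrete, here is a minimal sketch in Python with the Alexa Skills Kit SDK of a handler that reads the device’s geolocation from the incoming request. The intent name and the find_nearest_parking helper are hypothetical, and the skill would need the user to grant location permission; this is an illustration, not LetMePark’s actual code.

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


class FindParkingIntentHandler(AbstractRequestHandler):
    """Handles a hypothetical FindParkingIntent using the device's GPS fix."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("FindParkingIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        geo = handler_input.request_envelope.context.geolocation
        if geo is None or geo.coordinate is None:
            # No fix yet, or the user has not granted location permission.
            speech = "I need your location to find parking near you."
            return handler_input.response_builder.speak(speech).response

        lat = geo.coordinate.latitude_in_degrees
        lon = geo.coordinate.longitude_in_degrees
        # find_nearest_parking is a hypothetical backend call, not part of the SDK.
        spot = find_nearest_parking(lat, lon)
        speech = f"The closest parking is {spot}. Shall I reserve it?"
        return handler_input.response_builder.speak(speech).ask(speech).response
```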

What is the mobility experience?

When we develop an application, the visual interface is closed: if a function is not on the screen, it cannot be done. With a voice interface, however, there are no limits on the input. People can say anything.
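That openness means every skill needs a catch-all path. Alexa’s built-in AMAZON.FallbackIntent fires when an utterance matches no other intent in the model; a minimal handler sketch with the ASK SDK for Python:

```python
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


class FallbackIntentHandler(AbstractRequestHandler):
    """Catches utterances that match no other intent in the model."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        speech = "I didn't catch that. You can say, for example: find me parking."
        return handler_input.response_builder.speak(speech).ask(speech).response
```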

A voice interface in the car is not achieved just by developing a skill; the driver’s data and behavior also come into play. That is why we talk about a “mobility experience”: the driver, while doing what they have to do, achieves their goal. In our case that goal is parking, but it could equally be charging an electric car, finding a gas station, and so on.

There are plenty of guides and information on how to create a skill, so what is the problem?

A skill is a guided conversation, and the slots (fields) you need for the world of mobility do not exist.

There is no slot for addresses, nor a date-and-time slot that works in both German and Spanish. The skill operates in a scenario where searches are highly variable: we might say a street address, a point of interest (for example, the National Auditorium), or the intersection of two streets. These are not fields that come built into the system.
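One workable pattern (an assumption here, not necessarily what LetMePark does) is to capture the destination as free-form speech with a slot of the built-in AMAZON.SearchQuery type and classify it in the backend. A sketch of reading such a slot with the ASK SDK for Python, where the slot name “destination” is hypothetical:

```python
from typing import Optional

from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import get_slot_value


def extract_destination(handler_input: HandlerInput) -> Optional[str]:
    """Reads a free-form 'destination' slot (e.g. of type AMAZON.SearchQuery).

    The raw value may be a street address, a point of interest, or the
    intersection of two streets; telling them apart is the backend's job.
    """
    return get_slot_value(handler_input=handler_input, slot_name="destination")
```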

In an app, when you press a button it is very hard to trigger a different button by mistake. But how many times do we talk to machines and they understand something else? With speech it is not so simple.

Voice and data are related

Sometimes the audio is not clean and the words are not picked up well. That is why at LetMePark we have a cleaning system, and when the system gets it wrong we also study what biases it produces.
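As a toy illustration of that kind of cleanup (the correction table and function are hypothetical, not LetMePark’s actual system):

```python
# Hypothetical table of frequent mis-transcriptions -> canonical form.
CORRECTIONS = {
    "auditorio nasional": "auditorio nacional",
    "let me park": "letmepark",
}


def clean_transcription(raw: str) -> str:
    """Normalizes a raw transcription before it reaches the search backend."""
    text = raw.strip().lower()
    return CORRECTIONS.get(text, text)
```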

Whatever technology you rely on, the system lets you choose Spanish, but the accents differ, from Andalusian to Catalan to Latin American, and so do the phrasings (“Alexa, tell me where to park”, “Alexa, I want to park”).

Of course, much of the work is done by Google’s and Amazon’s recognition systems, but excellence lies in giving value to the driver by contextualizing and cleaning up these errors.

For each request, we log what comes in and what the system responds; we count everything, and since we know when something is a failure, we turn it into feedback to improve the system.
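With the ASK SDK for Python, one natural place to hook that accounting is a pair of global interceptors that see every request and response; a minimal sketch (where the logs go afterwards is up to you):

```python
import logging

from ask_sdk_core.dispatch_components import (
    AbstractRequestInterceptor,
    AbstractResponseInterceptor,
)
from ask_sdk_core.handler_input import HandlerInput

logger = logging.getLogger(__name__)


class LogRequestInterceptor(AbstractRequestInterceptor):
    """Logs every incoming request before it is dispatched to a handler."""

    def process(self, handler_input: HandlerInput) -> None:
        logger.info("Request: %s", handler_input.request_envelope.request)


class LogResponseInterceptor(AbstractResponseInterceptor):
    """Logs every outgoing response so failures can be counted later."""

    def process(self, handler_input: HandlerInput, response) -> None:
        logger.info("Response: %s", response)
```

They are registered on the SkillBuilder with add_global_request_interceptor and add_global_response_interceptor, so no individual handler has to remember to log.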

Here we should not talk about voice first and data later. They go together.

LetMePark is not better at voice because we have studied the phrases or questions we should ask, but because we accept that people do not speak the way we want them to.

We do not predict how people are going to speak; we measure and correct. And we automate that correction with artificial intelligence techniques, so that as there are more users, more mistakes, or more countries we open in, the correction process keeps running on its own.
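A deliberately simplified stand-in for that automated correction, using fuzzy matching from the Python standard library rather than the AI techniques mentioned above (the destination catalog is hypothetical):

```python
import difflib

# Hypothetical catalog of canonical destinations learned from past requests.
KNOWN_DESTINATIONS = ["auditorio nacional", "gran via", "plaza mayor"]


def correct_destination(heard: str) -> str:
    """Snaps a mis-heard destination to the closest known one, if any."""
    matches = difflib.get_close_matches(
        heard.lower(), KNOWN_DESTINATIONS, n=1, cutoff=0.6
    )
    return matches[0] if matches else heard
```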
