Inside the Minds of Siri, Alexa and Google Assistant
How Does a Virtual Assistant Work?
Welcome to What Is, a new column dedicated to giving simple answers to complex questions about the products around you. Got a question? Nothing’s off limits. Submit yours to firstname.lastname@example.org and we’ll do our best to get them answered.
What happens when you ask a virtual assistant a question? Like, how does it know the answer? The accuracy, speed and contextual abilities of Alexa, Google Assistant and Siri are all thanks to machine-learning algorithms and massive servers, owned by their respective parent companies (Amazon, Google and Apple). So how are they similar, and what makes each one different? Fittingly, those are questions any of these prolific virtual assistants can answer.
So what happens?
Alexa, Siri and Google Assistant all work in similar ways, but there are protocol and data-privacy intricacies that vary from assistant to assistant. When you activate one of the assistants, your request is immediately packaged up and sent to servers owned by the company behind your device. This is why, if you have a poor network connection, your virtual assistant might be slow.
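The round trip above can be sketched in a few lines. This is a toy illustration only; all function and field names here are made up, not any vendor's actual API.

```python
# Toy sketch of the device-to-server round trip. The device packages your
# request and posts it to the vendor's server; the answer comes back over
# the same connection, so latency tracks your network quality.

def ask_assistant(audio, server, network_ok=True):
    """Package the request, send it to the vendor's server, return the reply."""
    if not network_ok:
        return "request failed: poor network connection"
    packaged = {"audio": audio, "device_id": "example-device"}
    return server(packaged)  # the answer comes back from the cloud

def fake_server(request):
    """Stand-in for the vendor's server-side processing."""
    return f"answer to: {request['audio']}"
```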
It’s worth noting virtual assistants may act differently depending on which device you’re using them on. For instance, asking Google Assistant a question on your Google Home, Pixel XL or iPhone won’t necessarily give you the same answers. Since Google Home doesn’t have a screen, it can’t show you photos of your cat (unless it’s synced to your Chromecast or Android TV).
Similarly, your Amazon Dash Wand can’t do everything your Echo does. They both use Alexa to fulfill requests, but the Dash Wand can’t play music, set timers or tell you about your day.
Also, the way you access these virtual assistants isn’t the same. For example, the Google Assistant app needs to be open on your iPhone to hear its wake word. Similarly, on the Amazon Tap, Alexa is accessed via a button press rather than a voice prompt.
Upon arrival, the words and tone of your request are analyzed by an algorithm, which matches them with the command it thinks you asked for. Essentially, it’s saying “we’re eighty-five percent sure you asked this question,” which is why you don’t always get the answer you were looking for. If the algorithm isn’t certain enough, it may ask “did you mean blank?” and offer its best guess at what you wanted. Or it could say “I’m sorry, I can’t do that yet.”
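That confidence-threshold behavior can be sketched like this. A real assistant uses large neural models rather than keyword overlap; the intents, keywords and threshold below are all hypothetical.

```python
# Hypothetical intent matching: score each known command against the
# transcript, then act only if the best score clears a confidence bar.

INTENTS = {
    "play_music": {"play", "music", "song"},
    "set_timer": {"set", "timer", "minutes"},
    "weather": {"weather", "rain", "forecast"},
}

def match_intent(transcript, threshold=0.85):
    """Return (action, intent): execute, ask for confirmation, or give up."""
    words = set(transcript.lower().split())
    best_intent, best_score = None, 0.0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords) / len(keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score >= threshold:
        return ("execute", best_intent)   # "we're 85% sure you asked this"
    if best_score > 0:
        return ("confirm", best_intent)   # "did you mean ...?"
    return ("fail", None)                 # "I'm sorry, I can't do that yet"
```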
While the algorithm is analyzing the question, your phone or smart speaker is trying to figure out if it can handle the command without needing information from the server. For example, a request like “Can you pause the music?” is simple; if you want the assistant to translate a phrase from Italian to English, that’s more complicated. Assuming your request is more complicated than a local command, your device will communicate back to the server and continue cross-checking your question to make sure it knows what you’re asking.
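The on-device decision amounts to a simple routing check, sketched below. The set of local commands is hypothetical; real devices make this decision with more sophisticated logic.

```python
# Hypothetical local-vs-server routing: simple commands are handled
# on-device, while anything complex is forwarded to the cloud.

LOCAL_COMMANDS = {"pause the music", "resume", "volume up", "volume down"}

def route_request(transcript):
    """Handle simple commands locally; everything else goes to the server."""
    if transcript.lower() in LOCAL_COMMANDS:
        return "handled locally"
    # A translation request, for example, needs the server's models.
    return "sent to server"
```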
Finally, your request is answered. If you asked your assistant a trivia question, it will find the answer on the web (Google, Bing, etc.) and push the response back to your device. If you asked your Echo to turn on your smart lights, a signal is sent to them over wi-fi and they turn on. The real complexity lies in understanding what you want on the first try and fulfilling the task quickly; once the assistant knows what it needs to do, the rest is a straightforward matter of tapping into a server, a third-party computer or another electronic device.
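Once the intent is known, fulfillment is a simple dispatch, as in this sketch. The intent names and payloads are invented for illustration.

```python
# Hypothetical fulfillment step: the server either fetches an answer
# from the web or signals a device over the local network.

def fulfill(intent, payload):
    """Dispatch a recognized intent to the right backend."""
    if intent == "trivia":
        return f"web answer for: {payload}"       # e.g. a web search result
    if intent == "lights_on":
        return f"wi-fi signal sent to {payload}"  # e.g. your smart bulbs
    return "unsupported"
```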
How Are These Virtual Assistants Different?
And Where Does Personal Privacy Fit In?
One of the big differences lies in where all the information is stored and how it’s utilized. Google’s, Apple’s, and Amazon’s privacy policies explicitly say what information of yours is being used, why, and how it’s being kept safe. In the moment, you don’t think about the fact that your voice or shopping order is being sent up to a corporation’s servers — it’s vital that companies are transparent about their storage and use of information, so let’s delve into some differences.
Alexa processes your requests using Amazon’s cloud-based voice service, Alexa Voice Services (AVS). All the ways you interact with Alexa — your questions, product purchases, locations and third-party requests — are stored and tied to your Amazon account. This is why you see alarmingly accurate product recommendations when browsing online. (“Wait, I only thought about buying those Ray-Bans for a second…now I’m seeing them advertised on every page I visit.”)
It should be noted that Amazon’s smart speakers, like the Echo or Echo Dot, are always listening. However, Amazon specifies that these speakers are only listening for their wake word, “Alexa.” When it’s spoken, the LED ring on the speaker turns blue and whatever Alexa hears next will be streamed to the cloud. When the blue ring stops glowing, the smart speaker is again in its dormant state, waiting to hear “Alexa” again.
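The wake-word behavior described above is essentially a two-state loop: dormant until the wake word is heard, then streaming until the request ends. This sketch models audio as a list of word-sized frames, which is a simplification; real devices run a small on-device model over raw audio.

```python
# Hypothetical wake-word loop: stay dormant until "alexa" is heard, stream
# the following speech to the cloud, then go dormant again at silence.

def wake_word_loop(audio_frames, wake_word="alexa"):
    """Return the frames that would be streamed to the cloud."""
    streamed, listening = [], False
    for frame in audio_frames:
        if not listening:
            if frame.lower() == wake_word:
                listening = True            # LED ring turns blue
        elif frame == "<silence>":
            listening = False               # ring off, back to dormant
        else:
            streamed.append(frame)          # sent to the cloud
    return streamed
```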
One of the biggest advantages of Alexa, aside from the fact that Amazon continuously updates its capabilities and services, is its open API. This allows third-party developers to make their products work with Alexa — e.g., Uber, Domino’s and Philips Hue; companies can also launch new products with Alexa built in, like the Huawei Mate 9 or Ecobee4 smart thermostat ($248). These have to be authorized by Amazon, but they offer another way to get Amazon’s AVS into your home.
Google’s smart assistant, Google Assistant, started to pop up everywhere in 2017. It’s in Google Home and pretty much every new Android smartphone and Android Wear 2.0 device, and there’s an iOS app as well. The big draw of Google Assistant is that it’s tied to Google; when you ask it a question, the request goes through similar algorithms and massive servers as Amazon’s Alexa, but it’s able to search Google’s entire knowledge base for the answer. Basically, it’s arguably more intelligent than the other smart assistants.
So what does the Google Assistant know about you? Pretty much everything tied to your Google account, like mail, contacts, storage, or calendar info. If you’re logged into Google and search for something, then Google has that search saved and tied to your account. With the Google Home, Google says it collects your data (questions, buying preferences, location) in order to make their services faster and smarter. The goal is that over time, Google Home or your Pixel phone will be able to provide better and more personalized suggestions and answers.
Targeted, relevant ads are Google’s famous example of how this data aggregation can help you. Most people are okay with that, but if you’re worried about privacy, you can delete your conversation or search history at any time in the setup app. Google also promises that your data is secure, “protected by one of the world’s most advanced security infrastructures. Conversations in Google Home are encrypted by default.”
Data privacy is paramount to Apple. In fact, any information used within Siri is not associated with your Apple ID, but only with the device, using a random identifier. Apple discloses that some information is encrypted and sent to its servers — your name, contacts, and songs in your music library (whether that’s local files or Apple Music) — but nothing more when using Siri. Further, when using Siri to search photos by location or album name, Apple will never send your photos or any information about them to its servers — Apple claims “album names are only sent to Siri to help provide you with better results.”
The tradeoff for the strongest stance on privacy is that Siri may not be as “smart” as the other assistants. Google and Amazon know quite a bit about you, whereas Apple prides itself on not knowing too much. As a result, Google and Amazon can provide more specific answers at times, and are more contextually aware when you ask follow-up questions. Ask Google Assistant what time a specific movie is playing near you, and then ask where “that movie” is playing, and Google will know which film you’re talking about — Siri will not.
One last thought: it’s easy to compare smart speakers, but Apple’s upcoming HomePod ($350) won’t be anything like an Amazon Echo or Google Home. In fact, right now we don’t know if it will have any third-party skills. Expect limited abilities and a better speaker (plus multi-room functionality!) that only plays songs via Apple Music.