
Google wants its Assistant to be to devices what search is to the web.

Yesterday, Google announced the integration of its Assistant with the Pixel handsets and Google Home, and showed how these would work with multiple services, including YouTube, Maps and Street View. Rick Osterloh, Google’s SVP of Hardware, said that “Building hardware and software together lets us take advantage of the Google Assistant”, and that “The next innovation will take [place] at the intersection of hardware and software with AI at the center.”

We’ve already seen the Google Assistant through its integration with Google’s messaging app Allo. Last month, at an event in Delhi, Google announced that the Assistant will support Hindi in Allo, and will eventually be integrated into more products. Some of those products were announced yesterday: the Pixel phone, which has the Assistant built in, and Google Home, an audio-based device that can be used to search and play music, book services, and work with supporting devices such as Chromecast.

On-screen context

Google Pixel is the first phone with the Google Assistant built in. The phone accepts voice commands and has a single hot button for voice search, effectively replacing Google Now. Users can ask the Assistant to send an SMS, or search for information based on what’s on screen. What was remarkable about the demonstration was that, unlike in the Allo app, the Assistant can determine context from what is on the phone’s screen because of this deep integration: it can point out restaurants on a map based on criteria you mention, and then perform additional tasks such as booking that particular restaurant via OpenTable. More on Google Pixel here.

Google Home: Audio assistant

Similar to Amazon’s Echo series (Assistant is to Home what Alexa is to Echo), the device lets users get answers from Google, play music and adjust the volume. Home has a mic-mute button that gives you some control over your privacy; unless muted, the device is always listening. You can ask Google Home to play a song, and it can play it off YouTube. The device supports TuneIn Radio, Spotify, iHeartRadio, YouTube Music and Google Play Music, and users can set a default music service. Importantly, music search is powered by Google, so it can infer context: ask for the latest song from a band, and the Assistant will figure out which one that is. Home also supports podcasts and news, and you can cast audio to Google Home from your phone.

Home also helps users get traffic information, do classifieds searches, set timers and add items to shopping lists. Google has a feature called My Day, which gives you an overview of the day ahead: weather, your commute to work and your schedule (presumably via Google Calendar). It effectively pulls together data from multiple Google services for you.

Home also enables voice control for TV and audio via Chromecast, letting you pause content or change the volume: “It can change how you watch TV. It’s hands-free voice control.” Netflix will soon support voice casting via Google Home, and Home also works with Google Photos to show photos on your TV.

Google supports interplay between multiple Home devices: they are context aware, so when you are within range of several devices, only the one that hears you best will respond. Home is priced at $129, comes with a six-month trial of YouTube Red, and is available in the US through the Google Store, Best Buy, Walmart and Target.
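Google has not published how this multi-device arbitration works, but the behaviour described above — only the device that hears you best responds — can be modelled as picking the device with the strongest hotword signal. The sketch below is purely illustrative; the class names, threshold and scoring are assumptions, not Google’s implementation.

```python
# Hypothetical sketch of multi-device arbitration. All names and the 0.5
# threshold are assumptions for illustration, not Google's actual logic.
from dataclasses import dataclass


@dataclass
class HomeDevice:
    name: str
    hotword_confidence: float  # 0.0-1.0: how clearly it heard "OK Google"


def pick_responder(devices):
    """Return the single device that should answer; the rest stay silent."""
    heard = [d for d in devices if d.hotword_confidence > 0.5]
    if not heard:
        return None
    return max(heard, key=lambda d: d.hotword_confidence)


devices = [
    HomeDevice("kitchen", 0.92),
    HomeDevice("living-room", 0.61),
    HomeDevice("bedroom", 0.12),  # out of earshot
]
print(pick_responder(devices).name)  # kitchen
```

The point of the design is that arbitration is winner-takes-all: several devices may hear the hotword, but exactly one answers, so you never get three speakers replying at once.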

Also read: Google’s new hardware pushes for VR, interactivity and 4K video

The tech supporting Google Assistant

At the launch, Google CEO Sundar Pichai spoke about the fundamental improvements in how Google maps and analyses information that make all of this possible:

1. “Image captioning is how computers try to make sense of the images they look at. Newer machine learning systems have improved accuracy from 89.6% to 93.9%. Every single percentage point translates into a meaningful difference for users. This helps Google Photos find the photos you’re looking for.” Pichai illustrated how Google is able to assess more about a photo now than ever before.

2. Machine translation: “Historically we’ve translated at a phrase-by-phrase level. We recently announced our first end-to-end deep learning translation, which learns at a sentence level. The progress from Chinese to English has been amazing. We’ll translate billions of queries over the next year. This will allow us to translate on the fly, even if the person asking the question doesn’t know English.”

3. “WaveNet is a deep learning model that helps generate a much more human sound. Today we can do a single Assistant voice in multiple languages. We’re trying to build an individual Google for each individual. It’s equally important to get the Assistant into the hands of each user.”

Google Assistant partners

Partners can integrate with the Google Assistant. To begin with, it supports external services such as Nest, IFTTT, Philips Hue and Samsung SmartThings, essentially becoming a single control point for a connected smart home. Google also announced an open developer platform that will allow anyone to build for the Google Assistant.
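The “single control point” idea can be pictured as a dispatcher that fans a voice command out to whichever partner service handles that device category. This is a toy sketch under my own assumptions; the handler names and registry mechanism are invented for illustration and bear no relation to Google’s actual partner APIs.

```python
# Illustrative dispatcher (not Google's API): one assistant front-end routing
# commands to different smart-home backends. All names here are hypothetical.
HANDLERS = {}


def register(service):
    """Decorator that maps a service name to its handler function."""
    def wrap(fn):
        HANDLERS[service] = fn
        return fn
    return wrap


@register("lights")
def philips_hue(action, value=None):
    return f"Hue: {action} {value or ''}".strip()


@register("thermostat")
def nest(action, value=None):
    return f"Nest: {action} {value or ''}".strip()


def handle_command(service, action, value=None):
    """Route a parsed voice command to the registered backend."""
    if service not in HANDLERS:
        raise ValueError(f"no integration for {service}")
    return HANDLERS[service](action, value)


print(handle_command("lights", "dim", "50%"))      # Hue: dim 50%
print(handle_command("thermostat", "set", "21C"))  # Nest: set 21C
```

The user only ever talks to the Assistant; which partner service actually executes the command is resolved behind the scenes.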

There will be two kinds of actions: direct actions and conversation actions. Direct actions cover one-shot instructions such as media requests (play something on Spotify) or home automation (dim the lights). Other things take discussion, and these are conversation actions: say “I need an Uber”, and the Assistant brings Uber into the conversation, asking for the destination and type of ride (Uber, UberX etc) and providing information on the driver. Developers and local businesses can create these kinds of conversations. People shouldn’t need to install new apps: just ask the Google Assistant, and Google will find the right kind of help.
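The direct-versus-conversation distinction above is essentially one-shot execution versus slot filling. The toy sketch below illustrates that idea only; it is not the Actions on Google platform, and the function names and slot names are assumptions made up for this example.

```python
# Toy model of direct vs conversation actions. Not Google's Actions platform;
# all names are hypothetical, chosen to mirror the Uber example in the text.
def direct_action(command):
    # One-shot: no follow-up needed ("dim the lights").
    return f"done: {command}"


def conversation_action(required_slots, answers):
    """Fill slots from the user's answers; ask for whatever is still missing."""
    filled = {}
    for slot in required_slots:
        if slot in answers:
            filled[slot] = answers[slot]
        else:
            return f"ask user: what is your {slot}?"
    return f"booking ride: {filled}"


print(direct_action("dim the lights"))
# First turn: the user only said "I need an Uber", so every slot is empty.
print(conversation_action(["destination", "ride_type"], {}))
# After the follow-up questions, all slots are filled and the action completes.
print(conversation_action(["destination", "ride_type"],
                          {"destination": "airport", "ride_type": "UberX"}))
```

A direct action finishes in one turn; a conversation action keeps asking questions until every required parameter is filled, which is what the Uber booking exchange demonstrates.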

Google is also developing an embedded Google Assistant SDK for other device manufacturers, which will launch next year.