Google I/O was full of interesting news this year, especially if you use the company's digital assistant. AI is at the forefront of Google's development. The biggest steps were taken with the Google Assistant, but there was plenty to report about Google's other products too. If you want to watch the whole keynote yourself, the video is available.
Maps, Gmail, and Photos
Google Maps gets better the more you use it to find restaurants and other places. Because results are matched to your preferences, the chance that you find something you like increases, and it becomes easier to book a table or make an appointment directly from Maps. A nice addition is a new way of navigating on foot that uses the camera: you never have to figure out which direction south is, because you get a visual marker showing where you need to go, with the map below it. Cool.
Gmail also gets an AI boost, particularly when composing replies. Gmail can predict what you are going to type based on the context of the message you are responding to, and typing something like "address" will suggest your full address. Logically, this feature is not yet available in Dutch, but it will roll out for English next week.
Google Photos is also improved by AI: not only are your photos automatically enhanced by artificial intelligence, you can even have black-and-white photos colorized by the AI! A nice addition for business users: take a picture of a document, and Google Photos automatically crops it and converts it into a PDF file that you can forward.
Google is also going to try to reduce the burden of FOMO (Fear of Missing Out) by making you aware of how often you pick up your phone and how many notifications you receive, by occasionally nudging you to do something else, and by making sure you only have to deal with the things that interest you. You can also protect yourself from yourself by putting a time limit on an app; you don't have to respect it, but it reminds you that you had set a goal.
Family Link belongs to this too: it is a way to ensure that children don't spend too much time glued to their screens and are encouraged, via their device, to behave on the internet just as they are expected to in real life. To top it off, you can place your phone face down on the table to activate Shush mode, which silences all notifications (except from pre-programmed important contacts). And if you have trouble getting to sleep because you are busy with your phone, there is Wind Down mode, which slowly turns your display black and white, so you automatically put the phone away once all the color has disappeared.
Google Assistant upgrades
The Google Assistant also gets a big upgrade with continued conversations and multiple actions. This means the Assistant will now answer all follow-up questions you ask after saying "Hey Google" once. The Assistant can also answer compound questions such as "When was Albert Einstein born, and when did he discover the theory of relativity?" In addition, the Assistant now reacts politely when you say "please" with a question, something that seems primarily intended to teach children that you don't just bark commands at people and things and expect immediate results.
The Assistant's integration into other smart devices should also make life better. YouTube TV makes it possible to watch more and more "normal" TV via YouTube, including on smart devices such as smart displays with the Assistant built in.
The Assistant also gets better on smartphones: for certain questions, you get richer visual options on screen, for example for Internet of Things devices or companies that accept remote orders. The examples shown were a Starbucks order that could be picked up later, and changing the temperature, which immediately brings up a thermostat control on the screen.
The Assistant is also built into Google Maps, so you can ask for a route right away and have the car play music, or send a message or your arrival time to the home front.
The most bizarre demo was an AI that literally calls a hairdresser, prompted by a command to the Google Assistant, and makes an appointment "for a client", followed by another AI that tries to make a reservation for four people and finds out during the conversation that this isn't necessary. Even confusion on the human side was resolved, and although it cannot be verified whether this was a real phone conversation, it would be groundbreaking if it were.
According to Google, that AI will roll out in the coming weeks as "an experiment", probably only for English-speaking users, but the implications of this technology are staggering.
Google News

The Google News Initiative has been around for a while, but Google wants to take responsibility and has completely renewed Google News to make sure that, again with help from AI, you get accurate, relevant, and personalized news. When you open the app, you see the five news items Google thinks you should see, and from there it digs deeper into your own preferences. Local news is also added, and as you use the app you can indicate what you want to see more or less of.
An important part of this are the new Newscasts: short previews of news stories that are automatically assembled from images, text, and possibly video, with help from AI. From there you can opt for "Full Coverage", which lets you view a story from different angles, because content about the news item is gathered from different publications and perspectives. Very useful if you really want to dive into a topic or want to know why a story is getting attention. Google calls it a true 360-degree view of a subject, from various trusted publications.
Google also wants to do something for the publications that produce all that news. You can follow publications and subscribe to them via Subscribe with Google. Because your payment details are already known to Google, subscribing is easy, and you can then read that publication on any device. Oh, and it is rolling out this week. Take that, Apple!
Android P

In Android, too, it is AI that sets the tone. Adaptive Battery uses artificial intelligence to learn which apps you use and when, shutting down everything you probably won't need, thereby saving your battery. If it works as well as Google says, it will prevent many battery crises.
The Android interface has also been given a new design that puts simplicity first. You no longer have the home screen you are used to, but a search bar and five apps the system thinks you want to open. You can swipe up to see all apps, and swipe left and right between the apps that are already open. Other small touches, like an automatic rotate button when you hold the phone horizontally and a redesigned volume control, help, but the new navigation and overall simplification are the main difference.
App Actions should make it easier for you to stay in your rhythm. If you always go running at a certain time and pick up your phone, Strava is ready to launch. If you plug in your headphones, you get a number of options, such as resuming the podcast or music you were listening to. Slices are another part of this: small pieces of an app that can surface when you are searching for something. If you type "Lyft", a Slice from the Lyft app can immediately show you what a ride home costs.
In addition to the digital wellbeing adjustments already mentioned, Google announced that the Android P beta will be available not only on Google's own phones, but also on those of a number of other manufacturers, such as OnePlus, Sony, Nokia, Vivo, Mi, and Oppo. So no Samsung or LG: they are apparently still too attached to their own versions of Android.
Waymo

Waymo was of course also featured, but the focus was mainly on the progress the company is making rather than on concrete plans, apart from continuing the tests currently being carried out. Waymo's self-driving cars have already covered 6 million miles on real roads (and 5 billion in simulation), and the neural networks are mainly trained to predict the actions of other road users. Even driving in bad conditions such as snow, rain, and storms keeps getting better thanks to advances in the cars' sensor equipment. It all sounds promising, but the self-driving car race is being run step by step, and it will continue for a while yet.