Your own personal Google
Google always pulls out all the stops for the keynote at its annual I/O developer conference, and this year was no exception. Google I/O 2018 lacked the flashy flagship hardware that defined previous keynotes—nary a new Chromebook, Pixel, or Google Home could be found—but it still managed to shine, thanks to some serious improvements to the software and services underlying the entire Google ecosystem.
Hardware’s nothing without software that tells it what to do, after all. And at I/O 2018, Google’s software was focused squarely on making the Internet more about you through the power of machine learning. Let’s dig in.
Gmail Smart Compose

Image by Google
Google CEO Sundar Pichai kicked things off with Smart Compose, which is basically Gmail’s Smart Reply cranked to 11. Whereas Smart Reply would scan your emails and intelligently offer buttons with quick one-click responses, Smart Compose uses AI to suggest complete sentences as you’re drafting an email. As you type, suggestions appear in faded grey text; hitting Tab accepts one.
“Smart Compose helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors,” Google says. “It can even suggest relevant contextual phrases. For example, if it’s Friday it may suggest ‘Have a great weekend!’ as a closing phrase.”
Smart Compose sounds like a serious timesaver if it’s anywhere near as effective in reality as it is in concept.
Google Photos

Image by Google
Machine learning is making Google Photos more useful in the coming months, too. When you’re looking at an image, you might see new prompts offering to fix its brightness, or fade the background to black and white to make the star of the picture pop. Get this: Google’s AI smarts will even be able to add color to old black-and-white pics.
Just as cool: if you snap a picture of a document, Photos will be able to create a PDF of it automatically, even if the shot was taken at an awkward angle.
Google Assistant

Image by Google
Google Assistant is evolving into your Google Assistant. A flurry of upgrades is coming to the AI helper, including the ability to choose from six different voices and, in the future, even a John Legend voice pack. New features let Assistant respond to natural conversations and parse complex multi-step queries. On phones, the app will be able to show you an overview snapshot of your day.
Smaller upgrades are also on the way, and third-party smart devices with screens will start rolling out with Assistant in July. Read up on all of it in our Google Assistant coverage.
Google Duplex

Image by Google
Speaking of phones, Google Assistant will even be able to call local businesses to schedule reservations for you, conducting complex conversations in real time using Google’s AI smarts and new voices. The machine sounded eerily human in an on-stage demo, complete with ummms and ahhhs in the middle of sentences. The recipients seemingly had no idea they were conversing with a robot.
Google Duplex’s debut was stunning. See it in action here.
Android P beta

Image by Google
Google didn’t reveal Android P’s final name at I/O, but it did launch the next-gen Android OS in developer preview beta form. A previous developer preview launched in March, but the beta version adds Android P features revealed at I/O 2018.
Android P is shaping up to be a substantial update for Google’s smartphone operating system, with new AI-powered features, a major navigation change, and a suite of tools aimed at curing smartphone addiction. Catch up on the newly announced features in our Android P beta coverage.
Google Maps

Image by Google
Continuing the theme of the day, Google Maps is getting an overhaul that uses machine learning to infuse your experience with personalized recommendations. A redesigned Explore tab and new For You tab will highlight local events and restaurants, drawing not only from physical locations but also from what you’ve liked in the past and from trending activities in the area. This summer, Google Assistant will come to Maps as well.
Google also showed off an ambitious future for walking directions in Maps. Tapping into computer vision and machine learning, Maps can create an augmented-reality Street View that overlays directions and business details on your screen in real time. Wild stuff.
Google News

Image by Google
Even Google News is getting in on the personalization action, with an overhauled News app and web presence that makes it easier to find the news that matters to you from the sources you trust. It’s emphasized most by a For You tab that appears when you open the app, but Google’s AI touches every aspect of the service.
That includes a “Full Coverage” section that attempts to give you a cohesive and broad view of any particular story by mapping out relationships between the people, places, and things involved, then organizing them into storylines with frequently asked questions and highlighted tweets from a variety of sources. Google says Full Coverage is “by far the most powerful feature of the app,” but there’s a lot more that’s new. Read up on it all in our article on the Google News redesign.
Google Lens

Image by Google
The entire point of Google Lens is to leverage the company’s strengths in machine learning and computer vision to provide you with more information about the world, but it’s getting even more useful soon.
A new smart text selection tool lets you copy and paste text captured with your camera. Even more useful, selecting a text snippet brings up information about the subject. “Say you’re at a restaurant and see the name of a dish you don’t recognize—Lens will show you a picture to give you a better idea,” Google says. “This requires not just recognizing shapes of letters, but also the meaning and context behind the words.” A fresh style match feature, on the other hand, can show you information about outfits or home décor you like, as well as products with a similar style.
But perhaps most significantly, Lens is being freed from the shackles of Photos and Assistant. Google’s technology will now come baked directly into the Pixel’s camera app, and cameras in (unspecified) devices by LG, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, and Asus.
Linux on Chromebooks

Image by Google
It didn’t make the I/O main stage, but in a follow-up post, Google revealed that Chromebooks are getting Linux support to help developers code on the browser-based laptops. A preview will be available for the Pixelbook soon. Per Google:
“Support for Linux will enable you to create, test and run Android and web apps… Run popular editors, code in your favorite language and launch projects to Google Cloud with the command-line. Everything works directly on a Chromebook.
Linux runs inside a virtual machine that was designed from scratch for Chromebooks. That means it starts in seconds and integrates completely with Chromebook features. Linux apps can start with a click of an icon, windows can be moved around, and files can be opened directly from apps.”
Waymo’s self-driving cars will take passengers for real

Image by Google
Alphabet’s self-driving car company, Waymo, sought to show off its safer side at the keynote. No doubt rival Uber’s self-driving technology failure, which led to the death of a pedestrian in Tempe, Arizona, in March, was top of mind.
CEO John Krafcik touted Waymo’s AI advances. Several years ago, for instance, Google’s deep neural networks reduced Waymo’s pedestrian-detection error rate by 100X. That sounds great, though digging into the numbers shows the error rate started at roughly 1 in 4, meaning it improved to about 1 in 400. Those numbers are surely better now, but we’ll see how it goes when the company launches a driverless transportation service in another Arizona city, Phoenix, later this year.