Prolific Learnings from Google I/O 2019

Kevin Miller, iOS Engineer

Google’s annual developer conference last week was a whirlwind of activity, announcements, and inspirational discoveries. As a Flutter enthusiast here at Prolific, I had a ton of fun exploring the possibilities ahead of us and testing new creations in the Google universe. Here’s what we learned at I/O:

Day 1

Google has centralized their highlights and a number of their mainstage talks here for your reference, and if you’re interested in perusing all of the sessions, you can view them here. Google obviously shapes the future of our industry, but it was striking to me in the keynote how much they shape the future of the world. I don’t think it’s an overstatement to say that they (and we) have the potential to affect the future of millions of people.

Below are some of my takeaways from the day and brief overviews of the sessions I attended.

Keynote Address

Don’t have time to watch the whole keynote? Here’s a quick summary video.

The Next Billion Users

In the keynote, several speakers mentioned the next billion users: the market that Google is trying to reach next. Spoiler alert: it’s not those of us in the Western world! Smartphone and internet saturation in the West is already high. The next billion users are people in countries where the internet is still developing and smartphone access is rare.

Products for Everyone

To reach the next billion users and people who are underserved or not served, we have to build products for everyone.  This means people from different countries and cultures, people with different abilities and impairments, people of different economic and educational backgrounds.  I was struck by a slide in the machine learning model live-coding session I went to. It said, “Behind these machines are humans.” It’s our job to include everyone.

Machine Learning (ML) and Artificial Intelligence (AI) are tools we’ll use to reach everyone.

I was looking back at Claire’s I/O summary email from 2017.  She said:

[With] TensorFlow Transformations Google made their intention clear: Over the past year, top minds at the company have focused on laying the groundwork -- both in terms of new hardware technologies and software innovations -- to improve their products through AI. Their approach of AI first is clearly going to change our world; I felt like I glimpsed the future!

Claire Lynch

That future that Claire glimpsed is here! There were incredible Machine Learning and Artificial Intelligence technologies announced in the keynote including:

  • Live captioning of video for people who are hearing impaired
  • Live conversational transcription and text-to-speech for people who are hearing impaired or speech impaired
  • Speech recognition for people with impaired or hard-to-understand speech
  • AI flood models to provide warnings to people in flood zones
  • Live translation and text-to-speech with the device camera

But make sure privacy is protected

Previously, all of the above would have meant sending data to a server, processing it there, and sending results back. Google has made a huge breakthrough that allows them to ship the ML models to the device itself so that all of the processing happens locally. The ML models can still be improved; however, the device calculates the improvements and sends them back to Google, without the raw data ever leaving the device.

If there were just one key takeaway, I think it’s this: internationalization, localization, accessibility, and privacy are not optional. They’re no longer things that can be tacked on to the end of a sprint “if we have time.” To build innovative, cutting-edge products that lead the industry, those products have to be built with everyone in mind.

Day 2

Today has me thinking about abstraction. As developers we rely on abstraction. I don’t have to understand all the underlying technology that allows my phone to display an image of a cute kitten. All of that logic has been abstracted away for me so that I can just write something like:

screen.display(image: "kitten.jpg")

A new level of abstraction is being introduced. You can see this in Flutter: I don’t have to write different codebases for different devices or even different operating systems. I can write one codebase that adapts to almost any device or OS, and much of the tedious logic is abstracted for me. You can see this too with all of the Machine Learning tools that Google is providing to developers: I don’t have to know how to code a machine learning model to use machine learning. Each level of abstraction that’s introduced leads directly to advances in technology, because we have more brainpower to devote to problem solving and creativity. It’s very exciting stuff!

Swift for TensorFlow

Swift for TensorFlow is an effort to remove barriers to machine learning development. Right now it’s in the early stages (version 0.3), but the goal is for every developer to be able to be a machine learning developer. The barrier to entry for complex machine learning tasks is pretty high right now; introducing a high-level, user-friendly language like Swift helps lower it.

Swift for TensorFlow is interoperable with the C++ and Python libraries that are popular with ML researchers and developers today, and it provides first-class differentiable programming.
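
To give a flavor of what that means, here’s a minimal sketch of differentiable programming in Swift for TensorFlow. It’s based on the early 0.x API, so take the exact names as illustrative rather than definitive:

import TensorFlow

// A plain Swift function marked as differentiable: f(x) = x² + 3x
@differentiable
func f(_ x: Float) -> Float {
    return x * x + 3 * x
}

// Ask the compiler for the derivative of f at x = 2.
// df/dx = 2x + 3, so this prints 7.0.
print(gradient(at: 2, in: f))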

Building for iOS with Flutter

I was tempted by this presentation to build iOS apps in Flutter rather than natively. It’s so fast and easy, and the result is nearly indistinguishable from native. It’s also easy to go from an app built in Flutter in the iOS (Cupertino) style to an Android (Material) style. As long as the app is designed thoughtfully, its UI elements can be switched from one style to the other based on which operating system is running it.

The Power of Looking Up, with Astronaut Mae Jemison

Amazingly inspirational! Mae Jemison was the first African American female astronaut to travel to space.  She is now the founder of the 100 Year Starship organization. Their goal is to make interstellar travel possible in a hundred years. This seems crazy, but her philosophy is, “The impact of technologies is proportional to their audacity and public accessibility.” The idea is certainly audacious! The public impact of space travel has already proven to be tremendous – from hurricane detection to medical advances.

Sheperd Doeleman from the Harvard-Smithsonian Center for Astrophysics was involved in capturing the first photograph of a black hole. He talked about how they used multiple telescopes all over the world, in conjunction with supercomputers, to create a virtual, Earth-sized telescope.

Designing for Accessibility

Speaker Elise Roy talked about designing for everyone. I loved her point that disability affects all of us. For most of us it’s only temporary, such as a broken leg.  Think about trying to open a door while carrying a large, heavy box. That’s disability, too. For some of us it’s not temporary. With this perspective it’s not “us” vs. “them” – it’s all “us.”

“It’s not about doing good, it’s about good design.”

Elise Roy

Here were some other key points:

1) Average is useless. Who is actually average? Designing for the “average” person leaves almost everyone out. Fighter planes that were designed for the “average”-sized pilot crashed more often because none of the pilots were actually average sized. Instead, planes are safest when designed for both the tallest pilots and the shortest pilots.

2) Designing for disability uncovers hidden needs and problems. Some examples: the remote control, audio books, and gesture recognition were all designed initially for accessibility purposes. Products designed for people with intellectual differences are simple and intuitive. Products designed for people with arthritis are easier for everyone to use.

3) People with a disability have unique skills; discover them!

If you’re interested in discovering more about organizations making huge leaps forward in designing for accessibility, check out Project Euphonia, a project started to bring speech recognition to people with hard-to-understand speech. Machine Learning models are typically trained on tens of millions of samples of standard speech. Browser-based and completely private ML training is allowing Project Euphonia to crowdsource non-standard speech samples to help bring speech recognition to everyone.

As Machine Learning becomes more accessible for developers, the creative applications begin to appear limitless. Take Yacht and Wayne Coyne (of the Flaming Lips), for example: they talked about how they use ML as a tool to create music. Yacht fed the entire catalog of their songs into a machine learning model to generate melodies, then fed lyrics from their favorite artists into another ML model to generate lyrics, which was super cool! You can discover more at g.co/magenta or g.co/magenta/groove.

Beyond Mobile: Building Flutter Apps for iOS, Android, Chrome OS, and Web

This was an awesome demonstration and discussion of taking a Flutter app (Developer Quest) and running it on iOS, Android, Chrome OS, and the web. I’m convinced that Flutter is going to provide a solution for many pain points, not the least of which is trying to manage and maintain parity between codebases for web, iOS, and Android. Flutter for web and desktop isn’t production ready yet, but it’s incredibly promising. Imagine if, instead of having three engineers (one for each platform) working separately, you could have all three working on the same codebase.

Augmenting Faces and Images

Amazing new improvements to ARCore were announced, including a tool called Augmented Faces, which will be available on iOS too! Augmented Faces uses ML (trained with faces from all over the world) and offline processing to allow facial tracking and mapping without a depth sensor on the device. This means we will be able to bring face mapping and AR to a wide variety of devices, not just the newest ones. They showed a live demo of it working perfectly on an iPhone 6s.

Augmented Images is another feature that will be useful: Google will allow you to register a database of up to 1,000 unique images that can then be tracked with the device camera. The SDK provides six-degrees-of-freedom (x, y, z, pitch, yaw, roll) pose info back to the device 30 times per second, allowing us to anchor our own UI in space. That’s how Google created the AR maps at the conference!
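
ARCore’s SDK is Android-first, but the idea of anchoring content to a recognized image will feel familiar to iOS developers from ARKit’s image tracking. Here’s a hedged ARKit sketch of the concept (the “Posters” resource group is a hypothetical asset catalog group), not ARCore’s own API:

import ARKit
import SceneKit
import UIKit

// Recognize registered reference images and anchor content to their pose.
final class ImageAnchorViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // Reference images live in an AR Resource Group named "Posters" (hypothetical).
        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages =
            ARReferenceImage.referenceImages(inGroupNamed: "Posters", bundle: nil) ?? []
        sceneView.session.run(configuration)
    }

    // Called when a registered image is recognized; the node follows the image's
    // full pose (position and rotation), so attached content stays locked to it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        node.addChildNode(SCNNode(geometry: SCNSphere(radius: 0.01)))
    }
}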

Day 3

Codelabs and sandboxes today! I had a lot of fun playing with some of the new technology discussed over the past two days. I integrated the Google Maps SDK in a Flutter project, incorporated ML Kit in an iOS app to detect and track objects with the camera, and did a basic “Hello, World” project in TensorFlow!
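
For a sense of how little code the ML Kit piece took, here’s roughly what on-device object detection looked like with the Firebase ML Kit SDK that was current at I/O 2019 (ML Kit has since been repackaged as a standalone SDK, so treat the names below as illustrative):

import AVFoundation
import FirebaseMLVision

// Detect and track objects in camera frames, entirely on-device.
final class ObjectTracker {
    private let detector: VisionObjectDetector

    init() {
        let options = VisionObjectDetectorOptions()
        options.detectorMode = .stream              // track objects across frames
        options.shouldEnableClassification = true   // also classify each object
        detector = Vision.vision().objectDetector(options: options)
    }

    // Called with frames from an AVCaptureSession delegate callback.
    func process(_ sampleBuffer: CMSampleBuffer) {
        let image = VisionImage(buffer: sampleBuffer)
        detector.process(image) { objects, error in
            guard error == nil, let objects = objects else { return }
            for object in objects {
                print("Object \(object.trackingID ?? 0) at \(object.frame)")
            }
        }
    }
}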

Playing with some of the AR demos in the AR sandbox was super cool. Check out the video of the AR mannequin – the application of the technology will be incredibly useful for some of our partners as they look to incorporate mobile devices into their in-store experience.

Additionally, there were a number of powerful sessions discussing more industry-shifting concepts:

Pragmatic State Management in Flutter

One of the tricky elements of Flutter development is state management. As Flutter evolves, so do best practices. This talk is full of the latest recommendations for keeping up with the cutting edge.

Building for the Next Billion Users

I highly recommend watching this video! As I mentioned earlier, billions more people will be gaining access to phones with apps, and this talk has a detailed discussion of best practices. When building apps for people in countries where devices may be old or internet connectivity is minimal, we have to be mindful of data usage and app size.

Three New Ways to Authenticate Your App with Firebase

I would highly recommend exploring Firebase Authentication; it allows many of the risky security elements of an app to be handled by Google. Security and privacy are a high-effort, high-risk aspect of app creation, and Firebase is a worthwhile solution for making sure users’ data and privacy are handled well.
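
As a small taste of how much plumbing Firebase takes off your plate, here’s a minimal email/password sign-in sketch with the FirebaseAuth SDK (just one of the sign-in methods Firebase supports; this assumes FirebaseApp.configure() has already been called at launch):

import FirebaseAuth

// Firebase handles credential verification, token refresh, and session persistence.
func signIn(email: String, password: String) {
    Auth.auth().signIn(withEmail: email, password: password) { result, error in
        if let error = error {
            print("Sign-in failed: \(error.localizedDescription)")
            return
        }
        if let user = result?.user {
            print("Signed in as user \(user.uid)")
        }
    }
}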

Final Thoughts

Reaching the next billion users was the pervasive theme throughout Google I/O this year. Google is building upon the creative applications of concepts discussed in years past (e.g., in 2017) to power their strategy for 2019 and beyond.

I’m incredibly excited to see how advances in technology and developer tools will free us to solve problems with creative solutions — for everyone around the world.