Google held their annual Google I/O event on Wednesday, May 10, 2023. As the flagship event of the year, Google I/O is an incredibly exciting chance to discover what’s on Google’s roadmap for Android and the rest of their technology ecosystem. And this year, artificial intelligence was at the forefront of the conversation, with a variety of announcements making it clear that AI is a critical part of Google’s technology strategy and showing just how pervasive the technology is quickly becoming across the industry.


Large Screens, Foldables and Connected Devices

Google confirmed that it continues to bet on integrating devices of different form factors into its ecosystem. With the growing adoption of tablets and foldables, Google announced both the Pixel Tablet and the Pixel Fold, joining brands like Samsung in the foldables market. This will likely drive further adoption of these form factors, and apps will need to provide better support for large screens, as well as for devices with different screen combinations like foldables. Google already provides guidance on supporting these types of devices, and the Google Play store offers a channel for featuring high-quality large-screen apps on the platform, as well as large-screen store listings that users will see when browsing the Play Store on one of these devices. These are great ways of increasing app ratings and getting your products in front of many more users.
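
As a rough illustration of what that guidance can look like in practice, here’s a minimal Compose sketch that switches layouts based on window size classes; the TwoPaneLayout and SinglePaneLayout composables are placeholder names of our own, not Google APIs.

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.material3.Text
import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

class MainActivity : ComponentActivity() {
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            // Recomputed on configuration changes, e.g. when a foldable
            // is opened or closed, or the window is resized.
            val windowSizeClass = calculateWindowSizeClass(this)
            when (windowSizeClass.widthSizeClass) {
                // Tablets and unfolded foldables get a richer two-pane layout.
                WindowWidthSizeClass.Expanded -> TwoPaneLayout()
                // Phones and folded devices keep the single-pane layout.
                else -> SinglePaneLayout()
            }
        }
    }
}

// Illustrative placeholders; a real app would put a list/detail pair here.
@Composable
fun TwoPaneLayout() = Text("List and detail side by side")

@Composable
fun SinglePaneLayout() = Text("List only; detail opens on a second screen")
```

Basing the decision on the available window size rather than the device type also covers multi-window mode and resizable desktop windows, which is why this is the approach Google’s large-screen guidance emphasizes.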

Connected devices like smartwatches are another high-growth technology, with Wear OS as the fastest-growing watch platform. Products like WhatsApp already offer a dedicated Wear OS experience, as announced in the keynote. With the help of declarative UI toolkits like Jetpack Compose, building UIs for watches and TVs has never been easier.
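
As a small taste of what that looks like, here’s a minimal sketch of a Wear OS screen built with Compose for Wear OS; the GreetingScreen composable is an illustrative name of our own.

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.wear.compose.material.MaterialTheme
import androidx.wear.compose.material.Text

// A simple watch-friendly screen: themed, centered text.
@Composable
fun GreetingScreen(name: String) {
    MaterialTheme {
        Box(
            modifier = Modifier.fillMaxSize(),
            contentAlignment = Alignment.Center
        ) {
            // Wear-specific Material components are drop-in replacements
            // for their phone counterparts, tuned for round displays.
            Text(text = "Hello, $name!")
        }
    }
}
```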


Bring the Power of AI to Your Apps

Many new AI features were announced for a variety of products and services across the Google ecosystem. The good news is that you will also be able to use the AI models that made this possible in your own apps and products, and tune or configure them for your specific needs. For example, the PaLM API will allow you to build generative AI applications for use cases like content generation and dialog agents.
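
As a rough sketch of what calling it could look like, the snippet below sends a text prompt to the PaLM text generation endpoint over REST; the endpoint path, model name, and request shape here reflect the public preview and are best treated as assumptions, so check Google’s current documentation before relying on them.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sends a single text prompt to the PaLM API and returns the raw JSON response.
// The v1beta2 path and text-bison-001 model name are assumptions from the
// public preview; substitute whatever the current docs specify.
fun generateText(apiKey: String, prompt: String): String {
    val endpoint =
        "https://generativelanguage.googleapis.com/v1beta2/models/text-bison-001:generateText?key=$apiKey"
    // In real use the prompt should be JSON-escaped; kept simple for the sketch.
    val body = """{"prompt": {"text": "$prompt"}}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create(endpoint))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // The response JSON contains one or more "candidates" with generated text;
    // parse it with your JSON library of choice.
    return response.body()
}
```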

Three particular models and corresponding APIs were announced that will be available to developers: Codey, which can generate code tailored to a specific codebase; Imagen, which generates images from text prompts; and Chirp, which transcribes speech to text that can be fed into other APIs or used to provide a powerful voice-based interface to users.

By leveraging these types of APIs, developers can offload much of the model-specific implementation work to Google and focus on building a best-in-class experience around these AI tools and services.


Conversational Experience Interfaces

Tools like Bard and ChatGPT are built around a user experience of interacting conversationally, interactively, and iteratively. And customers already turn regularly to chatbots within mobile apps and websites to help answer questions, resolve issues, and receive support.

As users grow more accustomed to conversational experiences like these, and as these experiences become more common, we expect more app interfaces to start trending in this direction as well. These design paradigms introduce new challenges to solve, but they bring users a unique experience that can substantially reduce friction, speed up common tasks, and improve accessibility.


User-facing AI

AI is becoming increasingly front and center in the user experience. In the past, AI was often used invisibly to power personalization features, such as recommending products or suggesting content. However, at Google I/O 2023, we saw a number of announcements that highlighted the company’s commitment to making AI more visible and engaging for users.

For example, Google announced a new “AI-generated wallpaper” feature that allows users to create personalized wallpapers based on their interests. And additions to Google Docs, Sheets, and Slides provide direct controls that explicitly insert AI-generated content into documents. This is a positive trend for products, as AI has the potential to make our lives easier, more efficient, and more enjoyable.

Bard Extensions

As part of the Bard segment of the presentation, Google briefly revealed Bard extensions, which will allow developers to create plugins that run within the Bard experience. A demo of Adobe’s Firefly integration showed how images could be generated by AI and seamlessly inserted into the Bard interface. Google also teased other integrations from Instacart, Indeed, Khan Academy, Spotify, and many more.

Documentation on building extensions is slim thus far, and we expect access will be locked down to specific partners to start with. But we’ll be on the lookout for more details, with the expectation that, as Bard continues to grow, building extensions for it will bring amazing opportunities to create “apps” for this brand new AI platform. We’re also excited about the potential to do the same for other AI-based platforms in the future.


AI Ethics

Google made it clear that they’re thinking extremely carefully about the core ethical concerns around AI. A couple of interesting commitments stood out: tagging Google’s AI-generated image content with identifying watermarks and metadata, and exposing controls within Google Search and Bard that will allow users to examine the validity and origin of content like images.

Google has stood by their belief that their approach to AI must be bold and responsible, and it’s clear that they take the “responsible” part seriously. AI-generated content can have a pervasive impact well outside of just the technology industry, and it’s reassuring to see Google taking a firm stance and putting measures in place to protect the integrity of information and facts.

Even a mid-show segment about their Google I/O Flip game included a disclaimer that Google worked directly with artists to ensure their card game visuals were generated by “using AI responsibly”. It may not be a stretch to imagine a future where “nutrition sticker” disclosures like this could be found on other types of content, guaranteeing that content is authentic and/or generated “ethically”.


Flutter

We’re always eager to see the developments around Flutter, and it’s clear Google is investing substantial effort into continuing to improve Flutter’s web support. Our team recently completed a build of a web-first Flutter app, and building and deploying it was a smooth and empowering experience. We’re excited to see this support continuing to mature and become more production-ready, and for web to become more of a first-class citizen in the Flutter platform.


Next Steps

We’ll be paying careful attention to the upcoming sessions and announcements from Google, and are also eager to dig into their documentation, previews, and Labs experiments to fully explore everything coming down the pipeline. We look forward to connecting with our clients soon to help map out their strategies. We’re also eager to chat with our partners and friends in the community about your roadmaps for the coming year!


(This article was proudly written by 100% real humans!)


