Going into Google NEXT, I expected announcements across the entire cloud business. Between their recent (positive) financials, customer traction and (more importantly) growth, and a rapidly expanding list of offerings and managed services, I expected announcements to focus on what I view as their areas of differentiation: foundational built-in security by design, commitment to open-source technologies both in their own cloud and across clouds, core AI and ML solutions and services that are embedded and integrated across GCP services, and easy-to-use cloud-native collaboration products in G Suite. While I’ll let my colleagues share their thoughts on security, mobility/devices, and dev/ops, I’ll be focusing my recap on analytics, artificial intelligence, and machine learning.
Google is very clearly committed to empowering organizations to do more with their data and bring more intelligence to it, whether leveraging Spanner as a globally distributed database, BigQuery as a fully managed core data analytics service, or TensorFlow as a massively popular open-source framework for machine learning.
Google continues to have a clear focus on eliminating the worry of infrastructure – organizations should be focused on their most valuable asset, their data. And as organizations continue being told that they must embrace AI to remain competitive, data must first be ingested, prepared, and processed before AI/ML can be applied. BigQuery already helps solve that problem, but the first announcement that really caught my eye was BigQuery ML. BigQuery ML extends the technical and economic value of base BigQuery to machine learning. Now organizations can build machine learning models directly in BigQuery using simple SQL, enabling the already empowered business analyst or IT generalist to do even more. Other announcements (not an exhaustive list) that I found interesting and impactful:
- BigQuery connector to Sheets – Query BigQuery data directly from a Google Sheet. This will unlock an easier way for business analysts to organize query results and, more importantly, share insights.
- Data Studio Explorer update – Single-click dashboard creation directly from BigQuery, which means you can quickly visualize results. The cool part here is that if you don’t like the visualization, you can simply click again.
- BigQuery GIS (Geographic Information Systems) – If you’re doing geospatial analysis in SQL, BigQuery has added new functions and support for the SQL/MM Spatial standard’s data types.
- Dataflow Streaming and Python – Dataflow will now leverage a new streaming engine that decouples compute from state storage, enabling more responsive autoscaling, along with support for developing pipelines using Python APIs.
- Dataproc Autoscaling – Automatically scale Hadoop and Spark clusters based on incoming requests.
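To make the BigQuery ML announcement above concrete, here is a minimal sketch of what “training a model with plain SQL” looks like. The dataset, table, and column names below are hypothetical, and the statement-building helper is my own illustration; `logistic_reg` was one of the model types available at launch. When this SQL is executed against BigQuery, training runs entirely inside the service.

```python
def create_model_sql(model_name, model_type, training_query):
    """Assemble a BigQuery ML CREATE MODEL statement. The SELECT supplies
    the training data; the column aliased 'label' is what the model predicts."""
    return (
        f"CREATE MODEL `{model_name}`\n"
        f"OPTIONS(model_type='{model_type}') AS\n"
        f"{training_query}"
    )

# Hypothetical churn-prediction example (dataset/table/columns are made up).
sql = create_model_sql(
    "mydataset.churn_model",
    "logistic_reg",
    "SELECT churned AS label, tenure, plan FROM mydataset.customers",
)
print(sql)
```

An analyst would submit this statement through the BigQuery console or a client library, then use the model in subsequent `SELECT` queries for prediction – no separate ML infrastructure required.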
Internet of Things
While GCP already provides a comprehensive set of services for storing, processing, and applying machine learning to IoT data, Google recognizes that, to handle the real-time needs of IoT, software and hardware need to be available at the edge, or even better, on the devices themselves. To that end, there were two announcements that I found particularly compelling.
First is Cloud IoT Edge, which enables organizations to take GCP software and intelligence from the cloud out to the edge. There are a few components that make up Cloud IoT Edge. Edge IoT Core is focused on securing the connection and exchange of device data between edge devices and the cloud, while Edge ML helps with running inferences on edge devices using pre-trained TensorFlow Lite models. The last component, Edge TPUs, consists of freshly announced ASIC chips purpose-built not just to collect data on edge devices, but to let organizations run TensorFlow Lite ML models on them too. In other words, real-time sensor data collection and analytics. The interesting piece about Edge TPUs is that they are tiny – you can fit up to four of them on a penny – meaning a small power footprint. Coupled with high performance, you get an offering with a currently unmatched (based on what I’ve seen in the market) performance-per-watt value point.
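Part of how edge hardware like the Edge TPU earns its performance-per-watt is by running models whose weights have been quantized from 32-bit floats down to 8-bit integers, as TensorFlow Lite’s post-training quantization does. This is only a toy sketch of symmetric 8-bit quantization – not Google’s implementation – but it shows the core idea: map each float weight onto a small integer grid via one scale factor.

```python
def quantize(weights):
    """Symmetric linear quantization: map floats into [-127, 127] int8 range.
    One scale factor covers the whole tensor (a simplification)."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

# Toy weight vector: the largest magnitude (1.27) pins the scale.
weights = [0.5, -1.27, 0.0]
q, scale = quantize(weights)
```

Integer arithmetic is far cheaper in silicon than floating point, which is why purpose-built ASICs running quantized models can deliver strong inference throughput in a small power envelope.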
Artificial Intelligence and Machine Learning

We’ve heard the challenges with AI and machine learning, and our research supports it: the cost and complexity of the infrastructure, a lack of data scientists and trained IT staff to support the infrastructure and workflows, and a need for customized solutions tailored to use cases specific to your business. Most of this boils down to a need to simplify, or as GCP puts it, a need to democratize AI – to make AI easy, fast, and useful for customers and partners. And they’re putting their money where their mouth is with an extensive list of announcements across their AI Platform, AI Building Blocks, and AI Solutions.
The AI Platform is focused on providing ways for organizations to build and flexibly deploy high-performing machine learning models. This incorporates everything from Kaggle and Kubeflow to Cloud TPUs. One of the stats during the keynote I found impressive was the growth in Kaggle’s user base – 2 million members with access to 7,000 data sets to share best practices, code, and data, and learn from peers. With Kubeflow, organizations can deploy ML workflows on Kubernetes on hybrid infrastructures, and the team announced improvements to its UI, as well as monitoring and reporting enhancements. And while TensorFlow is an obvious staple of not just GCP but the industry, alternative frameworks for training and prediction on Cloud ML are being added: XGBoost and scikit-learn.
AI Building Blocks are where GCP’s goal of democratizing AI really shines, providing a suite of pre-trained ML models that enable organizations to easily leverage AI technology within their own applications. I could write a 10-page paper on this, but with a goal of being somewhat brief, this incorporates everything having to do with AutoML, including Vision, Natural Language, and Translation. Vision is focused on gaining insights from images and videos, Natural Language helps find meaning in unstructured text with the ability to work across languages, and Translation enables organizations to customize translation models by factoring in domain-specific terms, jargon, or slang.
AI has a mass-customization problem, and AutoML looks to simplify the consumption of AI by providing these services to customers. You can think of AutoML as an ML model that creates ML models, enabling those without ML experience to benefit from the technology.
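The idea of “a model that creates models” can be demystified with a deliberately tiny, stdlib-only sketch of automated model selection – a much-simplified cousin of what AutoML actually does (which includes full neural architecture search). Everything here is illustrative: two candidate model families are fitted, and the system, not the user, picks the winner by validation error.

```python
def fit_mean(xs, ys):
    """Candidate 1: constant model that always predicts the mean of y."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Candidate 2: least-squares straight line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return lambda x: a * x + (my - a * mx)

def mse(model, xs, ys):
    """Mean squared error of a fitted model on a data set."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(train, val, candidates):
    """Fit every candidate on the training set and keep the one with the
    lowest validation error -- the 'automation' is this search itself."""
    fitted = [(name, fit(*train)) for name, fit in candidates]
    return min(fitted, key=lambda pair: mse(pair[1], *val))

# Perfectly linear toy data (y = 2x + 1), so the line should win.
train = ([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
val = ([5, 6], [11, 13])
name, model = auto_select(train, val, [("mean", fit_mean), ("linear", fit_linear)])
```

The user supplies only data; the choice of model comes out of the search. Scale that search up across architectures and hyperparameters and you get the flavor of what AutoML offers customers.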
Finally, with AI Solutions, organizations gain pre-built solutions or reference architectures based on common use cases that enable them to leverage AI in their workstreams. The big announcement was Contact Center AI, a new solution that incorporates Dialogflow (and all of its new features/functionality) to help live agents by providing real-time analytics, based on natural language processing of end-user requests or questions, in a contact center or support center environment.
Between the financial results, customer traction, and announcements (over 100 across all of their offerings/services), the GCP vision is coming to fruition. And I say “coming” in a positive way, because the team has made an incredible jump in just the last few years. According to GCP, it’s not all about where they land in the competitive cloud market. While that’s important, the market size for all of IT is in the hundreds of billions of dollars, and cloud adoption makes up around 10% of that. In other words, there is so much opportunity to go around right now that it’s more about recognizing the value of cloud than about who has a better cloud.
Every organization is becoming a data organization. They want to derive value from their data with easy-to-use tools and services that empower the entire business. And with Google’s focus on an arsenal of integrated services anchored in open-source technology, infused with security and AI/ML, and extended across cloud-based collaboration tools, they are set up for success.