As a cloud provider that's had a later start than Amazon Web Services (AWS) and Microsoft Azure, it shouldn't be surprising that Google Cloud has been on a hiring spree. But it's notable, especially at the upper management level, that many of the recent hires have come from the ranks of established enterprise vendors. The latest is Gerrit Kazmaier, formerly of SAP, as head of Google's Data and AI Cloud. These new faces have provided the outside-in perspective that was long missing from the Googleplex.
Not surprisingly, there's been a subtle change in tone. Google has not necessarily abandoned the "Run like Google" messaging, which has always been core to the appeal of Google Cloud; it still comes out when Google discusses its approaches to zero-trust security, AI, and the global backbone that carries its traffic. But the pitch is also increasingly about meeting enterprises where they live, along with applying a touch of Google automation, such as the recently introduced database migration service that makes lift-and-shift migrations of MySQL and PostgreSQL more of a pushbutton experience compared to similar services from AWS and Azure.
Even if the underlying technology is unique, as with Cloud Spanner or the AI portfolio, Google is wrapping those services with more familiar onramps. Last week at NEXT, Google announced the preview of a PostgreSQL API for Cloud Spanner. Before this, enterprises looking to implement the service, which provides a distributed, globally consistent transactional database, had to learn a new platform. As Big on Data bro Andrew Brust covered in his data and AI roundup, Spanner now sports a PostgreSQL shell atop its unique storage engine.
Google's addition of a PostgreSQL API is not all that unique; AWS and Azure employ the same approach for Aurora PostgreSQL and Azure Database for PostgreSQL Hyperscale, respectively. In fact, using APIs for application-level compatibility with open source databases is becoming commonplace in the cloud. The emerging design pattern involves developing a canonical cloud-native storage engine, then applying APIs to make the database look familiar, replete with the same calls and data types. AWS employs this approach for Aurora, DocumentDB, and Keyspaces for Apache Cassandra; Azure uses it for Cosmos DB and PostgreSQL Hyperscale; Google is doing likewise for Cloud Spanner and Firestore.
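The design pattern described above can be sketched in miniature: a familiar, DB-API-style cursor interface layered over a completely different storage engine, so the calls developers already know are translated into the engine's native operations. This is an illustrative toy, not any vendor's actual implementation; all class and method names here are hypothetical.

```python
class KeyValueEngine:
    """Stand-in for a cloud-native storage engine with its own internals."""
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def scan(self):
        # Native operation: return all rows in key order.
        return sorted(self._rows.items())


class FamiliarCursor:
    """Exposes the execute/fetchall calls developers already know,
    translating them into the engine's native put/scan operations."""
    def __init__(self, engine):
        self._engine = engine
        self._result = []

    def execute(self, statement, params=()):
        verb = statement.split()[0].upper()
        if verb == "INSERT":
            key, value = params
            self._engine.put(key, value)
        elif verb == "SELECT":
            self._result = self._engine.scan()

    def fetchall(self):
        return self._result


cur = FamiliarCursor(KeyValueEngine())
cur.execute("INSERT INTO t VALUES (%s, %s)", (1, "alpha"))
cur.execute("INSERT INTO t VALUES (%s, %s)", (2, "beta"))
cur.execute("SELECT * FROM t")
print(cur.fetchall())  # [(1, 'alpha'), (2, 'beta')]
```

The point of the pattern is that the application-facing surface (statements, parameters, result shapes) stays familiar while the storage layer underneath is free to be something entirely different.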
Spanner provides a good example of how Google Cloud both differentiates and accommodates. Under the hood, Spanner has architectural differences that, for instance, dispense with capabilities such as stored procedures, requiring developers to put all of that logic in the application tier. Yet with the PostgreSQL API, the data types and commands should remain familiar, although at this point we don't know how much of PostgreSQL's PL/pgSQL will be supported.
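A minimal sketch of what moving logic to the application tier looks like in practice: a business rule (here, a hypothetical loyalty discount) that might otherwise live in a stored procedure is computed in application code, which then issues a plain parameterized SQL write. The function names and SQL shape are illustrative assumptions, not Spanner-specific code.

```python
def apply_discount(subtotal_cents, loyalty_years):
    """Business rule computed in the app tier, not in a stored procedure."""
    rate = 0.05 if loyalty_years >= 2 else 0.0
    return round(subtotal_cents * (1 - rate))

def build_order_insert(order_id, subtotal_cents, loyalty_years):
    """Return familiar parameterized SQL plus its parameters; the database
    only ever sees a plain INSERT, with the logic already applied."""
    total = apply_discount(subtotal_cents, loyalty_years)
    sql = "INSERT INTO orders (id, total_cents) VALUES (%s, %s)"
    return sql, (order_id, total)

sql, params = build_order_insert("ord-1", 10_000, loyalty_years=3)
print(params)  # ('ord-1', 9500)
```

The trade-off is the one noted above: the database stays simpler and more scalable, but every client of the data must route through application code that enforces the rule.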
Google is also meeting customers where they live with its open source database partnership program, which counts Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j, and Redis as members. While the managed DBaaS services of most of these providers are also available on AWS and Azure, on Google Cloud there is joint sales and support. The underlying message is that you don't have to change your relationship with your open source database provider to get support from Google Cloud. As we've noted in the past, we're hoping that Google Cloud also starts extending native integrations to Cloud Dataflow, BigQuery, and Vertex AI, services that could extend these databases into end-to-end solutions akin to, for example, Azure Synapse Analytics.
It shouldn't be surprising that, with leadership coming from the enterprise platforms space, there is increased emphasis on solutions. While Google is not unique in offering solutions, the chart above shows that the portfolio has grown vastly since the introduction of Contact Center AI a couple of years back.
Part of the solutions focus is exploiting synergies within the platform. This is a common challenge for all of the cloud providers, whose portfolios of services have inflated to the point of becoming practically overwhelming.
Data and AI are prime examples of where blended services could make customers more productive. Google's announcement of Google Earth Engine integration, with the ability to feed geospatial metadata to BigQuery, leverages analytics capabilities coming from the parent company.
But what about putting the pieces that already exist in the portfolio together as integrated or blended services? We dropped a hint earlier about extending the reach of the open source database partners to Google's various data, analytics, and AI services. Azure, SAP, and Oracle are already offering blended data warehousing cloud services extending from data transformation pipelines to AI and visualization. There's plenty of potential for tighter integrations with the operational databases that comprise much of the open source partner portfolio. On Google's part, there are too many obvious synergies between BigQuery and Looker, not to mention Dataflow and Vertex AI, to leave on the table.
With Looker, there's clearly a balancing act; to its credit, Google has not relegated Looker to being a captive service supporting only Google Cloud and Google data platforms. And yes, before the acquisition, BigQuery was already one of Looker's supported sources. At NEXT, we saw some of the steps toward tighter integration with other Google Cloud services, such as Connected Sheets, which can now be represented as a Looker Block in Looker's semantic layer. We'd like to see the same capability extend to pipelines designed in Cloud Dataflow, something you could probably do manually today.
Multicloud is another example of Google Cloud seeking to meet customers where they are. Admittedly, Google is not the only cloud provider making noises about running its control plane in foreign territory; Microsoft is promoting the same capability with Azure Arc. But Google, as a challenger to AWS and Azure (which got started earlier), is not surprisingly embracing multicloud on more fronts. It is saying to customers, "We know that you probably already have data in other clouds, so let's bring analytics to where it lives." At NEXT, it announced the general availability of BigQuery Omni, which can extend analytics across multiple clouds. We hope that it also provides a means of minimizing data egress charges from foreign territory.
Speaking of egress charges, we'd love to egg Google Cloud on: why not drastically slash egress charges, or get rid of them completely? That would further encourage AWS and Azure customers to take advantage of Google Cloud services without fear of paying a penalty.
Probably the most interesting multicloud announcement coming out of last week was the preview of Google Distributed Cloud. It is a hybrid cloud platform that can run on Google's network edge (at any of its 140+ points of presence globally); inside a telco operator's network; at customer edge locations such as factories or retail stores; or at home in the customer's own data center. Google Distributed Cloud is a managed cloud, but it can be managed by the customer, a Google partner, or Google itself. There are some parallels with AWS Outposts, AWS Wavelength, Oracle Cloud@Customer, IBM Cloud Satellite, and HPE GreenLake, although there are lots of differences in the scope of what each of these hybrid clouds offers.
While Google may not be unique in automating database migration, adding familiar PostgreSQL APIs, offering solutions, or expanding hybrid cloud strategies, the fact that it is no longer solely telling customers to change everything and do things the Google way marks a significant shift in tone.
Disclosure: Google Cloud is a dbInsight client.