I recently joined Chris Presley for his podcast, Cloudscape, to talk about what’s happening in the world of cloud-related matters. I shared some of my observations from the recent Google Cloud Next ’18.
Topics of discussion included:
- Google for Enterprise focus from Keynote speeches.
- GKE On-Prem
- Cloud Functions – finally GA
- Serverless containers on Cloud Functions (early preview)
- BigQuery:
  - Table clustering
  - BQML
Google for Enterprise focus from Keynote speeches
Google Next is a huge congregation of Google folks and customers who come together to discuss what’s next on Google Cloud Platform. It has experienced tremendous growth over the last few years, with nearly double-digit growth in the total number of attendees.
This year, the conference had more than 20,000 attendees who came to learn about the new technologies and new ways of using GCP that have been coming into the market.
After hearing all about this at the show, it seems that GCP is making the right bets – the right technology choices and the right platform choices. It was very heartening to hear the major announcements from Google, especially those geared towards an integrated ecosystem of cloud applications.
There were two major themes I took away from Google Next. One was the branding theme of “Made Here Together,” focused on developers, and the other was the enterprise focus.
Because Google uses a lot of open-source technologies, they aim to reach a very involved tech community that contributes to the core open-source projects Google uses to further its cloud platform.
Essentially, they are catering to the demands and needs of the developer community and they want to bring the developer ecosystem and the tech ecosystem into the cloud platform so it is a collaborative journey for many different customers. This was definitely one of the prominent themes of the event.
The second thing that was very interesting was Google being enterprise-ready. Previously, there was a perception that Google was not as friendly to the enterprise in terms of enterprise functionality and features that were traditionally required by most organizations.
One of the key things this year was Google really promoting the story of how Google is enterprise-ready and how Google is an enterprise company, as well. They also hosted different large enterprises who talked about how GCP has been adopted within their organizations, showcasing the pedigree Google now has in terms of enterprise adoption.
This was a huge focus – that enterprises need not be worried about Google’s enterprise-readiness and they can adopt Google Cloud Platform.
GKE On-Prem
Google Kubernetes Engine (GKE) On-Prem was the biggest highlight among the many announcements and one of the things I was most excited about. Kubernetes has seen very wide adoption in the enterprise for container orchestration. The open-source Kubernetes project was already available in on-prem environments, but now Google allows GKE itself to be hosted and used within your on-prem environment, as well. If you use container orchestration within your on-prem environment and want to move to the cloud, you can do that seamlessly.
There are a couple of major components to this announcement which tie the whole ecosystem together.
The first is that these services are now available through what they call the “Cloud Services Platform,” which is essentially an integration of the various cloud services and open-source technologies with Google infrastructure, operations and security.
This lets customers improve performance and reliability, as well as maintain governance through a single portal and a single management interface, controlling and benefiting from the open ecosystem. Now that you have GKE on-prem and on the cloud, you have a single management mechanism to manage all these services. You have the governance to successfully deploy your containers and have a very good orchestration play in place.
The second part of this is that we already know the benefits of containerization and of Kubernetes. One of the key things was the launch of an open-source project called Istio. Istio addresses the gap in governance, management and communication capabilities among different microservices. When we talk about containers or individual microservices, we always talk about the number of containers that might be out there, the proliferation of different microservices, and the whole API lifecycle and its management. The problems of managing one overall system decrease, but the problems of managing many individual microservices increase: we have to concentrate on the life cycle of each individual microservice and on how they communicate with each other.
Istio fills this gap. It is an implementation of the service mesh pattern – rather than a proprietary implementation, it is a very Google-y, open-source version of the service mesh, built in collaboration with IBM and, I believe, Lyft as well. They are fostering this open-source project, and it will be available on Google Cloud Platform. I believe they have already released a 1.0 version of Istio to the open-source community, and it is going to be integrated and launched in the Google Cloud Services Platform, as well. All of this brings a very powerful set of capabilities together – you have Kubernetes running on-prem and in the cloud, you have Istio stitching together all the services that Kubernetes is running, and all of it is managed through a single Cloud Services Platform interface.
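To make the service-mesh idea concrete, here is a minimal sketch of an Istio traffic-routing rule, assuming a hypothetical `reviews` service with two versions; the host and subset names are made up for illustration:

```
# Hypothetical Istio VirtualService (Istio 1.0, v1alpha3 API):
# send 90% of traffic to v1 of the service and 10% to v2,
# without touching the application containers themselves.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

This kind of weighted routing between microservice versions is exactly the inter-service communication management that the mesh layer takes off the application's hands.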
This was a very big announcement because it allows you to run your workloads seamlessly across on-premises and cloud environments. It doesn’t matter where you are; you can run the same containers that you run in the cloud and have the same capabilities. This dramatically changes the game in terms of containerization options and the adoption of Kubernetes orchestration across enterprises.
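The portability argument above can be sketched with a hypothetical Kubernetes Deployment manifest; the application name and image are made up, but the point is that the same manifest can be applied unchanged, with `kubectl apply -f`, to a GKE cluster in the cloud or a GKE On-Prem cluster:

```
# Illustrative only: a hypothetical Deployment; the same YAML works
# against any conformant Kubernetes cluster, cloud or on-prem.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: gcr.io/my-project/web-frontend:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```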
Cloud Build
Cloud Build is something that was already present within the GCP environment and has now reached general availability. One of the key things about Cloud Build is that previous versions did not integrate well with GitHub, but now it does, and it can fetch your code repositories from there.
Now you can use Cloud Build for your end-to-end build process, including fetching code from repositories and running container builds, and it works closely with GKE to build and deploy your containers.
From a continuous-integration perspective, this is actually a very important announcement. It lets you build your containers and container artifacts, integrates with a wide variety of code repositories in the developer ecosystem, and gets everything deployed in an integrated fashion on GCP.
Previously, you had to perform several manual steps; now those steps are no longer required because the integration works out of the box.
This is a pretty cool announcement, from my perspective. It removes a lot of the friction that developers used to face in the GCP environment.
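As a sketch of what that out-of-the-box pipeline looks like, here is a hypothetical `cloudbuild.yaml` that builds a container image from the fetched repository and rolls it out to a GKE deployment; the app, cluster and zone names are illustrative:

```
# Hypothetical Cloud Build config: build, push and deploy a container.
steps:
  # Build the image from the repository's Dockerfile.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
  # Update the running GKE deployment to the new image.
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['set', 'image', 'deployment/my-app',
           'my-app=gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
# Push the built image to the container registry.
images:
  - 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
```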
Serverless containers on Cloud Functions (early preview)
The topic of serverless containers goes back to our GKE discussion. The foundation for this is the Knative initiative that we talked about. Essentially, you can now focus mostly on your functions and then containerize and run them, whether on-prem or in the cloud, in an automated way.
Having a one-step deploy that puts your functions automatically into a container and runs them on whatever platform you choose is actually a pretty big deal. One key thing to remember is that Cloud Functions has always used some kind of container underneath to execute your code, but because it was provided as a managed service, there was very little control over, or possibility of using, different workloads or environments.
Now, serverless containers allow you to run these functions in different kinds of container environments. It gives you a bit more flexibility in terms of the environment your function runs in. It’s still limited, but having the option of some customization of the containers running your cloud functions is actually pretty phenomenal. This essentially allows newer ecosystems, newer development stacks and newer technologies to be used.
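The unit being containerized here is just an ordinary function. As a minimal sketch (the function name and the fake request class are made up for illustration), an HTTP-triggered function is a plain handler that takes a Flask-style request, which is why it can be packaged into whatever container environment you choose:

```python
# Hypothetical sketch of an HTTP-triggered Cloud Function handler.
# The same handler code could be wrapped into a custom container image.
def hello_http(request):
    # `request` is Flask-like; here we only rely on its .args mapping.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"


class FakeRequest:
    """Minimal stand-in for the request object, for local testing."""
    def __init__(self, args):
        self.args = args


print(hello_http(FakeRequest({"name": "GCP"})))  # prints: Hello, GCP!
```

Being able to exercise the handler locally like this, and then hand the same code to the platform for containerized execution, is the developer experience the serverless-containers preview is aiming at.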
BigQuery: table clustering
BigQuery had a very important announcement regarding table clustering. Whenever we talk about BigQuery, customers have been concerned about how performant it would be. Whenever you had to use BigQuery in a somewhat traditional RDBMS manner, there was always this big question.
One of the key things Google announced was table clustering, which is very similar to traditional database table clustering. One thing to remember is that, at the moment, it is available only on partitioned tables in BigQuery. It brings to BigQuery the same capabilities that table clustering provides in traditional databases, so there are definitely going to be some significant performance improvements.
One of the best practices in BigQuery was to avoid joins as much as possible and to denormalize data as much as possible. But sometimes, when you are migrating traditional workloads and environments, or even creating data warehouses on BigQuery, there is a need for some level of joins, some level of normalization, some level of fact tables, for lack of a better term.
In those cases, BigQuery’s performance was a little lacking, to be honest, and this is one of the biggest reasons table clustering was introduced – that functionality was not as mature as in some of the traditional database engines. Having this capability now really increases the overall performance benefit for traditional queries. This is a very big announcement, though there is still a ways to go for BigQuery to become more performant for some of these big data warehouses and the ways their schemas were created.
This definitely has a lot of impact for people who do a lot of schema design on BigQuery. Now, knowing that table clustering is available and how it works, we can make our schemas and SQL queries even more efficient and performant. This gives us more options for higher levels of optimization in how BigQuery is leveraged.
We were waiting for this for a while. We always wondered whether BigQuery would actually do this, and now that they have, we know there are more paths forward to make it more robust and performant compared to some of the other data warehouse tools out there.
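As a sketch of what this looks like in practice (the dataset, table and column names are hypothetical), a clustered table is defined on top of a partitioned one, and queries that filter on the clustering columns can then prune the data they scan:

```
-- Hypothetical example: a date-partitioned table clustered on two columns.
CREATE TABLE mydataset.events
PARTITION BY DATE(event_time)
CLUSTER BY customer_id, event_type
AS
SELECT * FROM mydataset.raw_events;

-- A query filtering on the partition and clustering columns
-- can scan far less data than a full-table scan:
SELECT COUNT(*)
FROM mydataset.events
WHERE DATE(event_time) = '2018-08-01'
  AND customer_id = 'c-123';
```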
BQML
BQML is a very nuanced, niche product announcement. At the same time, I think it is phenomenal, because you can now embed your machine learning models within SQL and run them on BigQuery. You don’t need a third-party tool or a separate application on GCP to run some of your ML models; you can do it in SQL. That is a very big announcement for developers who use a lot of ML tools. Traditionally, you would use some kind of query or data extraction to bring the data over, create your models, do your training and then get the results. Now you can do it all in BigQuery SQL.
You don’t have to go through a number of different steps and then leverage the BigQuery engine just for the data extraction and data manipulation.
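To illustrate the train-and-predict flow entirely in SQL (the dataset, table, model and column names below are made up), a BQML model is created with a `CREATE MODEL` statement and then queried like any other table:

```
-- Hypothetical BQML example: train a logistic regression model in SQL.
CREATE OR REPLACE MODEL mydataset.churn_model
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  churned AS label,   -- the column BQML will learn to predict
  tenure_months,
  monthly_spend
FROM mydataset.customers;

-- Score new rows with the trained model, still in SQL:
SELECT *
FROM ML.PREDICT(MODEL mydataset.churn_model,
                (SELECT tenure_months, monthly_spend
                 FROM mydataset.new_customers));
```

At launch, BQML supported linear and logistic regression models, which already covers a lot of the common prediction workloads people were previously exporting data for.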
Learn more about Pythian’s services for Google Cloud Platform.
Interested in working with Kartick? Schedule a tech call.