I recently joined Chris Presley for Episode 5 of his podcast Cloudscape to talk about what’s happening in the world of cloud. My focus was the most recent events surrounding Google Cloud Platform (GCP).
Some of the highlights of our discussion included:
- BigQuery numeric data types
- Bigtable instance-level IAM
- Kubernetes 1.10 and Google Kubernetes Engine
- Stackdriver updates
- New Google regions
BigQuery had a big development recently with the availability of numeric data types.
Proper numeric data types matter to financial institutions because they allow aggregate calculations to be exact, and that was missing in BigQuery. The feature is now available in beta. This is a big deal for financial institutions that do a lot of analytics involving the aggregation of monetary values on BigQuery, and it should drive adoption — especially among customers who previously relied on convoluted workarounds to get accurate currency reporting, since they now have a direct way to handle numeric data.
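To see why an exact numeric type matters for aggregating money, here is a minimal sketch using Python's standard-library `Decimal` as an analogy for an exact decimal type like BigQuery's NUMERIC (the values are illustrative, not from the episode):

```python
from decimal import Decimal

# Summing a monetary amount many times: binary floating point accumulates
# rounding error, while an exact decimal type does not.
float_total = sum([0.1] * 10)             # binary float arithmetic
exact_total = sum([Decimal("0.1")] * 10)  # exact decimal arithmetic

print(float_total)  # 0.9999999999999999 -- not exactly 1.0
print(exact_total)  # 1.0
```

This is exactly the class of error that forces finance teams into workarounds (e.g., storing cents as integers) when a platform offers only floating-point types.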
Bigtable had an important announcement: instance-level IAM access management reached general availability. This was part of Google's long-term plan and had been in beta for months. Many organizations wanted it because it lets them apply more granular security control to individual instances within Bigtable and manage them accordingly.
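As a rough sketch of what instance-level access control looks like, here is the general JSON shape of a Cloud IAM policy granting one user data access to a single Bigtable instance (the user and the exact binding are hypothetical; `roles/bigtable.user` is a real predefined role for read/write data access):

```python
import json

# Hypothetical example: an IAM policy body of the kind you would attach to a
# single Bigtable instance, rather than to the whole project.
policy = {
    "bindings": [
        {
            "role": "roles/bigtable.user",           # read/write data access
            "members": ["user:alice@example.com"],   # hypothetical user
        }
    ]
}

print(json.dumps(policy, indent=2))
```

Because the policy is attached to the instance, Alice gets access to that instance only — not to every Bigtable instance in the project.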
Kubernetes had several announcements recently related to the release of Kubernetes 1.10 and the Google Kubernetes Engine.
One of the most notable features of this release is the availability of shared VPCs. Previously, each Kubernetes cluster ran in its own network, so to connect them you had to link every cluster to every other cluster, which was difficult, inefficient, and required a lot of resource overhead. Now shared VPCs are available. Although there are some performance considerations, this makes it very convenient for large organizations deploying production Kubernetes workloads with several teams and multiple tenants: they can share the physical network resource while still maintaining logical separation.
It used to be difficult to communicate between Kubernetes clusters and between projects. Now you can compartmentalize Kubernetes Engine clusters into separate projects while sharing common network resources across multiple teams.
Security was also a big concern, because everything was done at the organizational level and organizational administrators did not have much control over settings specific to individual projects. Now they can separate access to projects and data in a much more granular fashion, providing more reliability and an audit trail of which security permissions are granted to which users and services.
A key aspect of this is billing. With separate projects and separate Kubernetes Engine clusters, you can isolate each team's resource usage and understand what the billing will be for each individual project team and each individual tenant workload. You can then budget accordingly.
Many companies run separate applications as isolated workloads for their customers and need multi-tenant isolation for sensitive data. Rather than building a convoluted network-isolation architecture, they can use the shared VPC concept to isolate workloads while still sharing network resources.
This release also introduces regional persistent disks and regional clusters for higher availability. Regional persistent disks provide durable network-attached storage with synchronous replication of data between two zones in the same region, which was not available before. Previously you had to hack together a workaround; now you can simply configure it.
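To make the idea concrete, here is an illustrative sketch (project, disk name, and zones are hypothetical) of the kind of request body used to create a regional persistent disk through the Compute Engine API, replicated synchronously across two zones of one region:

```python
# Hypothetical example: a regional disk definition. The "replicaZones" field
# is what makes the disk regional -- its data is synchronously replicated
# across the two listed zones, so it survives a single-zone outage.
regional_disk = {
    "name": "k8s-regional-pd",
    "sizeGb": "200",
    "replicaZones": [
        "projects/my-project/zones/us-central1-a",
        "projects/my-project/zones/us-central1-b",
    ],
}
```

Note that both replica zones must belong to the same region, matching the "two zones in the same region" constraint described above.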
Another feature worth mentioning is the availability of custom boot disks. Previously, you could only use a standard persistent disk for your Kubernetes nodes; now you can choose SSD and other custom boot disks as well. This increases performance, especially for high-throughput workloads. I know of many use cases that want to run very high-throughput Kafka and similar environments on Kubernetes clusters, and this is ideal for them.
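As a rough sketch of what choosing an SSD boot disk looks like (names and sizes are hypothetical), here is the shape of a GKE node pool configuration that requests `pd-ssd` boot disks instead of the standard persistent disk:

```python
# Hypothetical example: a node pool definition of the kind sent to the GKE
# API. Setting diskType to "pd-ssd" gives each node an SSD boot disk,
# which helps I/O-heavy workloads such as Kafka brokers.
node_pool = {
    "name": "high-throughput-pool",
    "initialNodeCount": 3,
    "config": {
        "machineType": "n1-standard-8",
        "diskType": "pd-ssd",   # SSD boot disk instead of pd-standard
        "diskSizeGb": 200,
    },
}
```

The same choice can be expressed on the command line when creating a node pool with `gcloud`.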
Stackdriver also had a couple of important updates. Stackdriver is Google's monitoring service, and one of the things the team has been trying to do is improve its usability. They recently launched a couple of usability enhancements.
One of the key additions is the ability to manage alerting policies through the Monitoring API. This lets users create custom alerting conditions for resources monitored by Stackdriver, based on the metadata attached to those resources. It improves the flexibility of the metrics you can derive and report on.
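As an illustrative sketch (the display names, filter, and threshold are hypothetical), here is the general JSON shape of an alerting policy as the Stackdriver Monitoring API accepts it, alerting when CPU utilization stays above 80% for five minutes:

```python
# Hypothetical example: an alerting policy body for the Monitoring API's
# alertPolicies.create method. The filter selects the metric to watch and
# the conditionThreshold defines when the alert fires.
alert_policy = {
    "displayName": "High CPU on checkout service",
    "combiner": "OR",  # fire if any condition is met
    "conditions": [
        {
            "displayName": "CPU utilization above 80% for 5 minutes",
            "conditionThreshold": {
                "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
                "comparison": "COMPARISON_GT",
                "thresholdValue": 0.8,
                "duration": "300s",
            },
        }
    ],
}
```

Because the policy is plain API data rather than a UI-only setting, it can be versioned, templated, and applied programmatically across projects.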
Stackdriver previously offered only predefined templates for capturing and reporting monitoring metrics. Now, with custom alerting policies, it is much easier to define your own conditions and better understand the behavior of your application.
Another update is a beta version of a new alerting-condition configuration in the Stackdriver UI. This is a UI feature rather than an API feature. It lets you define alerting conditions more precisely and inspect the metadata of your logs and reports, enabling a broader set of conditions. It is a more powerful, complete way of identifying time series and specific aggregations, especially on log data, so you can alert accurately on aggregations of custom and log-based metrics. It also lets you filter on metadata to alert on very specific Kubernetes resources, for example.
The final thing I wanted to mention is that the UI also lets you edit metric threshold conditions that were created through the API.
These updates give Stackdriver users a lot more flexibility, letting them tailor monitoring to the specific application workloads and behavior they run on GCP.
New Google regions
Lastly, I want to mention that the Singapore region opened recently. This is significant because Google has long wanted more presence in the Asia-Pacific region, and many Google customers are excited to have a region in their own backyard. A new region has also been announced in Zurich, Switzerland, scheduled to open in 2019.
This was a summary of the Google Cloud Platform topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services) and Warner Chaves (Microsoft Azure) who also discussed topics related to their expertise.
Click here to hear the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.
Interested in working with Kartick? Schedule a tech call.