The latest news from Google Cloud Platform


I joined Chris Presley in an episode of one of his new podcasts, the Cloudscape Podcast, to share the latest news taking place around Google Cloud Platform (GCP).

Some of the highlights of our discussion were:

  • Google managed services – SAP HANA
  • New Google Storage Services
  • Google’s Partner Interconnect
  • Kubernetes updates
  • Sole-Tenant Nodes on Google Compute Engine
  • Dataflow’s new Streaming Engine


Google Managed Services – SAP HANA

Google announced a big partnership with SAP at Google Next 2017, and has since put in a great effort to make Google Cloud more friendly for SAP-type workloads.

We’ve seen some things come out during the past year, but now I feel like there is really a lot of investment in that space. There have been rumors of a managed SAP service in the works, with a few articles appearing in the news.

The interesting thing is that it’s not just about the managed services. When you read more about it, you will see that Google will be certifying specific VMs for SAP HANA-type workloads. You’re also going to be getting new integrations with G Suite. Google has even rolled out the new ultramem-160 machine type, which has approximately four terabytes of RAM and 160 virtual CPU cores.

In the future, I see them really strengthening their Google SAP alliance. This is just one example of how Google has been investing in strong technology partnerships.

New Google Storage Services

Google is rolling out a shared file system storage service, similar to Amazon’s EFS, called Cloud Filestore. So we’re now going to have a managed NFS service on GCP that doesn’t require running our own GCE VMs.

Right now the service is still in beta, and it hasn’t officially been launched. They will be rolling out the beta over the next few weeks.

I’ve already signed up for it, so hopefully I’ll get to play with it soon. It does have some limitations, such as a maximum of 64 terabytes. I imagine they’ll probably charge you on a gigabytes-per-month rate, but it is a very high-performing NFS. They’re claiming 700 megabytes per second at 30,000 IOPS for the premium tier, which is impressive.

It’s only going to be in a few regions at first, and it is only coming in an NFSv3 “flavor” for now. So it’s not quite primed for the Windows world yet, but knowing Google, they will roll that out sometime shortly after.
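Since it speaks plain NFSv3, mounting a share from a Linux client should look like any other NFS mount. A minimal sketch, where the instance IP (10.0.0.2) and share name (vol1) are hypothetical placeholders:

```shell
# Install the NFS client tools (Debian/Ubuntu example)
sudo apt-get install -y nfs-common

# Mount the Filestore share -- 10.0.0.2 and vol1 are placeholder values
sudo mkdir -p /mnt/filestore
sudo mount -t nfs 10.0.0.2:/vol1 /mnt/filestore
```

After that, the share behaves like any local directory shared across your GCE clients.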

On another note, Google has finally moved their Transfer Appliance into GA, but only in the US. If you’ve heard of this before, which you probably have if you’re an enterprise with a lot of data, it’s pretty much an appliance that gets shipped to your data center. You can fit it in a 19-inch rack, and it comes in 1U or 4U form factors, I believe. Capacities are 100 terabytes or 480 terabytes.
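To see why an appliance makes sense at these capacities, here is a back-of-the-envelope sketch (assuming an idealized, fully saturated 1 Gbps link with no overhead) of how long the same transfer would take over the network:

```shell
# Rough, idealized math: moving 480 TB over a saturated 1 Gbps link
capacity_tb=480
bits=$((capacity_tb * 1000000000000 * 8))   # total bits to move
secs=$((bits / 1000000000))                 # seconds at 1 Gbps
days=$((secs / 86400))                      # whole days
echo "${days} days"                         # prints "44 days"
```

A month and a half of saturating a 1 Gbps uplink, in the best case, versus shipping a box.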

Apparently, Google is launching a new region in Los Angeles where they’re really trying to target the media and film entertainment industry. I believe this is the first cloud region from any service provider being rolled out in LA. I understand they want to get closer to that market, and being within the city can really give them that edge. Customers there will get much lower latency and really high-performance, massive-scale storage.

Google’s Partner Interconnect

Partner Interconnect was also announced a few months ago. Again, this is Google working with its technology partners and a lot of the service providers, actually the top carriers in North America and Europe, and some in Asia as well. A lot of times you’d speak to a customer who wouldn’t have the ability to connect directly to GCP, mainly because they’re too far from a point of presence or it’s just not technically feasible for them.

But they may have something like an MPLS-based WAN connecting all their different branch offices together in a hub-and-spoke topology, which is where Partner Interconnect comes in. Instead of having to rearrange your network topologies, especially your WAN topology, if you already have your contracts in place with a top-tier service provider such as Verizon, you can simply sign up for the service and tell them, “I want to extend my network into GCP,” and into a specific region.

You give them the required information and they set it up for you. Before you know it, you have a direct, high-bandwidth, low-latency connection into the cloud. The nice thing about it is that you’re relying on a service provider’s network. You get all the bells and whistles of HA, the reliability and resilience of an MPLS network, and direct connectivity to your cloud estate.

There’s an initial setup fee, and there’s a recurring monthly fee that you have to pay to Google and, depending on your service provider and how they have things set up, there may be another fee in there.

Then there are the per-gigabyte fees. It’s not the most cost-efficient; I think Dedicated Interconnect is a little cheaper. Again, it really depends which POP you’re going through, but it’s definitely easier for you if you already have that infrastructure in place.

Kubernetes updates from Google

The latest from Google and Kubernetes is regional clusters going into GA. You can now roll out multi-master clusters in a single region, with your master nodes living in every single zone within that region. You just tell GKE, “I want a regional cluster in us-east4,” for example. The service will spread the control plane as well as your worker nodes across the whole region.
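The gcloud invocation for this is short. A sketch, where the cluster name is a hypothetical placeholder; the key point is passing `--region` instead of `--zone`:

```shell
# Create a regional GKE cluster -- "my-regional-cluster" is a placeholder name.
# Using --region (rather than --zone) spreads masters and nodes across the
# region's zones.
gcloud container clusters create my-regional-cluster \
    --region us-east4 \
    --num-nodes 1   # nodes per zone, so three zones means three nodes total
```

Note that `--num-nodes` is per zone in a regional cluster, so node counts multiply by the number of zones.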

You get resiliency against single-zone failures, and you don’t get any downtime during master upgrades, which is amazing because you have that federation across multiple zones. One zone’s master can go down fully and you still have the other two zones running, so you get rolling upgrades with continuous uptime.

It’s a great combination that goes with last month’s announcement of regional disks. Now if you have persistent volumes that you want to have attached to your cluster, these will be available on a regional scale. I think that Google has definitely hit it out of the park with this one in terms of HA and Kubernetes HA.

Sole-Tenant Nodes on Google Compute Engine

This one is probably something that you have seen around other cloud providers before. One thing I have always liked about Google is that their infrastructure is unique. They pretty much build their own machines and their own racks, and now customers can rent one of those physical servers, which is amazing. That’s pretty much what sole-tenant nodes are: you rent a node, which is basically a server in one of Google’s data centers.

You pay only for what you use: billing is per second with a one-minute minimum charge, so you can use a node for an hour and then decommission it as you please. Once you have a node, or a group of nodes, you can start running your own VMs on them.
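The per-second billing with a one-minute minimum can be sketched as a tiny helper (the logic here is just an illustration of the billing rule, not any official pricing tool):

```shell
# Billable seconds for a sole-tenant node: per-second billing,
# with anything under a minute rounded up to the one-minute minimum.
billable_seconds() {
  local used=$1
  if [ "$used" -lt 60 ]; then
    echo 60        # under a minute is billed as a full minute
  else
    echo "$used"   # otherwise, billed per second of actual use
  fi
}

billable_seconds 45    # prints 60
billable_seconds 3600  # prints 3600
```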

The one thing that I do like about this is that there have been some licensing issues around Google and some vendors. Some vendors still don’t support Google Cloud as a platform. But if you’re running on an actual machine, on physical hardware, does that license limitation still apply?

This has been an interesting topic of conversation. It might actually be the workaround for some of those workloads.

Dataflow’s new Streaming Engine

Cloud Dataflow is pretty unique to Google. I don’t think anyone else is really running a data processing engine based on the Apache Beam API other than Google. The Apache Beam API is open source, but I don’t see any other service providers adopting it today. So Google has been the champion in getting it out there and getting it architected for really massive workloads.

One of the most recent enhancements they are working on is the new Streaming Engine. Dataflow was originally developed to consolidate your batch and streaming workloads, and now Google is re-architecting it under the hood to make it a lot more efficient. The way it ran before, an internal scheduler would spin up VMs in the background. These VMs would have persistent disks connected to them and would have to sync certain data subsets and keep a certain understanding of the current state of the running job. If you were running a streaming job, you needed to know the state of your window, what data had been processed, etc., so you got that distributed computing framework going. That was definitely slow.

What they have done now is move that state storage out of the VMs into a back-end service which is pretty much invisible to your machines. Now your machines literally just have to spin up, they have access to that back-end service, they do all their processing, and they scale up and down based on that. So they have become a lot more ephemeral, and their autoscaling is a lot more reactive, which is great, and you can see it in the published benchmarks.

If you go to the Google Cloud Big Data blog, you can see some metrics and benchmarks they’ve run, and how well the service actually keeps up with incoming flows of data as it needs to process them.

The other nice thing is that you don’t need VMs that are as big anymore. You can run more, smaller VMs, which is just that much more agile for your workloads.

This is currently available as an experimental feature. You can enable it via an experimental pipeline parameter when you are deploying a job, and they also say that you don’t need to redeploy your pipelines when you’re applying service updates. I still have to play with this to understand exactly how that would work.
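For the Python SDK, that experimental parameter is passed at deploy time along with the usual Dataflow options. A sketch, where the pipeline script, project, and bucket names are hypothetical placeholders:

```shell
# Deploy a streaming Dataflow job with the Streaming Engine experiment enabled.
# my_pipeline.py, my-project, and the gs:// paths are placeholder values.
python my_pipeline.py \
    --runner DataflowRunner \
    --project my-project \
    --temp_location gs://my-bucket/tmp \
    --streaming \
    --experiments enable_streaming_engine
```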

The one gotcha, though, is that you do get billed for the amount of streaming data that you process. Before, billing was based on the number of VMs that you were running over time. Now, because there is a back-end service, Google needs to charge you something. So you have that added cost, but then again, you’re probably going to be using fewer, smaller VMs with much smaller disks. So I think cost-wise it will probably balance out.

It is a great architectural shift in the operating model of Dataflow, and it is nice to see Google moving it toward very agile and optimized workflows.

This was a summary of the Google Cloud Platform topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services) and Warner Chaves (Microsoft Azure), who discussed topics related to their areas of expertise.

Click here to hear the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.


Interested in working with John? Schedule a tech call.

About the Author

Senior Solutions Architect
A digital architect who designs solutions that span on-premises, cloud, and hybrid architectures. He enables businesses and organizations to benefit from solid foundations based on in-depth knowledge of the field, providing them with the scalability and efficiency to focus on the tasks they do best.
