Topics of discussion included:
- New T3 instances – burstable, cost-effective performance
- Aurora Serverless MySQL generally available
- New – provisioned throughput for Amazon Elastic File System (EFS)
- Amazon Lightsail update – more instance sizes and price reductions
- Amazon EKS supports GPU-enabled EC2 instances
- New Amazon EKS-optimized AMI and CloudFormation template for worker node provisioning
- Amazon ECS now supports Docker volumes and volume plugins
- Amazon VPC flow logs can now be delivered to S3
- Lambda@Edge now provides access to the request body for HTTP POST/PUT processing
New T3 instances – burstable, cost-effective performance
Amazon has launched new T3 instances with a bursting feature. If you don’t need to pay for a lot of compute all the time, you can focus on getting the amount of memory you want, and the T3 instances give you a CPU baseline, somewhere around 20, 30, or 40% of the maximum depending on the size. That baseline is what you pay for on a monthly basis, and if you need extra CPU, you can burst above it to handle the demand.
The billing works like this: if your average CPU usage stays below that baseline, your monthly cost covers all of your bursting. If you go above it, you start paying for the overage at about five cents per vCPU-hour.
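To make that concrete, here is a back-of-the-envelope sketch of the billing model described above. The $0.05 per vCPU-hour surplus rate comes from the text; the baseline percentage, vCPU count, and utilization figures are illustrative assumptions, not quotes from AWS pricing.

```python
# Illustrative sketch of T3 burst billing: usage below the baseline is covered
# by the instance price; sustained usage above it is billed per vCPU-hour.
SURPLUS_RATE_PER_VCPU_HOUR = 0.05  # approximate rate mentioned above

def surplus_charge(avg_cpu_utilization: float, baseline: float,
                   vcpus: int, hours: float) -> float:
    """Charge for average CPU usage above the baseline over a billing period."""
    # Overage is the fraction of each vCPU's time spent above the baseline.
    overage = max(0.0, avg_cpu_utilization - baseline)
    return overage * vcpus * hours * SURPLUS_RATE_PER_VCPU_HOUR

# Averaging 25% CPU on a hypothetical 2-vCPU instance with a 30% baseline:
# bursting is fully covered by the monthly price.
print(surplus_charge(0.25, 0.30, vcpus=2, hours=730))  # 0.0

# Averaging 40% CPU against the same 30% baseline for a 730-hour month
# incurs a small surplus charge.
print(surplus_charge(0.40, 0.30, vcpus=2, hours=730))
```

The takeaway is that occasional spikes are free as long as the *average* stays under the baseline; only a sustained average above it shows up on the bill.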
Beyond that, these are all hardware virtual machines (HVMs), and they’re all built on the new Nitro hypervisor. They use the newer AMIs, so you have to have the Elastic Network Adapter (ENA), and you need to deploy into a virtual private cloud. Other than that, they’re available today. It’s a really great solution for people who don’t need that CPU all the time and just need to handle quick bursts while working within other needs or constraints.
Aurora Serverless MySQL generally available
Ever since Lambda was released in AWS, people have been asking, “When am I going to get a serverless database? I don’t have to pay for my application to be running all the time, so why do I have to pay for my database to be available all the time?”
Amazon has now released Aurora Serverless MySQL into general availability. This is a pretty unique offering: the database capacity isn’t provisioned all the time, and you only pay for it to be up when you need it. The cluster will scale up and scale down on demand, with some caveats. It presents a unique solution for use cases that don’t require the database to be immediately available or that see only intermittent demand.
Think about a testing database where tests might run once a week, once a month, or even once a quarter, such as in the healthcare industry, where release and compliance cycles are much longer. Rather than paying for a supporting database that sits idle for weeks or months, you could put it in Aurora Serverless, and it will spin up when you need it. There are penalties, though. When the cluster is cold and you need to start it up, that first request can take up to 25 seconds, which is a pretty significant impact. But once it’s warm, that penalty goes away and you’re able to access it just like you would any other database.
The way it works is that you set a lower bound and an upper bound on compute units called “Aurora Capacity Units.” These are billed per second, with a five-minute minimum. The cluster will autoscale between that minimum and maximum to meet the performance level you want. You set the bounds, it works between them, and that’s what determines your performance and your cost. Scaling isn’t instantaneous: after about one and a half minutes of sustained usage it will scale up, and the scale-down threshold is five minutes, so you need somewhat sustained usage to trigger these scaling events.
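The per-second billing with a five-minute minimum can be sketched in a few lines. The ACU rate below is an assumed placeholder for illustration, not an AWS price quote; the point is how short bursts of activity get rounded up.

```python
# Sketch of Aurora Serverless billing as described above: capacity is billed
# per second of usage, with a five-minute minimum per burst of activity.
ACU_PRICE_PER_SECOND = 0.06 / 3600  # assume $0.06 per ACU-hour for illustration

def usage_cost(acus: int, active_seconds: int) -> float:
    """Cost of one burst of activity, applying the five-minute minimum."""
    billed_seconds = max(active_seconds, 300)  # 5-minute billing minimum
    return acus * billed_seconds * ACU_PRICE_PER_SECOND

# A 90-second test run at 2 ACUs is billed as a full five minutes.
print(round(usage_cost(2, 90), 4))    # 0.01
# A 30-minute run at 2 ACUs is billed for the full 1,800 seconds.
print(round(usage_cost(2, 1800), 4))  # 0.06
```

Even with the minimum applied, an occasional five-minute charge is a fraction of what an always-on instance costs for the same month, which is the whole appeal for intermittent workloads.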
If you have a small development team and a database backing something like an artifact server, this is a really neat solution. The same goes for a configuration management setup where some of your parameters are stored in a database and you only need to access them when you’re doing a run.
For me, 25 seconds is not too bad of a penalty when you’re not serving live traffic and you’re just doing something like building images. If you’re able to save money by doing it this way, it’s worth taking a look.
New – provisioned throughput for Amazon Elastic File System (EFS)
As we all know, the Elastic File System is for storing files. In the past, to go faster or to get more throughput, you had to store more data: moving from one terabyte to 10 terabytes took you from 50 MB/s of continuous throughput to 500 MB/s. But there are some workflows where that doesn’t work. You need the throughput, but you don’t have all of that data.
Amazon has announced that you can now specify and provision a level of throughput, up to one gigabyte per second, for each of your file systems. You set the provisioned level when you create the file system. You can increase it whenever you want, but there is a cooldown on dialing it back: 24 hours, according to the official blog post.
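The sizing math behind the decision is simple. Using the figures from above (roughly 50 MB/s of baseline throughput earned per terabyte stored), here is a sketch of working out how much throughput to provision; the data size and workload target are made-up examples.

```python
# Rough comparison of EFS's default earned baseline throughput (about 50 MB/s
# per TB stored, per the figures above) with a workload's actual needs.
BASELINE_MB_PER_SEC_PER_TB = 50

def baseline_throughput(stored_tb: float) -> float:
    """Default continuous throughput earned just by storing data."""
    return stored_tb * BASELINE_MB_PER_SEC_PER_TB

# With only 0.1 TB stored, the earned baseline is a modest 5 MB/s...
print(baseline_throughput(0.1))  # 5.0

# ...so a workload needing 100 MB/s has a large gap that provisioned
# throughput can now fill directly, without storing padding data.
needed_mb_per_sec = 100
shortfall = max(0.0, needed_mb_per_sec - baseline_throughput(0.1))
print(shortfall)                 # 95.0
```

Before this feature, the only way to close that gap was to store (and pay for) data you didn’t need; now you provision the 95 MB/s directly.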
You can also move between provisioned and bursting modes, so you could provision throughput for your application on the file system and then switch back and forth. I think this is a great addition. In the past, your choices were the General Purpose performance mode, which is really good for latency-sensitive applications, and Max I/O, where you accept a higher level of latency but can really push the I/O.
I have worked with clients in the past who have migrated from on-premises to the cloud. They’ve taken some contents from file storage, stuck it in EFS, and instantly said, “This server performs really, really poorly.” When we put some monitoring on there, we see that the server is performing fine from a compute standpoint, but it’s blocked on I/O: the CPU sits idle while waiting on the file system. When we increase the amount of space they provision, they get more I/O out of that file system. Boom, problem solved. It’s amazing.
So again, providing this option is really great. It’s just another way to simplify things and make that throughput easy to access without having to pay for a huge amount of capacity you don’t need.
Amazon Lightsail update – more instance sizes and price reductions
Lightsail, of course, is a server-in-a-box solution for a virtual private server. It competes in this space against several other providers, and is great for people who just want to hit a button and get a WordPress server, or hit a button and get a node development server so they can deploy their node application.
Amazon has just added two new instance types to provide more options. The VPS lineup now includes a size with 16 gigs of memory and four vCPUs, and another with 32 gigs of memory and eight vCPUs. They have also updated the pricing, and it is great for what you get. Previously, if you wanted eight gigs of memory on Linux, you were paying $80 a month; now you’re getting that at $40 a month. Same thing for eight gigs on Windows: you were paying $100, and now you’re getting it for $70. So this is great. It’s opening up access to high-memory instances.
You know, 32 gigs of memory for a Node server or a similar server is great, and I love seeing the prices dropping. I think these are really cool solutions for people who are doing pure development and want to release their applications, and they come with a path to consume other Amazon resources if and when they need to scale.
You might be able to go to a place like Linode and get a Linux server on the cheap. But if your use case suddenly changes and you ever want to migrate off of Linode, you’d have to rebuild your solution on Amazon. It’s really cool that Amazon gives you something to compete in that space right inside AWS. Now you’ve got a clear path forward to the other Amazon resources, which we know are dominant in the industry, without having to go through a migration headache.
So, the new instance types and new options are a clear win for consumers.
Amazon EKS supports GPU-enabled EC2 instances
EKS now supports running containers on GPU-enabled EC2 instances. This means that if you have workloads like machine learning, transcoding, or anything else that needs a GPU to crunch on, you can now do it in EKS. There is also a custom-built AMI that comes with the GPU drivers already baked in, which is very convenient.
So whether you want the flexibility of containers, need to move your workloads around, or want to scale in Kubernetes, I think containerization is a great option, and now there is a convenient path for people who want to do GPU processing in EKS on Amazon.
New Amazon EKS-optimized AMI and CloudFormation template for worker node provisioning
There is a new optimized AMI for deploying EKS worker nodes. The worker nodes are the EC2 instances that connect to your Kubernetes cluster and provide the compute on which your containers run, so this is an improvement to how you expand the cluster and add capacity. Previously, the user data required to configure the AMI was baked into the CloudFormation templates, so the two were very tightly coupled. The result was that if you wanted to use something like Terraform, or anything outside of CloudFormation, to manage this, it was more of a headache.
Amazon has decoupled those. You do have to use the new AMI and the new CloudFormation template together (you can’t mix in the old ones), but the change removes that scripting from the CloudFormation template, so you can now drive it from something like Terraform.
So if you use Terraform for everything else, you don’t have to maintain a special workflow just for EKS. It’s a nice quality-of-life improvement, and I’m sure feedback from the community led to it. If you haven’t jumped into EKS because this seemed like a headache, or you’re just looking for the best way to do it as you come in brand new, know that there is a new preferred way of doing it that’s worth a quick search to make sure you’re on board.
Amazon ECS now supports Docker volumes and volume plugins
ECS now supports Docker volumes and volume plugins. Before, you had to use a custom script to go in and configure them; now you can do it all much more simply using the native tools provided by AWS. So if you were using ECS with a custom solution for this, or if you were weighing ECS versus EKS and didn’t want to go down the ECS road because you needed custom volumes and didn’t want to manage them yourself, it’s just a little bit easier now. I think this is a pretty key feature for people who don’t want, or don’t need, the weight of Kubernetes. ECS remains a viable option in that space.
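For a sense of what the native configuration looks like, here is a sketch of the task-definition snippet ECS now accepts. The field names follow the ECS task definition schema (`dockerVolumeConfiguration`); the volume name, driver choice, and mount path are placeholder assumptions for illustration.

```python
# Sketch of an ECS task-definition fragment declaring a Docker volume,
# expressed as the Python dict you might feed to an ECS client or template.
import json

volume = {
    "name": "app-data",  # hypothetical volume name
    "dockerVolumeConfiguration": {
        "scope": "shared",      # "shared" volumes outlive a single task
        "autoprovision": True,  # create the volume if it doesn't exist yet
        "driver": "local",      # or the name of a third-party volume plugin
    },
}

# A container in the same task definition then mounts it by name:
mount = {"sourceVolume": "app-data", "containerPath": "/var/lib/app"}

print(json.dumps(volume, indent=2))
```

The custom-script approach this replaces amounted to doing the `docker volume create` yourself on each container instance; declaring it in the task definition lets ECS handle that lifecycle.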
Amazon VPC flow logs can now be delivered to S3
VPC Flow Logs can now be delivered to S3, where all of the convenience, flexibility, and elasticity of S3 can be applied to those logs. If you’ve always wanted this, or you use Flow Logs today, it’s just a great option to have now. It’s as simple as that: a nice little addition.
Lambda@Edge now provides access to the request body for HTTP POST/PUT processing
Lambda@Edge is a neat CDN-style feature that Amazon offers to replicate your Lambda functions across the globe, the idea being that you minimize the latency between when a request is made and when a response is returned by putting the compute that executes those functions geographically closer to your customers. It creates a better experience for customers across the world.
What Amazon has done is expose the request body to Lambda@Edge. Previously, it wasn’t available. Now, at no additional cost, you’re given access to more of the HTTP request. This means you can do more with Lambda@Edge than you could before. Anyone doing an HTTP POST that delivers a payload can now respond to it and handle it right at the edge, delivering low-latency solutions without having to reach into deeper infrastructure to handle it.
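As a minimal sketch of what that looks like, here is a handler reading the newly exposed body from the CloudFront event. The event shape (body delivered base64-encoded under `request["body"]["data"]`) follows the Lambda@Edge event structure; the echo response and the simulated event below are illustrative assumptions.

```python
# Minimal sketch of a Lambda@Edge handler that reads an HTTP POST body and
# answers directly from the edge, without forwarding to the origin.
import base64
import json

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    body = request.get("body", {})
    # The body arrives base64-encoded in the "data" field.
    payload = base64.b64decode(body.get("data", "")).decode("utf-8")
    # Returning a response object here short-circuits the request at the edge.
    return {
        "status": "200",
        "statusDescription": "OK",
        "body": json.dumps({"received": payload}),
    }

# Simulate a CloudFront viewer-request event carrying a POST body.
event = {"Records": [{"cf": {"request": {
    "method": "POST",
    "body": {"encoding": "base64",
             "data": base64.b64encode(b"hello").decode("ascii")},
}}}]}
print(handler(event, None)["body"])  # {"received": "hello"}
```

Note that before this change, a function like this simply had no access to the payload at all, so any POST processing had to happen at the origin.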
So don’t miss it if you use Lambda@Edge. You can take a lot of the processing that happens deep in your infrastructure, move it to the edge, and keep focusing on delivering value and quick responses.
This was a summary of the AWS topics we discussed during the podcast. Listen to the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.
Interested in working with Greg? Schedule a tech call.