News and updates from Amazon Web Services

I recently joined Chris Presley for his podcast, Cloudscape, to talk about what’s happening in the world of cloud-related matters. My focus was to share the most recent events surrounding Amazon Web Services (AWS). Topics of discussion included:

- Amazon Elastic Container Service for Kubernetes
- AWS Lambda adds Amazon Simple Queue Service to supported event sources
- Amazon Linux WorkSpaces
- Amazon EC2 update – additional instance types, Nitro System and CPU options
- Redis 4.0 compatibility in Amazon ElastiCache
- Amazon SageMaker automatic model tuning: using machine learning for machine learning

Amazon Elastic Container Service for Kubernetes

Amazon EKS is finally generally available. People have been banging on the doors and crying out for this, and it’s finally here. For people who want to get out of ECS and use the full power of Kubernetes, but hosted by Amazon, Amazon now has it. We had been waiting to see when it would be released, and it took Amazon a long time to get this off the ground. The good news is that it comes with all the goodness you’d expect: multi-AZ support, integration with IAM, load balancer support, and so on. But it’s only available in US East and US West; if you live in the rest of the world, sorry. It would have been great if they had said it will be available in all regions by a certain date, but they didn’t do that. It’s a little sad, but we’re getting to the point where you can run Kubernetes both on-premises and hosted, and switch workloads between the two.

They are charging 20 cents an hour for the EKS control plane, and you pay for any EC2, EBS or load-balancing resources that you use as a result of your Kubernetes cluster, so costs should be fairly predictable. The API server and etcd components of Kubernetes run in three separate AZs to provide HA. I don’t think you can select your availability zones; I think the magic just happens behind the scenes.

Amazon tends to offer things that are pretty mature and ready to go. I think they have the first-mover advantage on a lot of things, but not necessarily Kubernetes, so that maturity is just sort of baked in. I know experiences with other vendors sometimes aren’t as smooth because they want to get a product out and get people using it. That’s okay; a lot of people don’t mind exploring and finding all the cracks and the workarounds. But this seems to be a good solution. The feedback has been positive, and people are excited to get away from ECS and move to a more standardized container orchestration space.

The time is quickly arriving when it will be very smooth to run production-level Kubernetes. You can outsource the headache of running Kubernetes to someone who has the experience and can do it for you with ease, and then focus on delivering value on top of those platforms. That’s the dream and that’s where we’re headed, so I’m excited about this. I think Kubernetes has a bright future.

AWS Lambda adds Amazon Simple Queue Service to supported event sources

SQS can now be used as a trigger for AWS Lambda, which I think is great. Before, you had to have a scheduled event that would fire the Lambda on a timer, and the Lambda would then go out and read the queue itself. Now it’s all integrated nicely: you send messages to the queue, Lambda picks them up, and it scales seamlessly, which is really handy.
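To make the difference concrete, here is a minimal sketch of the new integration in Python. The handler consumes the batch of messages the SQS trigger delivers, and a one-time boto3 call wires the queue to the function. The queue ARN and function name are hypothetical placeholders, and the function’s execution role is assumed to already have the SQS permissions noted in the comments.

```python
import json

import boto3


def handler(event, context):
    """Entry point Lambda invokes for each batch of SQS messages."""
    # The SQS trigger delivers up to BatchSize messages in event["Records"];
    # there is no longer any need to poll the queue from inside the function.
    for record in event["Records"]:
        payload = json.loads(record["body"])  # assumes message bodies are JSON
        print(f"Processing message {record['messageId']}: {payload}")


def connect_queue_to_function():
    """One-time setup that wires the queue to the function.

    The queue ARN and function name are hypothetical; the function's
    execution role needs sqs:ReceiveMessage, sqs:DeleteMessage and
    sqs:GetQueueAttributes on the queue.
    """
    boto3.client("lambda").create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
        FunctionName="my-function",
        BatchSize=10,  # Lambda reads up to 10 messages per invocation
    )
```

On a successful invocation, Lambda deletes the batch from the queue for you; on failure, the messages become visible again and are retried, so handlers should be written to be idempotent.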
Amazon Linux WorkSpaces

This is a cool addition. For some organizations, virtual desktops are a huge value-add for a distributed team: they don’t have to manage a fleet of machines and all of the associated security auditing. For the longest time, WorkSpaces has been Windows-only, and for a lot of development teams that just doesn’t work. Now Amazon Linux 2 can be used in WorkSpaces, and you can save some money on licensing costs because Linux is free. They also package Amazon Linux 2 as Docker, VMware, Hyper-V, KVM and VirtualBox images, so you can use it in other areas as well, and your desktops will not be mismatched if you are building developer workstations or development VMs. So, if you are a Linux user, you can now use a virtual desktop in an Amazon WorkSpace and take your work with you wherever you go.

I would assume the environment is pretty flexible and you can bring in your own automation; it would be fun to play with and to test the limits of the WorkSpace. I know that a lot of developers really like the idea of an Amazon WorkSpace because it’s pretty cost-effective, as long as you’re not keeping it up 24/7; the cost is around $10 per month. If you are a digital nomad and you’re traveling, you don’t want to have to worry about having a powerful laptop on which to do your builds. Put the work in the cloud, let that handle the compute power, and then you essentially just need a client to get to that WorkSpace. If something happens to your laptop, it doesn’t matter, because all your work is in the cloud.

I think WorkSpaces is attractive to larger organizations because of some of the controls it provides: the automation you can bake in, the integration with directory services, and the ability to manage everything from a central location. We all know there have been competing products from other vendors that have been very popular. Amazon’s is also pretty popular, and they have WorkDocs for working with documents; this is just the user side of the same ecosystem, using the same technology behind the scenes. It’s a great option for people who want it.

Amazon EC2 update – additional instance types, Nitro System and CPU options

I’ve touched on these before, but some of the new instance types that are built on the Nitro System and have local NVMe storage are now rolling out. If you’re looking at adding any new instances, look at the C5d for compute-intensive workloads, the M5d for general-purpose workloads with that local NVMe storage, and the bare metal i3.metal instances. They’re available now in multiple regions, which is going to be great for those workloads that require bare metal. Just keep an eye out for them, and if you have older instances running, take a look at upgrading where you can, because there are a lot of great benefits.

Redis 4.0 compatibility in Amazon ElastiCache

Redis 4.0 compatibility is now available in Amazon ElastiCache. It’s running 4.0.10 and brings with it a couple of great features. The one I’m most interested in is the least frequently used (LFU) cache eviction policy, which adds to the eviction policies available for automatically evicting data out of Redis. There are also some operations that can now be performed asynchronously, active memory defragmentation that can be run for workloads that can use it, and additional MEMORY commands for pulling different metrics and really getting at the stats and health of the Redis instance. It’s great for people who have been looking for these features and haven’t been able to use ElastiCache because they needed them. Now you have them, so you can get back on ElastiCache, or start using it if that’s your jam.
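If you want to try the new LFU policy on an ElastiCache Redis 4.0 cluster, it is exposed through the familiar maxmemory-policy parameter. Here is a minimal boto3 sketch, assuming a custom parameter group (the name my-redis4-params is hypothetical) that is already attached to your cluster:

```python
import boto3

elasticache = boto3.client("elasticache")

# Switch eviction to allkeys-lfu, one of the new Redis 4.0 policies
# (volatile-lfu is the other). The parameter group name is hypothetical.
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="my-redis4-params",
    ParameterNameValues=[
        {
            "ParameterName": "maxmemory-policy",
            "ParameterValue": "allkeys-lfu",
        }
    ],
)
```

With allkeys-lfu, Redis evicts the least frequently used keys first rather than the least recently used ones, which can keep a steadily popular working set in cache even when a burst of one-off reads passes through.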
Amazon SageMaker automatic model tuning: using machine learning for machine learning

Amazon SageMaker is a tool for building and training models in the machine learning space. I am going to go out on a limb here: I am not a machine learning expert, but SageMaker seems pretty neat and I thought people should know about this. There are different kinds of parameters: regular parameters and hyperparameters. When I saw hyperparameters, I said, “Wow, that’s a great name. What the heck is a hyperparameter?” So I dug a little deeper. A regular parameter is part of the model itself; it is something that can be inferred from data as you train the model. These are numbers that get adjusted, and what it boils down to is that by running and training the model, you can actually find the appropriate values for them. However, there are also parameters that live outside of the model, such as the appropriate number of times to run the training to reach the optimal outcome; those cannot be inferred from the data. These are referred to as hyperparameters, and their values have traditionally been chosen based on anecdotes, past experience and gut feeling. It is a very manual process: people have to go out and find these values, find models that are similar to theirs, and borrow values from them to be able to train their own models in the best way.

SageMaker can now do this automatically. It will go through and find the right values for the hyperparameters by basically building a meta-model that searches for the optimal values. This seems to me like it’s going to be a super-convenient thing for people who are spending all this time tuning hyperparameters, trying to find things that match. I think it’s awesome.
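To make the idea concrete, here is a rough boto3 sketch of launching an automatic model tuning job. Every name, ARN, S3 path and metric below is a hypothetical placeholder rather than anything from the podcast; the point is the shape of the request: you declare ranges for the hyperparameters you would otherwise guess at, pick an objective metric, set a budget of training jobs, and SageMaker’s meta-model decides which combinations to try.

```python
import boto3

sm = boto3.client("sagemaker")

# All names, ARNs and S3 paths below are hypothetical placeholders.
sm.create_hyper_parameter_tuning_job(
    HyperParameterTuningJobName="example-tuning-job",
    HyperParameterTuningJobConfig={
        # Bayesian search is the "meta model": it uses the results of
        # completed training jobs to choose the next values to try.
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Minimize",
            "MetricName": "validation:rmse",
        },
        # The budget: at most 20 training jobs, 2 running at a time.
        "ResourceLimits": {
            "MaxNumberOfTrainingJobs": 20,
            "MaxParallelTrainingJobs": 2,
        },
        # The hyperparameters to search, and the ranges to search within.
        "ParameterRanges": {
            "ContinuousParameterRanges": [
                {"Name": "eta", "MinValue": "0.01", "MaxValue": "0.3"},
            ],
            "IntegerParameterRanges": [
                {"Name": "max_depth", "MinValue": "3", "MaxValue": "10"},
            ],
        },
    },
    # An ordinary training job definition; the tuner launches copies of it
    # with different hyperparameter values filled in.
    TrainingJobDefinition={
        "AlgorithmSpecification": {
            "TrainingImage": "811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
            "TrainingInputMode": "File",
        },
        "RoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
        "InputDataConfig": [
            {
                "ChannelName": "train",
                "DataSource": {
                    "S3DataSource": {
                        "S3DataType": "S3Prefix",
                        "S3Uri": "s3://example-bucket/train/",
                    }
                },
            }
        ],
        "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
        "ResourceConfig": {
            "InstanceType": "ml.m4.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
        # Hyperparameters you are NOT tuning stay fixed here.
        "StaticHyperParameters": {"objective": "reg:linear"},
    },
)
```

When the job finishes, describe_hyper_parameter_tuning_job reports the best training job it found along with the hyperparameter values that produced it.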
This was a summary of the AWS topics we discussed during the podcast. Chris also welcomed John Laham (Google Cloud Platform) and Warner Chaves (Microsoft Azure), who discussed topics related to their areas of expertise. Click here to hear the full conversation, and be sure to subscribe to the podcast to be notified when a new episode has been released.