I recently joined Chris Presley for his podcast, Cloudscape, to talk about what’s happening in the cloud world. I shared the most recent events surrounding Microsoft Azure.
Topics of discussion included:
Ethereum proof-of-authority on Azure
Azure cost forecast API launch
SQL Data Warehouse updates
– Accelerated and flexible restore points
– Intelligent performance insights
Security Center Adaptive Application Controls in GA
Azure Management Groups now in GA
Azure Migrate enhancements:
– Support for reserved instances
– VM series policy
– VM uptime
– Windows 2008 support
Ethereum proof-of-authority on Azure
Ethereum is one of the biggest public blockchain networks for smart contract execution. It’s also usable for private or consortium-style use cases. A consortium being, for example, eight companies that want to run their own blockchain, with each company a member of the chain network.
Azure has different templates that you can use to get going pretty quickly with deploying these blockchain networks. The Ethereum template has used what is known as a “proof-of-work” consensus mechanism. Because blockchains are distributed networks, they need consensus algorithms to decide which version of the truth is the real one.
The initial release of the template used the default Ethereum configuration, which uses proof of work to reach consensus. Proof of work is a very computationally intensive activity, and that activity is what is called mining. It didn’t make sense to use proof of work in a private or consortium-style blockchain, because in those scenarios the network conditions are not adversarial and the validating parties are usually well known. For example, imagine a distributed ledger shared between different regional corporate offices around the world.
In that case, it makes no sense to use proof of work as consensus and have all of the regions doing really computationally intensive calculations in order to arrive at consensus. Some unknown bad actor is not going to suddenly appear inside the network. In the public blockchain, Ethereum has monetary value, so there is an incentive for bad actors; in a private setting, it is enough to stake your reputation, using the proof-of-authority model.
Now the Azure Ethereum template has the option to use exactly this type of model. You say, “These accounts are allowed to validate transactions,” and that’s all you need. What happens then is that it’s less computationally intensive, so you need smaller VMs to be transaction validators. At the same time, it increases the throughput of your blockchain because you’re not solving all those computational puzzles that we call mining. Note that you haven’t made your chain less secure, as long as you know who the validators are (this piece is not feasible in a public network that anyone can join).
This is going to just make it easier for people to deploy production blockchains using Ethereum in private or partnership-style scenarios. Then, for example, if they are all a part of the same supply chain and want to run a distributed ledger, then each one will have a validator and won’t waste resources to do proof of work.
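As an illustration of how a proof-of-authority validator set is declared, here is a sketch of a genesis file for Geth’s “clique” engine. The Azure template wires this kind of configuration up for you, and its exact setup may differ; the chain ID, block period and gas values below are placeholder choices:

```json
{
  "config": {
    "chainId": 10101,
    "clique": { "period": 5, "epoch": 30000 }
  },
  "difficulty": "0x1",
  "gasLimit": "0x7A1200",
  "extradata": "0x<64 zeros><signer addresses, 40 hex chars each><130 zeros>",
  "alloc": {}
}
```

Only the accounts listed in `extradata` may seal blocks, which is the “these accounts are allowed to validate transactions” part in configuration form.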
Azure cost forecast API launch
This is what I call a quality-of-life improvement, the kind we see month in and month out: small bits that come in to round out a story or make life easier for somebody. This is exactly that.
Azure already has a cost API. You can get your own cost numbers through their API endpoint, build your own reports, and consume them in your own applications. The new change they have published is a forecasting feature inside that API. You don’t have to run your own forecasts; you can just use the same API. You can pass parameters and say, “I want to see a daily forecast or a monthly forecast.” The service will give you an upper and a lower boundary for the forecast at a 95% confidence level. I believe it’s all calculated through statistical analysis of your subscription’s usage.
While this is not revolutionary, it is pretty handy. If you want to build some reporting or if you are a developer and you’re trying to build some sort of custom cloud cost solution, they just made your life a little bit easier by adding the forecasting capabilities straight into the API.
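Microsoft doesn’t publish the exact model behind the forecast, but as a back-of-the-envelope illustration of the kind of output the API returns, here is a 95% confidence band projected from daily cost history (the daily figures and the normal approximation are my own assumptions, not the service’s algorithm):

```python
import statistics

def monthly_forecast(daily_costs, days_ahead=30, z=1.96):
    """Project spend over the next `days_ahead` days with a ~95% band,
    using a normal approximation over historical daily costs."""
    mean = statistics.mean(daily_costs)
    stdev = statistics.stdev(daily_costs)
    point = mean * days_ahead
    # Variance of a sum of independent days grows linearly with days,
    # so the standard deviation of the total grows with sqrt(days).
    margin = z * stdev * days_ahead ** 0.5
    return point - margin, point, point + margin

# Hypothetical daily costs for the last two weeks, in dollars.
history = [102, 98, 110, 95, 105, 99, 101, 97, 108, 103, 100, 96, 104, 102]
low, point, high = monthly_forecast(history)
print(f"30-day forecast: ${point:.0f} (95% band ${low:.0f} to ${high:.0f})")
```

The real API hands you the lower and upper boundary directly, so you never have to maintain this kind of math yourself.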
SQL Data Warehouse updates
Accelerated and flexible restore points
There have been a couple of updates for SQL Data Warehouse.
First is accelerated and flexible restore points, which add more options to what you can restore. Before, you could only pick restore points from snapshots taken once every 24 hours. Now you can select restore points from snapshots taken once every eight hours.
Potentially, you now have triple the number of restore points you had before. But the really cool bit is that your restore time is 20 minutes or less regardless of data warehouse size. Whether it’s 10 terabytes or 10 petabytes, it always takes 20 minutes or less. The restore is not a size-of-data operation; it is a flat amount of time, always below that 20-minute threshold, so you don’t have to worry about how long it is going to take.
This opens up scenarios beyond regular recovery and RTO, for example using data warehouse restores for normal development and testing, even as part of an automated CI/CD pipeline.
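The arithmetic behind “triple the restore points” is straightforward. Assuming a seven-day retention window (a hypothetical figure for illustration; check your service tier for the actual retention):

```python
RETENTION_DAYS = 7  # assumed retention window, for illustration only

def restore_points(snapshot_interval_hours, retention_days=RETENTION_DAYS):
    """Number of restore points available in the retention window."""
    return retention_days * 24 // snapshot_interval_hours

print(restore_points(24))  # old cadence: one snapshot per day
print(restore_points(8))   # new cadence: one snapshot every 8 hours
```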
Intelligent performance insights
The other SQL Data Warehouse update is the introduction of intelligent performance insights into the data warehouse experience in the portal. This new feature analyzes stats and conditions inside your data warehouse and suggests improvements. Right now it only covers the basics, but it is a starting point: it alerts you if you have skewed distributions in your data or if your statistics might need updating.
At least they have started down that path, and maybe we’ll see bigger or better recommendations in the future as they build out that engine further.
Security Center Adaptive Application Controls in general availability
The Security Center feature is called Adaptive Application Controls. You give Security Center access to analyze your VMs, and it builds an inventory of the applications running inside them. You can then set it into either audit or enforce mode.
In audit mode, if someone installs something that is not on the list of allowed applications, it gets flagged. If you feel really comfortable with the tool, you can set it to enforce mode. This means that when Security Center detects that a VM is about to execute something that is not whitelisted, it blocks the execution.
For a production environment that has really tight security requirements and must stay under compliance at all times, I can definitely see how this could get widespread adoption.
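Adaptive Application Controls itself runs inside Azure, but the audit-versus-enforce distinction can be sketched as pure logic. This is an illustration of the concept, not Security Center’s implementation:

```python
def check_execution(executable, allowlist, mode="audit"):
    """Decide whether an execution proceeds, and whether an alert fires.

    mode="audit":   always allow, but flag anything off the list.
    mode="enforce": block anything off the list.
    """
    if executable in allowlist:
        return True, None
    alert = f"{executable} is not on the allowed-applications list"
    if mode == "audit":
        return True, alert   # flagged, but execution proceeds
    return False, alert      # enforce mode: execution blocked

allowed = {"sqlservr.exe", "w3wp.exe"}
print(check_execution("miner.exe", allowed, mode="audit"))    # allowed, with alert
print(check_execution("miner.exe", allowed, mode="enforce"))  # blocked, with alert
```

Audit mode is the safe starting point: you collect alerts until you trust the inventory, then flip to enforce.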
Azure Management Groups now in general availability
Azure Management Groups is a new feature that makes it easier for really big Azure users and really big clients to manage their whole Azure tenant. With management groups, you can organize different subscriptions into groups and then push policies and reports onto the subscriptions inside those groups.
For example, you could have a subscription for development and testing of your main revenue-generating application, and then, for security and segregation of duties, a totally separate subscription for production. That way you can track your development costs separately from your production costs, and for security, the admin of your dev subscription wouldn’t be the admin of the production one.
Or maybe you decide to organize your resources based on whether they are part of the same product application. They share an overall budget, maybe they share the people who are allowed to work on them, or maybe they share the regions that they are allowed to be deployed on. You could put them inside one management group and then set those policies at the level of the management group and they would trickle down to the individual subscriptions.
It’s a feature to make it a lot easier to manage Azure at scale. For individuals just playing around at home with a single subscription, this is not going to be relevant; this is for enterprise-level adoption.
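The trickle-down behavior can be sketched with a tiny tree. This is only an illustration of the inheritance idea; in Azure the assignments happen through Azure Policy, and subscriptions sit under management groups rather than being groups themselves:

```python
class ManagementGroup:
    """A node in the hierarchy; subscriptions are modeled as leaf nodes."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.policies = set()

    def assign(self, policy):
        self.policies.add(policy)

    def effective_policies(self):
        """Policies set here plus everything inherited from ancestors."""
        inherited = self.parent.effective_policies() if self.parent else set()
        return inherited | self.policies

root = ManagementGroup("contoso-root")
prod = ManagementGroup("production-subscription", parent=root)
root.assign("allowed-regions: westeurope")   # set once at the group level
prod.assign("require-tag: cost-center")      # set on one subscription

print(prod.effective_policies())  # inherits the group policy plus its own
```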
Azure Migrate enhancements
Azure Migrate is the service that lets you easily migrate your on-premises virtual machine estate to Azure. (The Microsoft team is working on an agent for physical machines, but it’s not here yet.) The service analyzes all your VMs on premises and then gives you suggestions and estimates of what a move to Azure would look like.
The service is actually very neat. You download a VM appliance from the Azure website and run it on your ESX server. It talks to ESX and collects the configuration and performance metrics of all the machines running on that hypervisor. Then it provides an estimate for moving into Azure, and if you do want to move, it also walks you through installing the Site Recovery agent.
Support for reserved instances
This month they have added support for reserved instances. You know your workload best: your compliance, your requirements and all of these things, right? So you can customize some of the ways the tool generates estimates, to make them more accurate to what your costs will be in the end.
For example, support for reserved instances means I can say, “Well, I know these VMs run 24/7 and this workload is not going anywhere, so in the estimate, give me the prices if I reserve them for three years” instead of the regular pay-as-you-go pricing.
VM series policy & VM uptime
There is a family of general-purpose VMs, a family of GPU VMs, and a family of burstable VMs. The tool also allows you to say, “Well, I know this ESX hypervisor is jam-packed with developer instances, so give me estimates if all these developer instances turn into burstable VMs.” Those are a lot cheaper, so you can tailor the estimate based on the specific family of VM you want to leverage.
So again, the whole idea is that you can tweak the migration estimate based on your own knowledge of your on-premises estate, to get an accurate picture of the cost and effort of migrating.
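The tailoring described above amounts to swapping pricing assumptions in the estimate. Here is a sketch with made-up VM sizes and hourly rates; real Azure prices vary by region and change over time, so none of these numbers should be taken as actual pricing:

```python
# Hypothetical hourly rates in dollars; real Azure prices vary by region.
RATES = {
    ("D2s_v3", "pay-as-you-go"): 0.096,
    ("D2s_v3", "3yr-reserved"):  0.040,
    ("B2s",    "pay-as-you-go"): 0.042,
}

def monthly_estimate(vms, hours=730):
    """Sum the monthly cost of (size, pricing) pairs for one estimate run."""
    return sum(RATES[(size, pricing)] * hours for size, pricing in vms)

always_on = [("D2s_v3", "3yr-reserved")] * 3   # 24/7 servers, reserved rate
dev_boxes = [("B2s", "pay-as-you-go")] * 5     # developer VMs, burstable family
print(f"${monthly_estimate(always_on + dev_boxes):.2f}/month")
```

Changing the pricing model or VM family for a group of machines is exactly the kind of knob Azure Migrate now exposes when it builds the estimate for you.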
Windows 2008 support
The last enhancement is Windows 2008 support, so you can transparently use the tool to migrate Windows 2008, which at this point is 10 years old. They even went the extra mile and can migrate 32-bit Windows 2008 from on-premises into Azure. I know somebody out there is still running 32-bit Windows 2008 as a production server. Now is your time!
This was Part 2 of the Microsoft Azure topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services expert), who discussed topics related to his expertise.
Listen to the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.