Topics of discussion included:
- Azure Virtual WAN in preview
- Azure Firewall in preview
- Azure DNS SLA updated to 100 percent
- Azure File Sync GA
Azure Virtual WAN in preview
I think one trend we are seeing is cloud providers coming up with more ways to monetize the connections and infrastructure they have built. Here, Azure is offering a service that lets you use Azure's backbone as a WAN routing service.
Let's say there are two offices connected through a WAN that goes over the internet. Azure Virtual WAN not only gives you higher speeds, it also gives you a lot more security on your link, because you are not connecting through many routers; you connect to Microsoft, travel through the Microsoft Azure backbone, and then come out at your other regional office. This completes your wide area network.
If you think about it, it makes a lot of sense. They probably figured, "We have all of this unused capacity and we are building all of these crazy, super-high-bandwidth connections between all of these regions – why don't we offer some of it for these other use cases?"
The other thing to think about is that, in our brave new regulatory-compliance world, it may be better to have your wide area network connections go through parties like Microsoft, which have certifications and security policies that are more likely to be well received by compliance auditors and regulators.
Azure Firewall in preview
The announcement here is that Azure is going to have a brand-new service, built as a next generation firewall. That means a couple of things:
First, unlike the previous firewall, where you had to reserve capacity in case of high usage or traffic coming into your website, this new firewall is elastically scalable. You won't need to reserve anything; the service will scale up or down depending on how much you are using. So that's one part of the modernization of the Azure Firewall.
Second, they are going to make it a lot easier to handle and manage all these different firewall rules.
If you are a client with a lot of different virtual networks, it is hard to keep track of all the different rules you allow on every single VNet, and all the different things you shouldn't allow. If you want to make global policy changes, it's a nightmare because you basically have to script it out on your own. They're changing this so that you will be able to set up rules that apply to all the firewalls in a subscription, or even across multiple subscriptions. That's really neat.
They're also going to augment what you can filter through the firewall, so it's not just going to be IPs and ports. You're going to have support for filtering on fully qualified domain names and on applications. Also, the firewall is going to have what they call source network address translation (SNAT).
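To make the idea of richer filtering concrete, here is a minimal sketch of how a firewall might evaluate rules that match on source IP, destination port, or fully qualified domain name. This is purely illustrative – the rule shape, field names, and first-match-wins semantics are my assumptions, not Azure Firewall's actual API or evaluation order.

```python
# Hypothetical rule-evaluation sketch (not Azure's implementation):
# a rule only needs to specify the conditions it cares about.

def matches(rule, packet):
    """Return True if every condition present in the rule matches the packet."""
    for key in ("src_ip", "dst_port", "fqdn"):
        if key in rule and rule[key] != packet.get(key):
            return False
    return True

def evaluate(rules, packet, default="deny"):
    """First matching rule wins; otherwise fall back to the default action."""
    for rule in rules:
        if matches(rule, packet):
            return rule["action"]
    return default

rules = [
    {"fqdn": "updates.example.com", "action": "allow"},          # FQDN-based rule
    {"src_ip": "10.0.0.5", "dst_port": 443, "action": "allow"},  # IP + port rule
    {"dst_port": 23, "action": "deny"},                          # block telnet
]

print(evaluate(rules, {"fqdn": "updates.example.com"}))          # allow
print(evaluate(rules, {"src_ip": "10.0.0.9", "dst_port": 8080})) # deny (default)
```

A centrally managed policy would be a single `rules` list like this, applied to every firewall in the subscription instead of being scripted per VNet.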
All the resources behind the firewall will be exposed through just one IP outside the firewall. For other resources it becomes easy to say, "Okay, I will allow everything from this virtual network," because the front-end firewall will always expose the same IP and will just expose different ports for the different devices behind it.
It will make it a lot easier for people who want to communicate with that virtual network to say, "Allow everything from just the IP of the front-facing firewall."
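The single-IP, many-ports idea above can be sketched as a simple port-forwarding table. The addresses and port assignments here are made up for illustration (the public IP uses the RFC 5737 documentation range); this is a conceptual model, not how Azure Firewall implements translation.

```python
# Illustrative sketch: one public IP in front, with distinct external
# ports forwarded to different internal resources behind the firewall.

PUBLIC_IP = "203.0.113.10"  # example address from the documentation range

# external port -> (internal IP, internal port)
port_map = {
    443:  ("10.1.0.4", 443),  # web server
    2222: ("10.1.0.5", 22),   # jump box SSH
}

def route(external_port):
    """Translate an inbound connection on the public IP to an internal target."""
    target = port_map.get(external_port)
    if target is None:
        raise ValueError(f"no rule for port {external_port} on {PUBLIC_IP}")
    return target

print(route(443))   # ('10.1.0.4', 443)
print(route(2222))  # ('10.1.0.5', 22)
```

From the outside, a peer only ever needs to whitelist `203.0.113.10`; the port decides which internal device the traffic reaches.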
Azure DNS SLA updated to 100 percent
This is an interesting one. It's not a huge change, but it is interesting to see how things change in the cloud and how competitive forces push providers to do these things.
The Azure DNS service is being improved, from a four-nines SLA with a 25% service credit to a 100% SLA with a 10% service credit. You know, people say all the time that you can't guarantee 100%. True, you can't absolutely guarantee it, but nothing stops you from backing it with a service credit, right? That is basically what the big change is.
They are saying that if they don't hit 100%, you're going to get a 10% service credit. There are always a few caveats to what Microsoft considers an outage, however. It's not like, "They didn't answer one DNS request from me, so I get 10% off my bill right now."
The condition is that if you make valid DNS requests and don't receive responses within two seconds, consecutively, for a full minute, that is considered an outage. You then get a credit based on how many minutes the outage lasted.
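That outage rule is easy to express in code. The sketch below assumes, purely for illustration, one latency sample per second; this is my reading of the rule as described in the episode, not Microsoft's actual measurement methodology or credit formula.

```python
# Hedged sketch of the outage rule as described: a full consecutive minute
# of DNS responses slower than two seconds counts as one outage minute.

def outage_minutes(latencies, threshold=2.0, window=60):
    """Count whole minutes where every per-second latency exceeds the threshold."""
    minutes = 0
    run = 0
    for latency in latencies:
        if latency > threshold:
            run += 1
            if run == window:   # a full consecutive minute has elapsed
                minutes += 1
                run = 0         # start counting the next minute
        else:
            run = 0             # a timely response breaks the run
    return minutes

# 120 seconds of slow responses -> 2 outage minutes; the credit applies.
samples = [5.0] * 120 + [0.02] * 30
bad = outage_minutes(samples)
credit = 0.10 if bad > 0 else 0.0   # 10% service credit when the SLA is missed
print(bad, credit)  # 2 0.1
```

Note how a single fast response resets the counter: 59 slow seconds followed by one timely answer would not count as an outage under this reading.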
Obviously, DNS is not only a very robust technology, but it is also backed by a lot of name servers and a lot of regional availability. DNS is one of those few services that is built like a global mesh. That is probably why they feel confident in just upping the SLA to 100%.
Azure File Sync GA
File Sync is an interesting feature that Microsoft has. I haven't seen a lot of people use it, because I think people aren't aware it exists. It is probably very easy to adopt for most shops – even companies that don't run production systems on Windows Server. Most companies run their users on Windows workstations and laptops, right?
I think we can all agree that the vast majority of corporations and enterprises end up with a server somewhere that is the company's file share server. We all know these file shares become a nightmare as they just grow and grow, and people use them as – well, I don't want to call them trash buckets, but they are definitely buckets, and everything gets thrown in there. Suddenly companies are facing all these storage costs, and they get into this problem where the file shares become mission critical even though there is a mix of everything in them.
File Sync is an agent that you pair with a service in Azure; it does caching and tiering of those file shares to the cloud, while they remain exposed through a Windows Server file share. The share can be exposed through an SMB interface, an NFS interface, or an FTP interface.
Files that don't get a lot of access get moved off local storage and go to the cloud. Files that are hot stay on local storage, so access is really fast. If, for example, your local file server crashes, you can simply install the File Sync agent on a new Windows server and, while it downloads the hot files locally again, still serve them from the cloud copy.
You would see some degraded performance at first, but eventually all your hot files would be back on local storage. So it also works as a high-availability or disaster-recovery option for these types of file shares. It is very cost-efficient because the files are synced to the cloud automatically, and it uses the cloud's lifecycle management features too. The colder the data, the more it moves toward the archive tier of Blob storage, and you end up paying less and less.
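The hot/cold split described above boils down to a tiering decision per file. Here is a minimal sketch of that idea; the 30-day threshold and the `tier` helper are illustrative assumptions, not Azure File Sync's actual cloud-tiering policy (which also considers local free space).

```python
# Minimal sketch of cloud tiering: files not accessed recently are marked
# cold (moved to the cloud, recalled on demand); hot files stay local.
import time

COLD_AFTER_DAYS = 30  # illustrative threshold, not Azure's policy

def tier(files, now=None):
    """Split {name: last_access_timestamp} into hot (local) and cold (cloud)."""
    now = now or time.time()
    hot, cold = [], []
    for name, last_access in files.items():
        age_days = (now - last_access) / 86400
        (cold if age_days > COLD_AFTER_DAYS else hot).append(name)
    return hot, cold

now = time.time()
files = {
    "budget.xlsx": now - 2 * 86400,     # accessed 2 days ago -> stays local
    "old_scan.pdf": now - 400 * 86400,  # untouched for over a year -> cloud
}
hot, cold = tier(files, now)
print(hot, cold)  # ['budget.xlsx'] ['old_scan.pdf']
```

Each cold file leaves a reparse point on the local share, so to users it still looks like it is there; only the bytes have moved to the cloud.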
Like I mentioned, I think it has a lot of potential use, because I see this problem all the time with these file shares, and it just makes a lot of sense. Why would you not want to pay for the cheapest storage you can find?
This was a summary of the Microsoft Azure topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services), who discussed topics related to his expertise.
Listen to the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.