Last May, the Microsoft Build 2019 conference took place in Seattle. A number of new features and capabilities were announced and demonstrated.
This week, communities across the Asia Pacific held their own version of Build, bringing the best of the conference's highlights to local user communities through live webinars and in-person events. I delivered the SQL updates here in Singapore, and here are the notable highlights:
Azure SQL Database Serverless
Serverless is a new pricing tier for single Azure SQL Databases under the vCore model on Gen5 hardware. It is well-suited for intermittent workloads that can tolerate a warm-up period for their resources. While I was exploring this feature, it reminded me a lot of databases with the AUTO_CLOSE option enabled. Compared to the provisioned pricing tiers it is priced a bit higher, but it is billed per second instead of per hour, and you are only charged for the compute you actually use, which gives you the best value for your money. You can set minimum and maximum vCore values (0.5 or 1 for the minimum, and 1 or 4 for the maximum). You can then set an auto-pause delay; the current lowest value is six hours.
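To make the per-second billing model concrete, here is a minimal sketch of the idea: each second of compute is billed at the vCores actually used, clamped between the configured minimum and maximum. The rate constant is a made-up placeholder, not an actual Azure price, and the real serverless billing formula also factors in memory usage, which this sketch ignores.

```python
# Simplified model of serverless per-second billing: each second is billed
# at the CPU actually used, clamped to [min_vcores, max_vcores].
# The rate below is a hypothetical placeholder, NOT a real Azure price.
HYPOTHETICAL_RATE_PER_VCORE_SECOND = 0.000145

def billed_vcore_seconds(usage_samples, min_vcores=0.5, max_vcores=4.0):
    """usage_samples: per-second CPU usage in vCores while the DB is online.
    Each second is billed at no less than min_vcores and no more than max_vcores."""
    return sum(min(max(u, min_vcores), max_vcores) for u in usage_samples)

def compute_cost(usage_samples, min_vcores=0.5, max_vcores=4.0):
    """Total compute cost for the sampled period (hypothetical rate)."""
    seconds = billed_vcore_seconds(usage_samples, min_vcores, max_vcores)
    return seconds * HYPOTHETICAL_RATE_PER_VCORE_SECOND
```

For example, three seconds of 0.1, 2.0, and 6.0 vCores of usage bill as 0.5 + 2.0 + 4.0 vCore-seconds with the defaults above: idle seconds are floored at the minimum, spikes are capped at the maximum.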
Your database will start with the minimum vCores you set and scale up as the workload requires. If you have a new application and are not certain how many resources it will need, this is a good place to test-run it, because the scaling is automatic and will show you how much you really need. If there is no activity on your database for the duration of the auto-pause delay you have set – activity being any connection or any configuration change – your database will be paused. During that time the memory is reclaimed and the CPU is released, and from then on you are not charged for compute until new activity occurs on your database.
Something to highlight: since the database is paused and its resources are released, the first connection to a paused database will fail with an error, but succeeding connections will be successful. That first connection is the event that signals the database to resume. So you will need to either issue a resume before attempting a connection or add connection retry logic to your application, which is a best practice for Azure SQL Database anyway. During my session, someone asked whether this will change in the future – resuming the database instead of throwing an error. That I'm not sure about.
After resuming, the database starts with the minimum resources you have set and slowly scales up as your workload grows. I can see a lot of scenarios that will benefit from this – dev and test environments come to mind, as well as all those third-party applications I've seen with AUTO_CLOSE set to true. For more information, you can read more here.
Azure SQL Database Hyperscale
At the other end of the spectrum is something for the VLDBs out there: Hyperscale. The pain point for any large database is maintenance and operational tasks, including adding compute and storage and performing backups and restores. This one actually blew my mind while I was checking it out. Hyperscale more or less re-architects how SQL Server works with its storage and logs to provide high performance and easier storage operations for databases larger than 4 TB.
The engine is decoupled from the storage, so scaling both compute and storage is faster and backup and restore operations are less painful. The promise is that backup and restore will take almost the same time whether your database is 4 TB or 100 TB. In the demo, a 50 TB database was restored to a new DB in less than 8 minutes, which is impressive. Hyperscale gives you a pricing tier that scales beyond the current 4 TB limit of the other offerings, up to 100 TB and, in theory, beyond.
The architecture puts your engine on one compute node with plenty of local SSD to use as a resilient buffer pool extension (RBPEX), where your most recently accessed data lives, much like the buffer cache in traditional SQL Server. This guarantees less than 0.5 ms of data access latency. Your data files are then distributed in 128 GB chunks to what it calls "page servers", each with remote SSD. It is akin to the SAN storage architecture we have on-prem, where the storage for SQL Server sits in remote SAN boxes.
Those remote SSDs are also used as RBPEX, each holding its entire 128 GB chunk of data and guaranteeing less than 2.5 ms of data access latency. All the while, the actual data files reside on Azure Standard Storage attached to the page servers. This is also how backup and restore can be so fast: both operations use file snapshots of the data files, which are isolated from the compute.
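A toy sketch makes the 128 GB striping arithmetic concrete. Hyperscale's actual placement logic is internal to Azure, so this is purely illustrative: it just shows how a fixed chunk size lets any byte offset be mapped to a page server with simple integer division.

```python
# Illustrative only: map a database byte offset to a page server, assuming
# data is striped in fixed 128 GB chunks as described above. The real
# Hyperscale placement logic is internal to the service.
CHUNK_BYTES = 128 * 1024**3  # 128 GB held by each page server

def page_server_for_offset(byte_offset):
    """Return (page_server_index, offset_within_chunk) for a byte offset."""
    return byte_offset // CHUNK_BYTES, byte_offset % CHUNK_BYTES
```

The appeal of this layout is that adding capacity means adding page servers, while the compute node only ever talks to the server that owns the chunk it needs.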
It also changes the way SQL Server performs its log operations; you can read more about that here. Given its architecture, it is unlikely to come to the on-premises SQL Server product as-is – it was, after all, built for the cloud – but I wonder how it will influence the next iteration of SQL Server.
Worth a Mention:
There were a lot of new features announced at Build 2019, so it takes time to digest everything. But buried in there are the following SQL-related features that are worth mentioning:
1. Azure SQL Database Edge – a small database engine with time-series, data streaming, and ML capabilities for your edge devices.
2. Azure Cosmos DB now offers APIs for Spark and etcd.
3. Azure Cosmos DB now provides a notebook experience for all APIs.
4. You can now choose the collation for Azure SQL Database Managed Instance.
5. You can now choose the time zone for Managed Instance instead of being stuck with UTC.
6. Managed Instance now has a public endpoint that you can open to Azure services or the internet.
For the entire list of Updates from Build 2019, you can visit this page.
Interested in working with Pio? Schedule a tech call.