Microsoft has announced the limited preview of Azure Shared Disks. With this announcement it becomes possible to migrate clustered environments running Windows Server to Azure. This capability is designed to support SQL Server, Scale-Out File Servers, RDS User Profile Disks and SAP ASCS/SCS servers running on Windows. Linux-based clustered file systems such as GFS2 are also supported.
The diagram above shows a 2-node cluster with a single shared disk. Only one node receives write access; the other node only receives read access. If Azure Virtual Machine 1 goes down, write access is transferred to Azure Virtual Machine 2. This scenario can be extended to more than 2 machines, and multiple shared disks can be attached as well, making it ideal for running parallel jobs or other multi-machine tasks.
Azure Shared Disks are currently only available on Premium SSD disks of size P15 (256 GiB) and larger. Microsoft has announced that Azure Ultra Disk support will follow soon. The maximum number of nodes that can attach to a disk must be set before mounting the disk to any node, and each disk size has its own limit for this value. The IOPS and bandwidth limits of the disk are not affected by this number, so I would recommend setting this value as high as possible when deploying. If a shared disk later needs to support more nodes, the disk must first be unmounted from all nodes.
Microsoft has announced SSD bursting capabilities. This means that Premium SSD disks can achieve higher peak performance than their provisioned maximum, up to 3,500 IOPS and a bandwidth of up to 170 MiB/s. Together with this announcement, Microsoft also introduced new disk sizes (4, 8 & 16 GiB).
With the new bursting disks you can achieve up to 30 times the provisioned performance, which gives better results for spiky workloads. Disk bursting is based on a credit system: you accrue bursting credits whenever traffic stays below the provisioned limit, and spend them when you exceed it. Let me try to explain it using a simple chart.
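The credit mechanism described above can be sketched in a few lines of Python. This is a simplified illustration of a credit bucket, not Azure's actual accounting; the provisioned limit, burst limit and credit pool size below are example values, not official figures.

```python
# Simplified sketch of credit-based disk bursting (illustrative values,
# not Azure's exact accounting).
def simulate_bursting(demand, provisioned_iops=120, burst_iops=3500,
                      max_credits=3500 * 1800):
    """Return the IOPS actually served for each second of demand."""
    credits = max_credits  # the credit bucket starts full
    served = []
    for wanted in demand:
        if wanted <= provisioned_iops:
            # Below the provisioned limit: serve it and accrue credits.
            credits = min(max_credits, credits + (provisioned_iops - wanted))
            served.append(wanted)
        else:
            # Above the limit: burst as far as the credit balance allows.
            allowed = min(wanted, burst_iops)
            spend = allowed - provisioned_iops
            if spend > credits:
                allowed = provisioned_iops + credits
                spend = credits
            credits -= spend
            served.append(allowed)
    return served

# A spiky workload: two quiet seconds, then a 3,000 IOPS spike.
print(simulate_bursting([50, 50, 3000, 3000]))  # -> [50, 50, 3000, 3000]
```

Once the credit pool is drained, the disk falls back to its provisioned IOPS until traffic drops below the limit again and credits rebuild.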
For very demanding storage workloads, Azure has released the Ultra Disk performance tier for production use. I already wrote about it in a previous post (Slow IOPS in Azure VMs? Not anymore!), but now it's time to take a deeper look.
Which disk types do we have in Azure?
In the following table you can see the differences between all disk types in Azure. This table should help you decide which disk type to use for specific workloads.
— UPDATE 31-12-2019 — New disk sizes P1-P3 & E1-E3
In Azure there are several ways to implement your VM storage, and I get a lot of complaints about slow storage in Azure. In this article I will try to explain why it might be slow, and what you can do about it. A storage limit can be hit at multiple points, so I will address each of them in the following topics.
Virtual machine type
The first limitation might come from your virtual machine itself. Each VM type has its own total IOPS limit, so adding more disks, or faster disks, than the VM type and size allows will not make any difference in the end. One obvious way to get faster disk performance is to use SSD disks instead of HDD.
But keep in mind that not all virtual machine types support Premium SSD storage; the Av2 series, for example, is effectively limited to 500 IOPS per disk. And then there is host caching, which affects performance as well. A few examples:
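The interaction between VM and disk limits boils down to a simple rule: your effective throughput is the sum of the disk limits, capped by the VM limit. A small sketch (the VM cap of 6,400 IOPS below is an example value, not a specific SKU; the 5,000 IOPS per-disk figure matches a P30 Premium SSD):

```python
# The effective IOPS you get is capped by both the VM size and the
# attached disks. The VM limit below is an illustrative example.
def effective_iops(vm_iops_limit, disk_iops_limits):
    """Total IOPS is the sum of the disks, capped by the VM limit."""
    return min(vm_iops_limit, sum(disk_iops_limits))

# A VM capped at 6,400 IOPS with two P30 disks (5,000 IOPS each)
# cannot use the full 10,000 IOPS the disks could deliver.
print(effective_iops(6400, [5000, 5000]))        # -> 6400

# Adding a third disk changes nothing: the VM is the bottleneck.
print(effective_iops(6400, [5000, 5000, 5000]))  # -> 6400
```

This is why simply attaching more or faster disks often does nothing: you must also check that the VM size can actually drive them.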
Recently I received a comparison of Azure with its competitors, which stated that by default Azure provides an SLA of 99.95%. However, this is not entirely correct: by default, a single basic virtual machine has no SLA at all!
I hear you thinking: what?!? Let me explain what the options are. First we need to know a bit more about the setup in Azure. For this explanation I will use the West Europe and North Europe regions. These regions have Availability Zones, but this is not the case for every region. In the picture below you can review the Azure regions and their options.
So let's zoom in a bit further. In the picture below we have our 2 regions (West and North Europe). Within Region 1 there are 3 separate buildings, forming 3 Availability Zones.
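To put SLA percentages into perspective, it helps to convert them into the maximum allowed downtime per month. A quick sketch (the tiers below reflect the commonly quoted Azure VM SLA levels; the 30-day month is a simplifying assumption):

```python
# Convert an availability SLA into maximum allowed downtime per month,
# assuming a 30-day month for simplicity.
def downtime_minutes_per_month(sla_percent, minutes_per_month=30 * 24 * 60):
    return (1 - sla_percent / 100) * minutes_per_month

for label, sla in [("Single VM (premium disks)", 99.9),
                   ("Availability set", 99.95),
                   ("Availability zones", 99.99)]:
    print(f"{label}: {downtime_minutes_per_month(sla):.1f} min/month")
```

The difference looks small on paper, but 99.95% still allows roughly five times as much downtime per month as 99.99%, which matters for business-critical workloads.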