We are coming to the end of our article series on Azure Storage Services. In this section, we are going to look at the details of Azure Premium Storage.
Anil Erduran

Anil Erduran is a principal consultant and subject matter expert for Hitachi Data Systems EMEA, based in London, UK. He is also a dual category Microsoft Most Valuable Professional in Cloud and Datacenter Management and Microsoft Azure. Anil can be found on Twitter @anil_erduran.

On April 16, 2015, the Microsoft Azure portal started to offer two performance options when you create a new storage account: Standard and Premium.

Azure portal performance options

The Standard storage performance option is designed to store your data on traditional HDDs and also allows you to use different Azure storage services such as tables, blobs, files, and Azure VM disks. This package is the default option and it can be used for many aspects of application development.

The Premium Storage option, on the other hand, brings high-performance, low-latency SSD disk support to Azure virtual machines that run I/O-intensive workloads.

Use cases ^

The Premium Storage option is designed to meet high performance expectations. Any workload in your datacenter that demands high IOPS and low latency is therefore a potential fit for Azure VMs that use Premium Storage disks. Exchange Server, SAP suites, and databases such as SQL Server, MySQL, and Oracle are examples of workloads that require high performance and low latency in order to run properly; they are good candidates for Azure VMs powered by Premium SSD disks.

Premium Storage is available only for storing data on disks that are used by Azure virtual machines and are backed by page blobs in Azure Storage. For other storage requirements, such as block blobs, tables, or queues, the Standard storage option should be considered.

The Premium performance option is not available for Blob storage accounts (a newer account type specifically for storing blob data, which lets you choose between two access tiers: hot and cool).

Premium option does not support Blob Storage


Supported Azure VM types and limits ^

High-performance, low-latency SSD disks are available only as part of a Premium Storage account and can be attached only to DS, DSv2, and GS series virtual machines.

Currently, there are three sizes of Premium disk that can be attached to these VMs. Each disk size also has its own performance specifications. The following table shows the IOPS and throughput details for the different disk sizes:

Disk type               P10         P20         P30
Size                    128 GB      512 GB      1,024 GB
IOPS per disk           500         2,300       5,000
Throughput per disk     100 MB/s    150 MB/s    200 MB/s

If your application requires 30,000 IOPS, you can combine 6 * P30 disks or 14 * P20 disks (13 * P20 would fall just short, at 29,900 IOPS). You will also need to select the correct VM type to make sure the IOPS, throughput, and the maximum number of data disks you can attach are within the scale limits for the size of the VM in question. You should also consider whether the processing and memory capacities of the chosen VM are large enough for your application.

To look at it in more detail, let's say your application requires 10,000 IOPS. You could achieve this by using 2 * P30 disks, so the Standard_DS2 VM would seem a good fit in this example; it supports the addition of up to four data disks.

Size                              Standard_DS2
CPU cores                         2
Memory                            7 GB
NICs (max.)                       2
Max. local SSD disk size          14 GB
Max. data disks (1,023 GB each)   4
Cache size                        86 GB
Max. disk IOPS/bandwidth          6,400 / 64 MB per second
Max. network bandwidth            High

This VM size is also limited to providing a maximum of 6,400 IOPS, however, so your application would not be able to leverage the maximum IOPS with two P30 disks. In this case, the Standard_DS3 VM size would be a better choice.

In other words, you should check every aspect of the chosen VM type to see if it meets your requirements.

Replication options ^

You can only select the Locally Redundant Storage (LRS) replication option for Premium Storage resources, which means your data is replicated three times within the same region. For more information regarding replication options, please read my article about accessing Azure storage services.

This may be enough for some of your workloads, but you may also want to leverage Azure's distributed datacenter infrastructure to ensure that your application is durable and highly available. In this case, you'll have to get your hands a bit dirty.

You will need to create snapshots using the Snapshot Blob REST API call and then use a copy operation to copy the snapshot to a geo-redundant storage account. You can leverage the AzCopy tool or Copy Blob REST API for the copy operation.
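As a minimal sketch of that workflow in Azure PowerShell, assuming a source Premium account holding the VM disk and a destination geo-redundant account (the account names, keys, container, and blob name below are placeholders, and parameter names vary slightly between Azure PowerShell versions):

```powershell
# Contexts for the Premium (source) and geo-redundant (destination) accounts
$srcCtx  = New-AzureStorageContext -StorageAccountName "premiumacct" -StorageAccountKey $srcKey
$destCtx = New-AzureStorageContext -StorageAccountName "grsacct" -StorageAccountKey $destKey

# Snapshot the page blob that backs the VM disk
$blob     = Get-AzureStorageBlob -Container "vhds" -Blob "datadisk01.vhd" -Context $srcCtx
$snapshot = $blob.ICloudBlob.CreateSnapshot()

# Copy the snapshot to the geo-redundant account
Start-AzureStorageBlobCopy -ICloudBlob $snapshot -DestContainer "backups" `
    -DestBlob "datadisk01.vhd" -DestContext $destCtx
```

Alternatively, AzCopy can perform the copy operation; the snapshot step keeps the source disk consistent while the copy runs.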

Performance testing on Premium disks ^

Let's do some performance tests using IOMeter and Diskspd on my Azure Premium disks. As I mentioned before, in order to start using Premium disks, you need a Premium storage account. Creating one is a straightforward process: you can either go to the Azure portal and select the Premium performance option, or use the "Premium_LRS" storage type parameter in PowerShell.

Here’s the Azure portal Premium option:

Creating a Premium Storage account


If you have followed this blog series from the beginning, I’ll assume you already have Azure PowerShell installed. If so, to create a Premium Storage account with PowerShell, you just need to specify “Premium_LRS” as the account type.

Creating a Premium Storage account with PowerShell

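A minimal sketch of that command, assuming an existing resource group (the resource group, account name, and location are placeholders; older AzureRm versions use -Type, newer ones use -SkuName):

```powershell
# Creates a Premium (locally redundant) storage account
New-AzureRmStorageAccount -ResourceGroupName "rg-premium" `
    -Name "mypremiumaccount" `
    -Location "West Europe" `
    -Type "Premium_LRS"
```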

Now you can attach a Premium SSD disk to a new or existing DS, DSv2, or GS series virtual machine.

I already have a DS14_V2 series VM that I created before, which provides a maximum of 50,000 IOPS and allows you to attach up to 32 data disks. Microsoft has also used this VM size to provide some benchmarking results.

DS14_V2 specifications


I’m adding several SSD disks to my current DS14_V2 machine using the Add-AzureRmVMDataDisk cmdlet and a simple loop, as below:

This command block logs in to my Azure subscription and then creates a bunch of empty data disks, before attaching them to my VM.
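The loop looks roughly like this (the resource group, VM, and storage account names are placeholders, and the cmdlet name may differ slightly between Azure PowerShell versions):

```powershell
# Log in and get a reference to the target VM
Login-AzureRmAccount
$vm = Get-AzureRmVM -ResourceGroupName "rg-premium" -Name "ds14v2-vm"

# Create and attach 11 empty 700 GB data disks (each lands in the P30 tier)
for ($i = 1; $i -le 11; $i++) {
    $vhdUri = "https://mypremiumaccount.blob.core.windows.net/vhds/datadisk$i.vhd"
    Add-AzureRmVMDataDisk -VM $vm -Name "datadisk$i" -VhdUri $vhdUri `
        -LUN $i -Caching None -DiskSizeInGB 700 -CreateOption Empty
}

# Persist the changes to the VM
Update-AzureRmVM -ResourceGroupName "rg-premium" -VM $vm
```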

One important point here is the size of the disks. As mentioned before, there are only three pre-defined size options for Premium Storage disks (P10, P20, P30). If you specify a different size, Azure maps the disk to the smallest pre-defined option that can contain it and assigns that option's performance, without changing the disk's actual size.

For instance, I created 11 data disks, each with a size of 700 GB. As their size is greater than 512 GB, these disks fall into the P30 category and can now provide 5,000 IOPS each. The actual size (700 GB) remains the same, however.

Matching disk sizes


I wanted to add 11 disks because 10 disks already provide the maximum IOPS supported by the DS14 VM (10 * 5,000 = 50,000), and I can use the extra disk to see the benefits of the cache options.

The data disks added


Storage Spaces is a good choice for striping disks together in order to receive the maximum IOPS. I created a Storage Pool, added all my disks to it, and created a volume so that I can run performance tests on it.

Microsoft recommends using PowerShell if you need to attach more than eight disks to create a volume, as the Server Manager UI only allows you to set a total number of columns of up to eight for a stripe volume. With PowerShell, you can set the number of columns to be equal to the number of disks.
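A sketch of that PowerShell approach, assuming all 11 data disks are available for pooling (the pool and disk friendly names are placeholders, and the storage subsystem name varies by Windows Server version):

```powershell
# Pool every disk that is eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "PremiumPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Striped (Simple) virtual disk with one column per physical disk,
# then initialize, partition, and format it as a single volume
New-VirtualDisk -StoragePoolFriendlyName "PremiumPool" -FriendlyName "StripedDisk" `
    -ResiliencySettingName Simple -NumberOfColumns $disks.Count -UseMaximumSize |
    Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS
```

Setting -NumberOfColumns equal to the disk count is what lifts the eight-column limit imposed by the Server Manager UI.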

Storage Pool overview


Running tests ^

There are two main tools you can use to run performance tests on your disk configuration: IOMeter and Diskspd.

For IOMeter, you can create access specifications for the different test scenarios you want to run across your disks. I created two access specifications with the following settings:

Access specification    Request size    Random %    Read %
RandomWrites_8K         8 KB            100         0
RandomReads_8K          8 KB            100         100

IOMeter access specifications

I also used the TestFileCreator.exe tool to create a 200 GB test file on the target volume. If a file named iobw.tst exists, IOMeter will use it to run read/write tests.

You can also use the Diskspd.exe tool to run similar tests. I suggest using an 8 KB I/O size and a high queue depth of 128 for each test.
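As a sketch, a 100% random-read Diskspd run with those parameters might look like this (the drive letter, duration, and thread count are examples, not values from the original tests):

```powershell
# -b8K: 8 KB blocks; -d60: 60-second run; -o128: 128 outstanding I/Os per thread;
# -t4: 4 threads; -r: random; -w0: 0% writes (100% read); -L: latency stats;
# -c200G: create a 200 GB test file if it does not exist
.\diskspd.exe -b8K -d60 -o128 -t4 -r -w0 -L -c200G E:\iobw.tst
```

Changing -w0 to -w100 gives the 100% write case, and -w50 the mixed test used later in this article.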

You may also need to warm up your cache disks beforehand to get the maximum IOPS out of ReadOnly host caching. You can use IOMeter to initialize (around two hours) and warm up (around two hours) the cached disks.

After a couple of tests, I managed to reach the maximum supported IOPS for my DS14 VM:

Performance monitor - Disk read


Diskspd - Read IOPS


You can also enable cache options for some of your data disks. It’s important to check if your application is capable of handling caching. For data disks, Azure allows you to set the following cache options:

None: no caching

ReadOnly: if you need low read latency and high read IOPS, the ReadOnly cache setting is a good choice. Here, reads are served from VM memory and the local SSD.

ReadWrite: this is a caching option for read/write operations. As this option also provides write caching, you should check whether your application is capable of flushing cached data to persistent storage. If not, you may experience negative consequences, such as data loss, in the case of a VM crash.

I used the following script to change the cache settings for all my data disks:
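The script is roughly as follows (the resource group and VM names are placeholders):

```powershell
# Set ReadOnly host caching on every data disk of the VM, then save the change
$vm = Get-AzureRmVM -ResourceGroupName "rg-premium" -Name "ds14v2-vm"
foreach ($disk in $vm.StorageProfile.DataDisks) {
    Set-AzureRmVMDataDisk -VM $vm -Name $disk.Name -Caching ReadOnly | Out-Null
}
Update-AzureRmVM -ResourceGroupName "rg-premium" -VM $vm
```

Swapping ReadOnly for ReadWrite or None tests the other two cache options.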

If I enable ReadOnly caching, I see an impressive increase in IOPS: the average is 65,251 (compared with 51,012 before).

Diskspd - Read IOPS with ReadOnly cache enabled


If I enable the ReadWrite cache, I see a similar performance improvement on reads (100% read), but not so much for writes (100% write).

Diskspd - Read IOPS with ReadWrite cache enabled


Diskspd - Write IOPS with ReadWrite cache enabled


The results are more balanced for a mixed test of 50% writes and 50% reads. I see a total of 64K IOPS, which exceeds the 50,000 uncached IOPS limit of the DS14 because cached reads are served locally.

Diskspd - IOPS for read and write


In summary, enabling cache options may give you a boost in terms of IOPS – especially for read performance.

Overall, your choice really depends on your application requirements. If your needs dictate it, you can select Premium storage instead of Standard storage. I would recommend that you run tests like the ones described above, using IOMeter and Diskspd, in order to measure your application requirements and set the best caching options for your disks.

In some cases, using Premium Storage may reduce your overall operation costs, when compared with Standard Storage, as you can achieve the required IOPS via smaller VM sizes and fewer disks.

In the next post, we are going to look at the security options available with the Azure Storage service.
