Microsoft Azure now also offers virtual machines with SSDs. Amazon Web Services (AWS) EC2 instances have supported SSDs for a few months. I ran a quick disk I/O benchmark to compare the SSD offerings of the two cloud leaders.
By Michael Pietroforte

All the virtual machines of the new D-Series in Azure have SSDs. According to Microsoft’s announcement, these are the new virtual machine types.

Azure SSD VM sizes and pricing

General Purpose

Name          vCores   Memory (GB)   Local SSD (GB)
Standard_D1   1        3.5           50
Standard_D2   2        7             100
Standard_D3   4        14            200
Standard_D4   8        28            400

High Memory

Name           vCores   Memory (GB)   Local SSD (GB)
Standard_D11   2        14            100
Standard_D12   4        28            200
Standard_D13   8        56            400
Standard_D14   16       112           800

We have to distinguish here between remote and local storage. The new virtual machines only support local SSDs (drive D:). Notice that the system drive (drive C:) doesn’t use the SSDs. Thus, the boot-up speed of D-Series VMs shouldn’t differ from that of the A-Series VMs.

Microsoft warns against using local storage (drive D:) for storing any personal or application data because data stored on this drive is subject to loss and offers no redundancy or backup. However, unlike AWS EC2 ephemeral local storage, the data survives if you temporarily shut down the machine. Nevertheless, using the local SSD in production environments only makes sense if the data you store on it is replicated across multiple VMs.

Notice that Amazon AWS supports local and remote SSDs (EBS volumes). However, the speed of the local and remote SSDs varies significantly (see below).

Prices for the new Azure SSD VMs are difficult to compare because the CPU cores, RAM configuration, and disk sizes of the D-Series VMs differ from those of the A-Series. The D-Series VMs have CPUs that are 60 percent faster and offer up to 112 GB of memory.

Use SSDs in Azure

To launch a virtual machine with SSD support, you simply have to choose one of the VM sizes that starts with a “D.” The VM sizes that begin with an “A” come with local HDDs.

Microsoft Azure - Launching VMs with SSD

You can also change an existing VM into a D-Series VM. After you stop the VM, select it and then click Configure.

Change an old VM to SSD

Just in case you are wondering how you can check whether a virtual machine uses an SSD or HDD, the only way I know of is to measure the speed. The guest OS only sees the virtual disk and therefore doesn’t have access to the physical properties of the storage media.
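If you want to do a quick speed check without installing a benchmark tool, a few lines of script are enough. The sketch below is my own simplified illustration, not the AS SSD Benchmark tool used later in this article: it writes and then reads a test file sequentially and reports the throughput, which is usually enough to tell an SSD-backed disk from an HDD-backed one.

```python
import os
import tempfile
import time

def sequential_throughput_mb_s(path, size_mb=64, block_kb=1024):
    """Write, then read, a test file sequentially and return (write, read) MB/s.

    A rough stand-in for the "Seq" test of a disk benchmark. Note that real
    benchmark tools bypass the OS cache; this simple sketch does not, so the
    read figure in particular can be optimistic.
    """
    block = os.urandom(block_kb * 1024)
    blocks = size_mb * 1024 // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the cache
    write_mb_s = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mb_s = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mb_s, read_mb_s

if __name__ == "__main__":
    target = os.path.join(tempfile.gettempdir(), "seqtest.bin")
    w, r = sequential_throughput_mb_s(target)
    print(f"sequential write: {w:.0f} MB/s, read: {r:.0f} MB/s")
```

Run it once on drive C: and once on drive D: of a D-Series VM; the local SSD should show a clearly higher sequential figure.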

Azure vs. AWS SSD I/O benchmark

For my tests, I used the AS SSD Benchmark tool. Please note that my benchmark tests are by no means complete or exact. Performance tests have to be run over periods of time, and my tests represent only snapshots. Moreover, the tool I used is probably suboptimal for benchmarks in the cloud. I just ran these benchmarks to get a rough idea of the expected I/O improvements of the new SSDs in Azure.

I worked with a D3 VM (4 cores, 14 GB RAM, SSD) and an A5 VM (4 cores, 14 GB RAM, HDD). The screenshot below shows the benchmark of the remote HDD with an A5 VM.

Azure VM - Remote HDD

“Seq” measures the sequential speed by writing and reading a 1 GB file. The 4K tests read and write randomly chosen 4 KB blocks. I think data from these 4K tests is not reliable with remote storage. “Acc.time” stands for the access time. Note that we also have to take network latencies into account here.
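To make the difference between the two test types concrete, here is a simplified sketch (my own illustration, not the AS SSD Benchmark) of the 4K-style random-access measurement: instead of streaming one large file, it reads small 4 KB blocks at random offsets, which is where an SSD's low access time really shows.

```python
import os
import random
import tempfile
import time

def random_4k_read_time_ms(path, file_mb=16, samples=200):
    """Approximate a "4K" random-read test: read 4 KB blocks at random
    offsets of a test file and return the mean time per read in ms.

    A simplified sketch; because the OS may cache the freshly written file,
    the result is a lower bound on the real device access time.
    """
    file_bytes = file_mb * 1024 * 1024
    with open(path, "wb") as f:
        f.write(os.urandom(file_bytes))

    offsets = [random.randrange(0, file_bytes - 4096) for _ in range(samples)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)   # jump to a random position
            f.read(4096)  # read one 4 KB block
    elapsed = time.perf_counter() - start

    os.remove(path)
    return elapsed / samples * 1000.0
```

On an HDD, each random read pays a seek penalty of several milliseconds; on an SSD, the per-read time is typically well under a millisecond, which is why the 4K figures separate the two media more sharply than the sequential test.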

In comparison, the local SSD is, of course, much faster (sequential read/write).

Azure VM - Local SSD

Let’s see how this compares to the remote SSDs (EBS) in Amazon EC2 instances. I used the m3.xlarge instance type (4 vCPUs, 15 GB RAM).

EC2 instance - Remote SSD

As you can see, the sequential speed of the EC2 instance (58 MB/s, 69 MB/s) is not as fast as with Azure’s local SSDs (186 MB/s, 100 MB/s) but is significantly faster than Azure’s remote HDDs (31 MB/s, 5 MB/s). However, the bottleneck with remote SSDs is usually the network speed; thus, it makes more sense to compare the Azure SSD with the local EC2 SSD (instance store) read and write speed:

EC2 instance - Local SSD

The write speeds of the EC2 (97 MB/s) and Azure (100 MB/s) SSDs are more or less the same. However, Amazon’s SSDs (433 MB/s) appear to be significantly faster than Microsoft’s Azure SSDs (186 MB/s) in reading data.

All in all, I wouldn’t read too much into the details of the data and the overall score because the tests are very crude. I tested in the Azure location South Central US, and in the AWS availability zone us-east-1a with Windows Server 2012 R2.

3 Comments
  1. Rune Larsen 7 years ago

    Storage IOPS and stable latencies are usually much more important than bandwidth.
    The access time of the D3 instance (9.8 ms) indicates that Azure is not providing an SSD but a normal, cheap hard drive.

    Your local machine and AWS boxes have sub-millisecond access times, which is to be expected from a local SSD.

  2. Gerard 7 years ago

    @Rune: the 9.8 ms is for the C drive, which, as stated, is not an SSD; only the D drive is, which has a 0.17 ms read access time. The author's local machine is not even mentioned in this article!

  3. Rachit 6 years ago

    Azure does mess up a few things. Missing SSD for C Drive is one of them. This is also the reason why Database machines of equivalent sizes give better performance on AWS than Azure.


© 4sysops 2006 - 2023
