In my previous post in this series, I discussed the different ways of accessing data in Azure Storage Services generally. Today I am going to look at storage accounts in detail.
Anil Erduran

Anil Erduran is a principal consultant and subject matter expert for Hitachi Data Systems EMEA, based in London, UK. He is also a dual category Microsoft Most Valuable Professional in Cloud and Datacenter Management and Microsoft Azure. Anil can be found on Twitter @anil_erduran.

Why do I need a storage account?

To get started with Azure Storage Services, you need two things:

  • Azure subscription
  • Storage account

An Azure Subscription gives you access to a variety of Azure services. It’s an entry ticket to the game. Once you have an Azure subscription, the next thing you need to do is to create a storage account; this is a secured account that allows you to access different data abstraction tools, such as the Blob, Queue, Table and File services.

As discussed in my previous post, every single object held within the Azure Storage system has a unique endpoint specified with a URL.

Storage account architecture

As shown in the graphic above, all access to Azure Storage, both services and data, is done through the storage account.

It’s also important to note that each storage account supports the scalability and performance targets outlined in this article.

Storage account types

There are two storage account types available.

Standard storage account

The Standard storage account is Azure’s default offering, and provides the Blob, Table, Queue and File Storage services.

According to the documentation, a Standard storage account offers a maximum of 500 IOPS per disk, and you can aggregate multiple disks to increase the IOPS if needed.

Premium storage account

Last year, Microsoft announced Premium Storage, which is designed for high-end applications that require high-performance, low-latency SSD disks.

You can now attach several Premium Storage SSD disks to a virtual machine, for up to 64 TB of storage per VM, and your applications can achieve up to 80,000 IOPS and 2,000 MB per second of disk throughput. With numbers like these, I/O-intensive applications in the cloud run extremely fast.

There are a few more things to know about a Premium storage account:

  • Supports only page blobs, which means you can store only VM virtual disks (VHDs).
  • Disks can be attached only to DS, DSv2, or GS series virtual machines.
  • Inside a Premium storage account, you can provision three types of disk: P10, P20, and P30.

Premium Storage disk types

  • When you provision a Premium Storage disk, the capacity, IOPS, and throughput shown above are guaranteed.
  • Only locally redundant storage (LRS) replication is available for Premium Storage, so it’s important to copy snapshots to a geo-redundant Standard storage account if you need geo-level availability.
  • By default, all premium data disks use a read-only caching policy. For write-heavy or write-only data disks you can consider disabling disk caching.
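For example, the caching policy is set when you attach a data disk to the VM. Here is a minimal sketch using the AzureRM.Compute cmdlets; the resource group, VM, disk name, and VHD URI are all placeholders:

```powershell
# Get a reference to an existing VM (placeholder names)
$vm = Get-AzureRmVM -ResourceGroupName "myRG" -Name "myVM"

# Attach a new 512 GB premium data disk with host caching disabled (None),
# a sensible setting for write-heavy workloads
Add-AzureRmVMDataDisk -VM $vm -Name "datadisk1" `
    -VhdUri "https://mypremiumaccount.blob.core.windows.net/vhds/datadisk1.vhd" `
    -Lun 0 -Caching None -DiskSizeInGB 512 -CreateOption Empty

# Push the updated configuration back to Azure
Update-AzureRmVM -ResourceGroupName "myRG" -VM $vm
```

For read-heavy data disks, you would pass -Caching ReadOnly instead.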

It’s also possible to migrate your existing VMs from a Standard Account to a Premium Account. This process involves two steps:

  1. Migrate the disks (stop the VM, copy the VHDs to the Premium storage account using AzCopy or Copy Blob, and create a new OS disk)
  2. Convert the VM size (create a DS, DSv2, or GS series VM using the copied OS and data disks)
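The disk-migration step can be sketched with AzCopy roughly as follows. This assumes the classic AzCopy-for-Windows syntax of that era; the account names, container names, and keys are placeholders:

```powershell
# Stop and deallocate the VM so the VHD is in a consistent state
Stop-AzureRmVM -ResourceGroupName "myRG" -Name "myVM" -Force

# Copy the VHD blob from the Standard account to the Premium account
AzCopy /Source:https://mystandardaccount.blob.core.windows.net/vhds `
       /Dest:https://mypremiumaccount.blob.core.windows.net/vhds `
       /SourceKey:<standard-account-key> /DestKey:<premium-account-key> `
       /Pattern:myvm-osdisk.vhd
```

Once the copy completes, you create a new OS disk from the copied VHD and attach it to the new DS/DSv2/GS series VM.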

Creating a storage account – ARM or classic model?

As you have probably heard, when the new Azure portal (portal.azure.com) was announced at Build 2014, Microsoft also revealed its new deployment model: Azure Resource Manager (ARM).

This new model lets you capture your Azure deployments in a JSON file so that you can reuse it for consistent, repeatable application deployments. It is an API surface for managing resources in Azure and can be consumed from different endpoints, such as PowerShell, the Azure CLI, and the Azure portal. You can also define tags on resources such as Compute, Network, and Storage for billing purposes.

So in short, it’s a huge change that simplifies deployments in public and private clouds. I’m not going to talk about ARM in detail, but it’s important to understand the concept as there are many differences between the Resource Manager and the classic deployment model when it comes to creating resources such as storage.

In the classic deployment model, you first need to create a cloud service to contain all your virtual machines; these virtual machines are assigned IP addresses automatically. Then you need to create a storage account to host the virtual disks.

Classic deployment model

In the ARM model, storage is simply a Resource Provider. A Resource Provider is a service that supplies the resources you deploy through Resource Manager, and you work with those resources through REST API operations.

ARM deployment model

Therefore, for a typical virtual machine deployment using ARM, you need a specific storage account created in the Storage Resource Provider to store virtual disks in blob storage. You also need to specify a NIC in the Network Resource Provider and optionally an availability set in the Compute Resource Provider.

Create storage account

Thus, it’s important to decide which deployment model you are going to use for your applications before creating storage accounts.

Assuming you are using the new ARM-based deployment model, here are the basic steps to create a storage account:

In the new Azure portal, browse to: New –> Data –> Storage –> Storage account

Creating storage account options

Fill in the fields as per your requirements:

  • Choose RESOURCE MANAGER for the deployment model
  • Select the storage account type: Standard or Premium
  • Replication options – See my Azure Storage Services introduction for more details
  • Provide a name for your Resource Group.

Storage account settings
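If you prefer scripting to the portal, the same ARM-based account can be created with the AzureRM cmdlets. A minimal sketch; the resource group and account names are placeholders, and note that newer AzureRM versions rename the -Type parameter to -SkuName:

```powershell
# Create a resource group to hold the storage account
New-AzureRmResourceGroup -Name "myRG" -Location "West Europe"

# Create a standard, locally redundant storage account in that group
# (the account name must be globally unique, 3-24 lowercase letters/numbers)
New-AzureRmStorageAccount -ResourceGroupName "myRG" `
    -Name "mystorageaccount01" `
    -Location "West Europe" `
    -Type "Standard_LRS"
```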

When you create a storage account, two 512-bit storage access keys are generated; you can use these keys to access your data. However, it’s important to note that if you want to offer delegated, limited access to your data, sharing these storage access keys with others is not the best way to do it.

Access keys

These keys are used in authentication to access the storage account from your application or cloud services, and I am going to use them in my next post when I explain access to storage services.

There is a reason why Microsoft generates two access keys for each storage account. Your security policy may force you to change access keys on a regular basis. You can use the secondary access key to update the configuration file in your cloud services and then regenerate the primary key. Then you simply provide the primary key in the configuration file and cloud services will be able to access the storage account using the newly generated key.
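The rotation described above can also be scripted. A sketch, assuming the placeholder resource group and account names used earlier:

```powershell
# Once your services have switched to the secondary key,
# regenerate the primary key (key1)
New-AzureRmStorageAccountKey -ResourceGroupName "myRG" `
    -Name "mystorageaccount01" -KeyName key1

# List both keys to confirm the new primary key value
Get-AzureRmStorageAccountKey -ResourceGroupName "myRG" -Name "mystorageaccount01"
```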

As you may realize, these keys are used for authentication and shouldn’t be shared with clients or customers. If you want to provide limited access to the data in your storage account for a specified amount of time, you need to consider the Shared Access Signature (SAS) model. Using SAS, you simply create a signature, in URI format, that points to your storage resources. Here is a simple SAS URI:

https://mystorageaccount01.blob.core.windows.net/container1/text1.txt?sv=2015-04-05&st=2016-01-29T22%3A18%3A26Z&se=2016-05-30T02%3A23%3A26Z&sr=b&sp=rw&sip=85.5.6.96-85.5.6.97&spr=https&sig=RHIX5RHIX5RHIX5MboXr1P9ZTjEg2tYkboXr1P9ZUXDtkk%3D

This URI includes multiple parts: the blob URI, the start and expiry times, the permissions, the allowed IP range, and the signature itself.
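A SAS URI like the one above can be generated with the Azure.Storage cmdlets. A sketch with placeholder account credentials:

```powershell
# Build a storage context from the account name and key (placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount01" `
                               -StorageAccountKey "<account-key>"

# Create a read/write SAS for one blob, valid for four hours,
# returned as a full URI including the signature
New-AzureStorageBlobSASToken -Container "container1" -Blob "text1.txt" `
    -Permission rw -StartTime (Get-Date) -ExpiryTime (Get-Date).AddHours(4) `
    -Context $ctx -FullUri
```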

For more information regarding SAS scenarios please take a look at this article.

Now let’s check if we can browse our new storage account using Azure Resource Manager cmdlets.

The AzureRM module includes Resource Manager commands that you can use to access and manage ARM resources from PowerShell. But first you need to import the module and components using the following commands:

AzureRM module

You can also import just the AzureRM.Storage module to get the storage provider cmdlets. That’s one of the nicest parts of Resource Manager: you can work with the individual parts of your resources. See below:
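The import commands themselves are short:

```powershell
# Import the whole AzureRM rollup module...
Import-Module AzureRM

# ...or only the storage provider cmdlets
Import-Module AzureRM.Storage

# List the cmdlets the storage module provides
Get-Command -Module AzureRM.Storage
```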

Available AzureRM modules

To start with the ARM cmdlets, you need to log in to your subscription using the Login-AzureRmAccount cmdlet. The screenshot below demonstrates a few other cmdlets that can help you get started with the ARM storage module:

Figure 11: Basic AzureRM.Storage commands
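For reference, the commands in the screenshot follow this pattern; the resource group and account names are placeholders:

```powershell
# Log in to the subscription (interactive prompt)
Login-AzureRmAccount

# List all storage accounts in the subscription
Get-AzureRmStorageAccount |
    Select-Object StorageAccountName, ResourceGroupName, Location

# Retrieve a single storage account by name
Get-AzureRmStorageAccount -ResourceGroupName "myRG" -Name "mystorageaccount01"
```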

What’s next?

Here I have explained what Azure Storage Services is, the types of storage accounts and replication models offered, and how to create a storage account for the first time.

Now it’s time to dig in deeper and play with different services such as blobs, tables, queues and files.

© 4sysops 2006 - 2017