In this series, I will discuss in detail one of the fundamental parts of Azure — Azure Storage Services. In this first post, I will try to set the tone and provide an overview of Azure Storage Services.

About the series

Here are the blogs in this series that I’m planning to publish:

  1. Azure Storage Services — Introduction
  2. Azure Storage Services — Accessing services
  3. Azure Storage Services — Storage account
  4. Azure Storage Services — Blob storage
  5. Azure Storage Services — Table storage
  6. Azure Storage Services — File storage
  7. Azure Storage Services — Queue Storage
  8. Azure Storage Services — Premium storage benefits
  9. Azure Storage Services — Security
  10. Azure Storage Services — Useful tools

A couple of important points for you:

  • We will do lots of PowerShell, so I highly recommend you have at least a basic understanding of PowerShell cmdlets. I will try to upload all the code I develop for the articles so that you can reuse or modify it. I will also use the Azure CLI, the Azure Resource Manager versions of commands, and the REST APIs where possible.
  • This is going to be a technical series, so don’t expect too much pricing or nontechnical discussions.
  • As you have probably heard, Microsoft recommends using the Resource Manager deployment model instead of the classic model. So in this series, I may sometimes show both options, but I will leverage the Resource Manager model whenever possible.

What is Azure Storage Services?

Azure Storage Services is designed specifically for Internet-scale applications, that is, for applications that have to store more than 20 trillion objects or process millions of requests per second.

Yes, it sounds huge. We call that Internet scale. These large-scale modern applications handle millions of user requests that view and update backend data simultaneously. Data is growing faster and faster, and we face new challenges every day.

To support such massive data volumes and application requests, you need datacenters all over the world, just like Azure has had for years. Microsoft operates around 19 regions worldwide and gives you the opportunity to replicate your data to a second or third region when you want to build highly available, large-scale applications.

Azure regions

In short, Azure Storage is a kind of cloud storage solution for applications that need high availability and scalability.

Microsoft has built different automation and storage technologies into its datacenters to provide low-cost, high-performance cloud storage to its customers. For instance, Microsoft aims to keep provisioned storage around 70% utilized in production so that it can continue to provide service in the face of disk short-stroking or a rack failure.

Azure Storage is also the storage foundation for Azure Virtual Machines. Each virtual machine has system and data disks, and these disks are stored in Azure Storage Services. There is also SSD-based storage running on an optimized storage platform in Azure that provides consistently high I/O performance and low latency for workloads like online transaction processing (OLTP) and Big Data. You can use DS-series virtual machines to leverage SSD disks. These VMs require a Premium storage account, which we will cover in detail later in this series.

Are there any limits in the cloud?

It’s always recommended to check the Azure storage scalability and performance targets before designing the underlying cloud storage infrastructure for your applications. Most of the targets documented here are high-end targets, but they are achievable if you have large-scale applications. Some important things to consider:

  • You can create up to 100 storage accounts for each Azure Subscription.
  • Total size within a storage account cannot exceed 500 TB.
  • Maximum request rate per storage account is 20,000 requests per second.
  • Maximum size of a page blob is 1 TB.
  • Maximum size of a table entity is 1 MB.
  • Maximum size of a file share is 1 TB.
  • Disk size for standard storage accounts is 1023 GB.
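As a quick sanity check, you can validate a design against these documented targets before deployment. A minimal sketch in Python (the figures below simply mirror the bullet list above and should be re-checked against the current documentation):

```python
import math

# Classic scalability targets per storage account, taken from the list above.
# Always verify against the current Azure documentation before relying on them.
LIMITS = {
    "max_accounts_per_subscription": 100,
    "max_account_size_tb": 500,
    "max_page_blob_tb": 1,
    "max_table_entity_mb": 1,
    "max_file_share_tb": 1,
}

def fits_in_one_account(total_data_tb: float) -> bool:
    """Return True if the data set fits under the 500 TB account cap."""
    return total_data_tb <= LIMITS["max_account_size_tb"]

def accounts_needed(total_data_tb: float) -> int:
    """Minimum number of storage accounts needed for the given data set."""
    return max(1, math.ceil(total_data_tb / LIMITS["max_account_size_tb"]))

print(fits_in_one_account(750))  # False: 750 TB exceeds a single account
print(accounts_needed(750))      # 2 accounts required
```

This kind of check is trivial, but running it early keeps you from designing an application around a single account that the data will eventually outgrow.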

There are many more limits for different resource types in the article mentioned above. Once your application reaches the maximum limit for a particular resource, you will start getting error code 503 (Server Busy) or error code 500 (Operation Timeout) responses.
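When those throttling responses appear, clients are expected to back off and retry rather than hammer the service. A minimal sketch of exponential backoff with jitter (the `do_request` callable here is a stand-in for a real storage call, not a specific SDK API):

```python
import random
import time

RETRYABLE = {500, 503}  # Operation Timeout / Server Busy

def with_backoff(do_request, max_attempts=5, base_delay=0.5):
    """Retry a storage operation on 500/503, doubling the delay each time."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status not in RETRYABLE:
            return status, body
        # Exponential backoff with a little jitter to avoid thundering herds.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return status, body

# Demo: a fake request that is throttled twice, then succeeds.
calls = iter([(503, None), (500, None), (200, "ok")])
status, body = with_backoff(lambda: next(calls), base_delay=0.01)
print(status, body)  # 200 ok
```

The real Azure client libraries ship with retry policies built in; the point of the sketch is only to show the behavior the service expects from callers.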

There are a couple of ways to overcome these limits for your application. The preferred one is to design your application to leverage multiple storage accounts, since each Azure subscription can hold up to 100 storage accounts and each account can store up to 500 TB of data. So it’s good practice to build your application to spread its data across multiple storage accounts and partitions.
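One simple way to spread data across several accounts is to hash a partition key to an account, so every key deterministically lands on the same account. A minimal sketch (the account names are placeholders, not real accounts):

```python
import hashlib

# Hypothetical pool of storage accounts the application was provisioned with.
ACCOUNTS = ["appdata001", "appdata002", "appdata003", "appdata004"]

def account_for(partition_key: str) -> str:
    """Deterministically map a partition key to one of the storage accounts."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return ACCOUNTS[digest[0] % len(ACCOUNTS)]

# The same key always lands on the same account, so reads find their writes.
print(account_for("customer-42") == account_for("customer-42"))  # True
```

A fixed modulo scheme like this is easy to reason about but reshuffles keys if you add accounts later; consistent hashing is the usual refinement when the pool needs to grow.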

I personally haven’t tried it yet, but it’s also possible to file a special request with Azure Support if you really need more than 100 storage accounts in a single Azure subscription. If your use case genuinely requires it, the Azure team can raise the limit to up to 250 storage accounts in the same subscription.

Can we extend on-premises storage to Azure Storage?

Extending on-premises storage to Azure Storage is also straightforward with Microsoft’s StorSimple, Azure Backup, or Azure Site Recovery solutions. Each of these offers a different way to extend your on-premises storage investment to cloud services for different use cases.

There is also a set of options for moving your data in and out of the Azure cloud.

ExpressRoute allows you to create a dedicated connection between an Azure datacenter and your on-premises infrastructure. This enables use cases such as migration, replication, moving large amounts of data, or hosting application and database tiers across local and cloud datacenters with low latency and high throughput.

Microsoft also provides an Import/Export service that lets you transfer large amounts of data to Azure Blob storage. All you need to do is create an Import/Export job in the Azure Portal and then ship physical hard drives to an Azure datacenter. This is quite useful when your network cannot handle sending large amounts of data.
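A back-of-the-envelope calculation shows why shipping drives can beat the network. A quick sketch, assuming a sustained uplink speed and decimal units:

```python
def days_to_upload(data_tb: float, uplink_mbps: float) -> float:
    """Days needed to push data_tb terabytes over a sustained uplink."""
    bits = data_tb * 1e12 * 8              # TB -> bits (decimal units)
    seconds = bits / (uplink_mbps * 1e6)   # Mbit/s -> bit/s
    return seconds / 86400

# 30 TB over a 100 Mbit/s line takes roughly four weeks of continuous upload,
# which is when an Import/Export job starts to look attractive.
print(round(days_to_upload(30, 100), 1))  # 27.8
```

Real-world throughput is usually lower than the line rate, so the actual upload time would be even longer than this estimate.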

Some pricing info?

Microsoft bills you for the amount of data you use in your storage account. You will be billed based on several factors:

  • Amount of data you are storing.
  • Replication method you opt in to.
  • Read and write operations to Azure Storage.
  • Data transferred out of an Azure region.
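These factors combine into the monthly bill. A rough estimator, using illustrative unit prices that are assumptions for the sketch, not actual Azure rates:

```python
# Illustrative unit prices only -- real rates vary by region, tier, and
# replication option; check the Azure pricing page for current figures.
PRICE_PER_GB_MONTH = 0.02          # stored capacity
PRICE_PER_10K_TRANSACTIONS = 0.004 # read/write operations
PRICE_PER_GB_EGRESS = 0.08         # data leaving the Azure region

def monthly_cost(stored_gb: float, transactions: int, egress_gb: float) -> float:
    """Estimate a monthly storage bill from the billed factors listed above."""
    return (stored_gb * PRICE_PER_GB_MONTH
            + transactions / 10_000 * PRICE_PER_10K_TRANSACTIONS
            + egress_gb * PRICE_PER_GB_EGRESS)

# 1 TB stored, 5 million transactions, 50 GB egress in a month.
print(round(monthly_cost(1000, 5_000_000, 50), 2))  # 26.0
```

Even with made-up rates, the shape of the formula is useful: capacity usually dominates, but a chatty application can make the transaction line item surprisingly large.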

For more information regarding pricing, see the Azure Storage pricing page.


In my next post, I will discuss the different services and how you can access Azure Storage Services.

  1. Abel 7 years ago

    What is the best way to copy 30 TB of data to the Azure cloud?

  2. Author
    Anil Erduran (Rank 2) 7 years ago

    Hi Abel,

    As I discussed in the blog, you can create an Import/Export job in the portal and then ship the data on a USB drive to Microsoft.

    There are also tools available that let you copy data to blobs directly. Let me know if you want to know which one is best for you, as I will discuss this in the last part, Azure Storage Services — Useful Tools, which is going to be published in a couple of weeks.

© 4sysops 2006 - 2023