About the series
Here are the blogs in this series that I’m planning to publish:
- Azure Storage Services — Introduction
- Azure Storage Services — Accessing services
- Azure Storage Services — Storage account
- Azure Storage Services — Blob storage
- Azure Storage Services — Table storage
- Azure Storage Services — File storage
- Azure Storage Services — Queue storage
- Azure Storage Services — Premium storage benefits
- Azure Storage Services — Security
- Azure Storage Services — Useful tools
A couple of important points for you:
- We will do lots of PowerShell, so I highly recommend you have at least a basic understanding of PowerShell cmdlets. I will try to upload all the code I develop for these articles so that you can reuse or modify it. I will also use the Azure CLI, the Azure Resource Manager versions of commands, and REST APIs where possible; a short example follows this list.
- This is going to be a technical series, so don't expect much discussion of pricing or other nontechnical topics.
- As you have probably heard, Microsoft recommends you use the Resource Manager deployment model as much as possible instead of the classic model. So in this series, I may sometimes show both options, but I will try to leverage the Resource Manager model whenever possible.
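To give you a taste, here is a minimal sketch, assuming the AzureRM PowerShell module that was current at the time of writing (cmdlet names differ in older or newer modules), that logs in with the Resource Manager model and lists the storage accounts in the current subscription:

```powershell
# Log in to Azure using the Resource Manager model
Login-AzureRmAccount

# List all storage accounts in the current subscription
Get-AzureRmStorageAccount |
    Select-Object StorageAccountName, ResourceGroupName, Location
```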
What is Azure Storage Services?
Azure Storage Services is designed specifically for Internet-scale applications, that is, applications that have to store more than 20 trillion objects or process millions of requests per second.
Yes, that sounds huge; we call it Internet-scale. These large-scale modern applications handle millions of user requests that view and update backend data simultaneously. Data is growing faster and faster, and we face new challenges every day.
To support such massive data volumes and request rates, you need datacenters all over the world, just like Azure has had for years. Microsoft has around 19 regions worldwide and gives you the opportunity to replicate your data to a second or third region when you want to build highly available, large-scale applications.
In short, Azure Storage is a cloud storage solution for applications that need high availability and scalability.
Microsoft has various automation and storage technologies built into its datacenters to provide low-cost, high-performance cloud storage to its customers. For instance, Microsoft always tries to keep provisioned storage at around 70 percent utilization in production so that it can continue to provide service despite disk short-stroking or a rack failure.
Azure Storage is also the storage foundation for Azure Virtual Machines. Each virtual machine has system and data disks, and these disks are stored in Azure Storage Services. There is also SSD-based storage running on an optimized storage platform in Azure to provide consistently high I/O performance and low latency for workloads like online transaction processing (OLTP) and big data. You can use DS-series virtual machines to leverage SSD disks. These VMs are only available with a Premium storage account, which we will cover in detail later in the series.
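As a quick preview, here is a minimal sketch, assuming the AzureRM module and placeholder names for the resource group, account, and location, that creates a Premium storage account you could place DS-series VM disks on:

```powershell
# Create a resource group and a Premium (SSD-backed) storage account
# for DS-series VM disks; names and location are example values
New-AzureRmResourceGroup -Name "rg-storage-demo" -Location "West Europe"

New-AzureRmStorageAccount -ResourceGroupName "rg-storage-demo" `
    -Name "premiumdemo01" `
    -Location "West Europe" `
    -SkuName "Premium_LRS"
```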
Are there any limits in the cloud?
It's always recommended to check the Azure Storage scalability and performance targets before designing the underlying cloud storage infrastructure for your applications. Most of the documented targets are high-end targets, but they are achievable if you have a large-scale application. Some important limits to consider are:
- You can create up to 100 storage accounts for each Azure Subscription.
- Total size within a storage account cannot exceed 500 TB.
- Maximum request rate per storage account is 20,000 requests per second (assuming a 1 KB object size).
- Maximum size of a page blob is 1 TB.
- Maximum size of a table entity is 1 MB.
- Maximum size of a file share is 5 TB; a single file in a share can be up to 1 TB.
- Maximum disk size for standard storage accounts is 1,023 GB.
There are many more limits for the different resource types in the article I mentioned above. Once your application reaches the maximum limit for a particular resource, you will start receiving error code 503 (Server Busy) or error code 500 (Operation Timeout) responses.
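When you start seeing these responses, the standard guidance is to retry with exponential backoff. Here is a minimal sketch; the account name, key, container, and file are placeholders, and the blob upload cmdlet simply stands in for whatever storage operation your application performs:

```powershell
# Build a storage context (account name and key are placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "demoaccount" `
    -StorageAccountKey "<storage-account-key>"

# Retry the operation with exponential backoff on throttling errors
$maxRetries = 5
for ($i = 0; $i -lt $maxRetries; $i++) {
    try {
        Set-AzureStorageBlobContent -File "C:\data\report.csv" `
            -Container "demo" -Blob "report.csv" -Context $ctx -Force
        break  # success, stop retrying
    }
    catch {
        Write-Warning "Attempt $($i + 1) failed: $($_.Exception.Message)"
        Start-Sleep -Seconds ([math]::Pow(2, $i))  # wait 1, 2, 4, 8, 16 s
    }
}
```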
There are a couple of ways to overcome these limits in your application. The preferred one is to design your application to leverage multiple storage accounts, as each Azure subscription can hold up to 100 storage accounts and each account can store up to 500 TB of data. So it's good practice to build your application to spread its data across multiple storage accounts and partitions, as sketched after the next paragraph.
I personally haven't tried it yet, but it's also possible to create a special request with Azure Support if you really need more than 100 storage accounts in a single Azure subscription. If your use case truly requires more than the supported number of storage accounts, the Azure team can give you up to 250 storage accounts in the same subscription.
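Here is a minimal sketch of both ideas, assuming the AzureRM module and the placeholder resource group from the earlier example; the first cmdlet reports the subscription's storage account quota, and the loop partitions data across several accounts:

```powershell
# Show how many storage accounts the subscription uses vs. its quota
Get-AzureRmStorageUsage

# Create several storage accounts to partition data across
# (names, count, and location are example values)
1..4 | ForEach-Object {
    New-AzureRmStorageAccount -ResourceGroupName "rg-storage-demo" `
        -Name ("demopartition{0:d2}" -f $_) `
        -Location "West Europe" `
        -SkuName "Standard_LRS"
}
```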
Can we extend on-premises storage to Azure Storage?
Extending on-premises storage to Azure Storage is also easy if you leverage Microsoft's StorSimple, Azure Backup, or Azure Site Recovery solutions. These solutions offer different ways to extend your on-premises storage investment to cloud services for different use cases.
There are also different options for moving your data in and out of Azure.
ExpressRoute allows you to create a dedicated connection between an Azure datacenter and your on-premises infrastructure. This enables use cases such as migration, replication, moving large amounts of data, or hosting application and database tiers across local and cloud datacenters with low latency and high throughput.
Microsoft also provides an Import/Export service that lets you transfer large amounts of data to Azure Blob storage. All you need to do is create an Import/Export job in the Azure portal and then ship physical hard drives to an Azure datacenter. That's quite useful in situations where your network cannot transfer large amounts of data in a reasonable time.
What about pricing?
Microsoft bills you for the amount of data you use in your storage account. You will be billed based on several factors:
- The amount of data you store.
- The replication option you choose.
- Read and write operations against Azure Storage.
- Data transferred out of an Azure region (egress).
For more information regarding pricing, please go here.
In my next post, I will discuss the different services and how you can access Azure Storage Services.
What is the best way to copy 30 TB of data to the Azure cloud?
Hi Abel,
As I discussed in the blog, you can create an Import/Export job in the portal and then ship the data on a USB drive to Microsoft.
There are also tools available that allow you to copy data to blobs directly. Let me know if you want to know which one is best for you; I will discuss this in the last part, Azure Storage Services — Useful tools, which is going to be published in a couple of weeks.
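In the meantime, for a network-based copy, here is a minimal sketch with the classic Windows AzCopy tool (the account name, container, and key variable are placeholders):

```powershell
# Recursively upload a local folder to a blob container with AzCopy
# (run from PowerShell; AzCopy.exe must be on the PATH)
AzCopy /Source:C:\data /Dest:https://demoaccount.blob.core.windows.net/backup /DestKey:$env:STORAGE_KEY /S
```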