What services are available? ^
Azure Storage provides services in the form of different abstractions, such as blobs, tables, disks, queues, and files.
Although each of these gets its own post in this series, let me go through them quickly.
Blobs: Blob storage can store unstructured content, such as documents, images, text, and video. It uses containers to organize data and offers three blob types: block blobs, append blobs, and page blobs.
Tables: Table storage hosts NoSQL key-attribute data, which allows fast access to large quantities of data. We will take a quick look later at NoSQL/SQL technologies and see how the table storage service scales data.
Queues: You can host your application components in the cloud, on your local datacenter, or on a mobile device, and you may need to scale these components independently. Queue storage provides a flexible asynchronous messaging solution for that purpose. We will see how to authenticate messages in a queue and send them using a simple HTTP POST request.
Files: Azure also has a cloud-based file share (SMB) solution powered by Azure Storage Services. File Storage is nothing but a simple standard SMB file share. Your applications, cloud services, or virtual machines can access data in this share through the file system APIs. In this series, we will look at the general File Storage concept and use Azure Portal and PowerShell to manage shares.
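As a taste of the queue messaging mentioned above, here is a minimal sketch of how a message POST could be assembled. The account and queue names are hypothetical, and the request is only constructed, not sent; the Queue service expects the message text wrapped in a QueueMessage XML element, conventionally Base64-encoded:

```python
import base64

def build_queue_message_request(account, queue, message_text):
    """Build the URL and XML body for posting a message to an Azure queue.

    The message text is Base64-encoded so it survives XML transport,
    as the Queue service convention expects.
    """
    url = f"https://{account}.queue.core.windows.net/{queue}/messages"
    encoded = base64.b64encode(message_text.encode("utf-8")).decode("ascii")
    body = f"<QueueMessage><MessageText>{encoded}</MessageText></QueueMessage>"
    return url, body

# Hypothetical account and queue names for illustration only.
url, body = build_queue_message_request("mystorageacct", "orders", "hello world")
```

A real request would also carry an Authorization header or a shared access signature; we will cover authentication when we get to the queue storage post.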
How can I access data? ^
Azure Storage maintains a single, globally accessible namespace that leverages DNS to make storage resources addressable. The namespace is divided into three parts in the form of a URI: https://AccountName.<service>.core.windows.net/PartitionName/ObjectName
To access the data, you browse to the DNS host name, which is built from the AccountName part of the URI. DNS points you to the primary datacenter where the data is stored. If your application stores data across different locations, multiple AccountNames come into play.
The PartitionName is the part that locates the data within the account. Different data abstractions use different partition name values; for blob storage, for instance, the container name combined with the blob name serves as the partition name, as seen in the screenshot below, which is taken from my blob containers.
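To make the namespace concrete, here is a small sketch that assembles a blob URI from its three parts. The account, container, and blob names are hypothetical:

```python
def blob_uri(account_name, container, blob_name):
    """Compose an Azure blob URI from its namespace parts.

    AccountName becomes the DNS host; the container and blob names
    form the partition/object portion of the path.
    """
    host = f"{account_name}.blob.core.windows.net"
    return f"https://{host}/{container}/{blob_name}"

# Hypothetical names used purely for illustration.
uri = blob_uri("mystorageacct", "images", "logo.png")
```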
All these different types of storage services can be created within the context of the Azure Storage account, which I will be covering in the third part.
Data resources within storage accounts are exposed via open and familiar REST APIs, so you can manage and operate on data through the Azure Portal or developer APIs from any client capable of speaking HTTP/HTTPS.
Most of the time, REST is my favorite access method, as there is also a cmdlet, Invoke-RestMethod, introduced in PowerShell 3.0 that allows you to send HTTP- or HTTPS-based requests to a RESTful service endpoint, which is Azure Storage Services in our case.
Here are some basic HTTP actions that can be used to access resources on the endpoint exposed by the RESTful API:
When dealing with storage operations via REST, say we want to set the blob service properties, we simply specify the properties documented on MSDN and use the PUT method.
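As a sketch of what such a PUT looks like, the snippet below builds a Set Blob Service Properties request without sending it. The account name is hypothetical, the properties body is a minimal subset of the documented schema, and the Shared Key signing step needed for the Authorization header is deliberately omitted:

```python
import urllib.request

ACCOUNT = "mystorageacct"  # hypothetical account name

# Minimal StorageServiceProperties document; the full schema is on MSDN.
body = (
    "<?xml version='1.0' encoding='utf-8'?>"
    "<StorageServiceProperties>"
    "<Logging><Version>1.0</Version>"
    "<Read>true</Read><Write>true</Write><Delete>true</Delete>"
    "<RetentionPolicy><Enabled>true</Enabled><Days>7</Days></RetentionPolicy>"
    "</Logging>"
    "</StorageServiceProperties>"
)

# The service-level properties endpoint is addressed with query
# parameters restype=service and comp=properties.
url = f"https://{ACCOUNT}.blob.core.windows.net/?restype=service&comp=properties"
req = urllib.request.Request(url, data=body.encode("utf-8"), method="PUT")
req.add_header("x-ms-version", "2014-02-14")
req.add_header("Content-Type", "application/xml")
# A real call also needs an Authorization header computed with the
# account's Shared Key; that signing step is not shown here.
```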
I will have detailed PowerShell scripts leveraging RESTful APIs for different abstraction types in the next posts.
Microsoft also supports client libraries for .NET, Java, Node.js, and Android. You can simply go to the Azure download center or GitHub and search for language-specific SDKs and tools for your platform of choice.
There are also a number of useful tools (in free or paid versions) that can interact with Azure Storage resources and make your life a lot easier if you are not a PowerShell hero. (You should be!)
Tools like Azure Storage Explorer support Windows, Mac OS, and Linux clients and allow you to view and interact with storage resources.
Another tool, Azure Explorer, helps you manage all your blobs in one place and take actions such as searching and filtering blobs.
There is a useful single webpage that lists most of the options available as of today: http://storagetools.azurewebsites.net/
You also can add your own project here by sending a pull request on the GitHub page.
Should I store critical data in a single datacenter? ^
As we have mentioned a couple of times, large-scale applications need proper design and high-availability options. As the underlying foundation for most services in Azure, Storage Services must also meet its Service Level Agreement (SLA) for storage in case of any disaster.
To ensure durability and high availability, and to meet the SLA, you are presented with the available replication options when you first create a storage account.
Locally redundant storage (LRS)
This is the most basic replication option for your data. It simply replicates everything within the same region: every change or request made to your data is stored as three replicas across separate fault domains (FDs) and upgrade domains (UDs) within the same region.
For more information regarding FD and UD concepts, please go here.
This option is the cheapest of the replication types and provides replication and protection for your data within a single region.
Zone-redundant storage (ZRS)
ZRS replicates your data to multiple datacenters, either within the same region or across two regions. This option provides high availability in case of a facility-level disaster.
Geo-redundant storage (GRS)
As the name implies, GRS provides high availability in case of a complete regional outage, as all your data is committed to the primary region first (also replicated three times by default) and then replicated to a secondary region (replicated three times again). It’s important to check Microsoft’s primary and secondary region pairing lists, as once you select the primary region, the secondary will be assigned automatically and cannot be changed.
Read-access geo-redundant storage (RA-GRS)
The last replication option is an extension of the GRS method. In addition to replicating to the secondary region, RA-GRS also provides read-only access to the data in the secondary location. So if a disaster occurs and your application cannot read data from the primary region, your data will still be readable from the secondary location. All you need to do is design your application to fail over to the secondary global namespace, which is the account name with a "-secondary" suffix.
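The primary/secondary endpoint pairing described above can be sketched as follows; the account name is hypothetical, and a real application would attempt the secondary only after reads from the primary fail:

```python
def read_endpoints(account_name):
    """Primary and RA-GRS secondary blob endpoints for an account.

    With RA-GRS, the read-only secondary namespace is simply the
    account name with a "-secondary" suffix on the DNS host.
    """
    primary = f"https://{account_name}.blob.core.windows.net"
    secondary = f"https://{account_name}-secondary.blob.core.windows.net"
    return primary, secondary

# Hypothetical account name for illustration.
primary, secondary = read_endpoints("mystorageacct")
```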
Can we start to play now? ^
To start playing with storage services, you need an active Azure subscription and a storage account. All operations against storage services and data take place through the storage account. Each account is associated with containers, tables, queues, and file shares, and then with the actual data stored in these abstraction types.
So bear with me for the next blog, in which we are going to cover storage account details and types.