Storage is arguably (right behind security) one of the biggest challenges for IT today. The amount of data stored is growing exponentially in both small and large businesses, and managing that growth while protecting the data is difficult.
Not all data storage is created equal, however: you need storage for archives, backups, primary workloads, and remote replication. The old solution of buying another SAN (or NAS) just doesn't scale well, and having different types of storage solutions creates management overhead and complexity. Another challenge is that most of the data you store isn't actually used; only a small fraction is in current use. Microsoft's StorSimple can provide a solution to these problems.
The basis of all StorSimple solutions is tiered storage. Just like your favorite SAN, there's SSD and HDD storage, and the appliance automatically moves hot data to the SSD tier and colder data to the HDD tier. But StorSimple adds a third tier in the form of Azure storage and sends the coldest data there.
This isn't based solely on the last access time of each data block; some very smart algorithms look at other clues as to which data is best moved to the cloud. Retrieval of data from the cloud is transparent to end users, who only see a delay when accessing data that hasn't been touched for a while.
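To make the tiering idea concrete, here's a minimal sketch in Python. It is purely illustrative (StorSimple's actual heuristics are proprietary): blocks are ranked by a hypothetical "heat" score that combines access frequency and recency, the hottest blocks land on SSD, warm blocks on HDD, and everything else goes to the cloud tier.

```python
import time
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    last_access: float     # Unix timestamp of the most recent read/write
    access_count: int = 0  # how often the block has been touched

def assign_tiers(blocks, ssd_slots, hdd_slots):
    """Rank blocks hottest-first and fill SSD, then HDD; the rest go to cloud.

    The heat score is a made-up example: frequently and recently accessed
    blocks rank higher, and the score decays as a block goes cold.
    """
    now = time.time()

    def heat(b):
        age_hours = (now - b.last_access) / 3600
        return b.access_count / (1 + age_hours)

    ranked = sorted(blocks, key=heat, reverse=True)
    tiers = {}
    for i, b in enumerate(ranked):
        if i < ssd_slots:
            tiers[b.block_id] = "ssd"
        elif i < ssd_slots + hdd_slots:
            tiers[b.block_id] = "hdd"
        else:
            tiers[b.block_id] = "cloud"
    return tiers
```

With one SSD slot and one HDD slot, a block touched constantly today stays on SSD, yesterday's data sits on HDD, and last week's data is tiered to the cloud.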
There are two current models. The 8100 has between 15 and 40 TB of local storage (depending on compression), of which 800 GB is SSD, with a maximum capacity of 200 TB including the cloud tier. The 8600 has between 40 and 100 TB of local storage, with 2 TB of SSD, and a maximum capacity of 500 TB. Both models have 10 Gb Ethernet connections. All the usual SAN trimmings are present: multipath I/O, dual controllers, dual cooling, and dual power supplies.
Note that you can't add more storage after purchase; it's a sealed box. The arrays present as iSCSI targets, deduplication is built in, and all data leaving the array for Azure is encrypted, both in transit and at rest in Azure, with a key known only to you.
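The dedup-then-encrypt pipeline can be sketched as follows. This is a hypothetical illustration, not StorSimple's actual on-disk or wire format: chunks are fingerprinted with SHA-256 so duplicates are uploaded only once, and each unique chunk is encrypted before it leaves the "array" (a real implementation would use AES; the XOR keystream here is a stand-in to keep the example dependency-free).

```python
import hashlib

def split_chunks(data: bytes, size: int = 4096):
    """Split data into fixed-size chunks (real systems often chunk variably)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in for real encryption: XOR with a SHA-256-derived keystream.

    NOT secure -- it only illustrates that data is transformed with a key
    the customer holds before upload. XOR is symmetric, so calling this
    again with the same key decrypts.
    """
    stream = hashlib.sha256(key + len(block).to_bytes(8, "big")).digest()
    while len(stream) < len(block):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(block, stream))

def upload(data: bytes, key: bytes, cloud: dict):
    """Deduplicate, then encrypt unique chunks before they reach the cloud."""
    manifest = []
    for c in split_chunks(data):
        fp = hashlib.sha256(c).hexdigest()  # content fingerprint for dedup
        if fp not in cloud:                 # only unique chunks are uploaded
            cloud[fp] = toy_encrypt(c, key)
        manifest.append(fp)
    return manifest  # ordered fingerprints let the array reassemble the data
```

Uploading data containing two identical chunks stores only one encrypted copy in the simulated cloud, and the cloud side never sees plaintext.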
Management of the arrays is done through StorSimple Manager, hosted in the Azure classic portal, which lets you manage multiple arrays from a single console. Backup is built in and automatically replicates snapshots to Azure. You can pin a volume to the array so that its data is never tiered to the cloud (except the backups, of course). For self-service file recovery, there's a backup folder in a file share that lets users recover files from the last five snapshots.
There are some workloads for which StorSimple arrays are not recommended, such as high-performance VMs with heavy storage I/O and, until recently, use as a backup target. At Ignite 2016, however, Microsoft showed Backup Exec backing up to a StorSimple array, and it now provides sizing guidance for several third-party backup applications, making backup targets a valid use for StorSimple.
The two arrays are eminently suitable for providing (nearly) bottomless storage for your datacenter. In branch offices or remote locations, however, a SAN array would be overkill, so earlier in 2016 Microsoft responded to user requests by providing virtual arrays.
There are two flavors here. The StorSimple Cloud Appliance 8010 and 8020 models are virtual arrays that run in Azure. These are useful for accessing snapshots from on-premises arrays for disaster recovery testing or for providing data to applications in the cloud.
And for branch offices, there's the new StorSimple Virtual Array 1200, a Hyper-V or VMware VM that acts as an iSCSI or file share target whilst providing the same management and tiering benefits as the physical arrays.
The VM appliance has a limit of 6.4 TB of usable storage locally (if you provide it with the maximum of 8 TB of underlying storage) and a total of 64 TB including cloud storage. You'll want to give the appliance at least 8 GB of memory (don't use dynamic memory) and four virtual CPUs.
Just as the physical arrays can't be upgraded with more storage, a virtual array cannot be expanded. And just like the 8000 series, the data is deduplicated and encrypted before being uploaded to the cloud. If you have a truly dispersed workforce, there's a third-party service from Talon called CloudFAST™ that simplifies management.
StorSimple is a very cool technology with a sound underlying architecture. It solves a lot of storage problems by combining different types of storage usage in one device, and transparently backing up and tiering cold data to the cloud is certainly very useful.
Microsoft recently announced a very interesting addition, StorSimple Data Transformation, which lets you use your data in Azure for media services (streaming video/audio), machine learning, or big data analytics. For example, if you store videos, you can use Azure Media Services to automatically run facial recognition and redaction; if you run a call center, you can convert recorded conversations to text and run sentiment analysis.
On the other hand, it's very enterprise focused. I haven't found exact pricing, but the 8100 series starts around $100,000 and the 8600 at $170,000. The Virtual Array is only available if you purchase Azure under an Enterprise Agreement (it's not even available for testing/evaluation). And the 8000 series only uploads data to Azure, not AWS or Google.
Because of these limitations and the high cost, it's hard to recommend StorSimple as a general solution, although if your business has already picked Azure and has the right workloads, it's a good answer.