Not too long ago, systems administrators had no way to optimize their servers' SSDs. In this review, you'll learn how Condusiv solved this problem with their new product, SSDkeeper.

If you're an experienced Windows systems administrator, you know your servers' mechanical hard disk drives (HDDs) require periodic defragmentation to keep them running efficiently. Condusiv (formerly Diskeeper) has been in the defragmentation game for almost as long as I've been in the industry (quite a while). Read their defragmentation white paper for background information on how HDD defragmentation occurs.

You might already know that Windows cannot write directly to the sectors on an HDD platter. Instead, we have the NTFS abstraction layer, which typically involves 4-KB clusters as each volume's fundamental allocation unit.

Over time, as you create, edit, and delete files, the clusters that make up each file become spread across the volume, resulting in fragmentation and decreased input/output performance.
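
If you want to see the cluster math for yourself, the following minimal sketch (Windows-only Python using ctypes; the 4-KB figure is simply the common NTFS default, not a guarantee) reads a volume's cluster size and computes how many clusters a file of a given size will occupy:

    import ctypes
    import math
    from ctypes import wintypes

    def cluster_size(root="C:\\"):
        # Ask Windows for the volume's sectors-per-cluster and bytes-per-sector.
        spc, bps = wintypes.DWORD(), wintypes.DWORD()
        free, total = wintypes.DWORD(), wintypes.DWORD()
        ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
            root, ctypes.byref(spc), ctypes.byref(bps),
            ctypes.byref(free), ctypes.byref(total))
        if not ok:
            raise ctypes.WinError()
        return spc.value * bps.value

    size = cluster_size()                      # typically 4,096 bytes on NTFS
    file_bytes = 1_500_000
    clusters = math.ceil(file_bytes / size)    # clusters this file will occupy
    print(f"Cluster size: {size} bytes; a {file_bytes:,}-byte file needs {clusters} clusters")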

More recently, the proliferation of high-speed solid-state drives (SSDs) has led many systems administrators to make two assumptions, one of which is erroneous:

  • That SSDs do not become fragmented
  • That running traditional defragmentation on an SSD reduces the drive's lifetime

The first statement is simply false. SSDs do indeed experience a form of fragmentation even though they have no moving, mechanical parts. This is, once again, because Windows uses the NTFS cluster map on SSDs just as it does with HDDs.

The second statement, however, is indeed true. Once again, read Condusiv's literature for the full explanation, but let me sum it up in layman's terms. SSDs have a finite lifetime in terms of how many write operations their flash cells can sustain before the drive becomes unstable.

The SSD "Achilles' heel" is what Condusiv calls "death by 1000 cuts." In other words, when you write a new file to an SSD, the drive internally performs far more than a single write operation to make room for the new data. This is called "write amplification," and it's a really bad deal for SSDs.
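
To put rough numbers on why write amplification matters, here is a back-of-the-envelope sketch in Python. Every figure in it (daily host writes, amplification factor, rated endurance) is an assumption I chose for illustration, not a measurement from Condusiv or any drive vendor:

    # All figures below are illustrative assumptions, not vendor measurements.
    host_writes_gb_per_day = 50        # data the OS asks the SSD to write each day
    write_amplification = 3.0          # internal writes per host write (assumed factor)
    rated_endurance_tbw = 600          # drive's rated terabytes written (assumed)

    nand_writes_gb_per_day = host_writes_gb_per_day * write_amplification
    years_with_wa = rated_endurance_tbw * 1000 / nand_writes_gb_per_day / 365
    years_without_wa = rated_endurance_tbw * 1000 / host_writes_gb_per_day / 365

    print(f"NAND actually absorbs {nand_writes_gb_per_day:.0f} GB/day")
    print(f"Estimated endurance: {years_with_wa:.1f} years vs. {years_without_wa:.1f} years without amplification")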

Today I will introduce you to Condusiv's new SSDkeeper product. I've been a Diskeeper user for many, many years, and I'm happy that Condusiv developed a safe method for optimizing SSDs. Let's learn more about the tool now.

Installation and configuration

You can request a free 30-day demo of any of three SSDkeeper stock-keeping units (SKUs):

  • Home: Personal use only; runs on Windows Client only
  • Professional: Can be centrally managed; runs on Windows Client only
  • Server: Can be centrally managed; runs on Windows Server

The deployment package is typical and fits easily into your existing software deployment toolset. Besides the SSDkeeper desktop application, the software installs itself as an autostart service.
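
After you push the package out, you will probably want to confirm that the service actually landed and is set to start automatically. Here is a minimal Python sketch using the psutil package; note that matching on the string "ssdkeeper" is my own assumption, as I have not verified the exact service name Condusiv registers:

    # Minimal sketch (Windows-only; requires the third-party psutil package).
    # The "ssdkeeper" substring match is an assumption, not a documented service name.
    import psutil

    for svc in psutil.win_service_iter():
        if "ssdkeeper" in (svc.name() + svc.display_name()).lower():
            print(f"{svc.name()}: status={svc.status()}, start type={svc.start_type()}")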

The "runs as a service" bit is actually very important because SSDkeeper was written to run in the background and optimize SSD read/write operations on the fly. Condusiv has three core SSDkeeper technologies at play here:

  • IntelliMemory: Uses a portion of the host's physical memory to cache SSD read requests (a conceptual sketch follows this list)
  • IntelliWrite: Reduces write amplification by optimizing how SSD data allocations are made
  • File Performance Optimization: Runs in the background and performs dynamic read/write optimization with minimal system impact
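
To illustrate the IntelliMemory idea in the most general terms, here is a conceptual Python sketch of a RAM read cache. It is not Condusiv's implementation; it simply shows why serving repeat reads from memory avoids touching the SSD at all:

    # Conceptual sketch only: a tiny RAM read cache, not Condusiv's IntelliMemory engine.
    from functools import lru_cache

    def read_block_from_ssd(block_no: int) -> bytes:
        # Stand-in for a real device read; in reality this would be an I/O request.
        print(f"SSD read: block {block_no}")
        return b"\x00" * 4096

    @lru_cache(maxsize=1024)                   # keep up to 1,024 recently read blocks in RAM
    def read_block(block_no: int) -> bytes:
        return read_block_from_ssd(block_no)

    read_block(7)    # first access hits the SSD
    read_block(7)    # repeat access is served from memory; no SSD I/O at all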

Let me walk you through the SSDkeeper user interface in the next figure:

SSDKeeper user interface


  • A: Enable, disable, or otherwise tweak the IntelliMemory, IntelliWrite, and File Performance Optimization features
  • B: Generate monitoring reports
  • C: Check for product updates; file a support ticket
  • D: Analyze product performance benefits in many different ways

Honestly, besides access to the settings and the report generator, the SSDkeeper user interface is largely about showing off the storage I/O time and operations saved over time. If nothing else, this data gives you satisfaction and justification for having purchased the product.

Because SSDkeeper performs real-time monitoring and optimization, this is largely a "set it and forget it" product. However, you can open SSDkeeper and go to Actions > Perform Manual Operations to run ad-hoc maintenance on your SSDs; I show you the interface here:

Running SSDkeeper manual operations


Windows Server-specific features

As I mentioned earlier, you need the SSDkeeper Professional or Server Edition to gain common enterprise business features such as centralized management. Only the Server Edition runs on Windows Server; it also works with storage area network (SAN) volumes and Windows Server failover clustering.

It's a separate purchase, but I strongly suggest you consider buying a license for Diskeeper 16 Administrator if you plan to deploy SSDkeeper across your server farm. Diskeeper 16 Administrator is a SQL Server (or SQL Express)-based desktop application that centralizes management of Condusiv's three defragmentation products:

  • Diskeeper: for HDDs
  • SSDkeeper: for SSDs
  • V-locity: for virtual machines' virtual hard disks

I show you the Diskeeper Administrator console interface in the following screenshot:

Diskeeper Administrator console


You can undertake several valuable administrative actions with Diskeeper Administrator, including:

  • Policy-based software deployment and service configuration
  • Remote access of managed nodes
  • Advanced reporting and e-mail alerts

Conclusion

Okay, let's wrap up; I know I threw a lot of information at you in this product review. SSDkeeper Professional, Server, and Home editions are available as single licenses (technically per operating system rather than per user) or as volume licenses. Condusiv also offers discounted licensing for academic use.

In my experience, an enterprise-class defragmenter is a "no-brainer" for systems administrators whose livelihood depends on optimizing server performance. After all, storage is one of the four key server subsystems (the others being processor, memory, and network).

The fact that Condusiv offers defragmentation and optimization for both HDD and SSD disks is great, and layering a centralized management console on top of their client software is another plus.


My only small complaint is that I wish Diskeeper and SSDkeeper were not separate products. After all, I maintain many SSD/HDD hybrid systems, and it is a bit of a pain to have two client applications to manage instead of only one.

7 Comments
  1. Leos Marek (Rank 4) 6 years ago

    Hello Timothy,

    You're absolutely right; SSDs get fragmented, and traditional defrag only makes things worse for them.

    But honestly, fragmentation was mainly a problem for HDDs, where the head took extra time to reach the pieces of a fragmented file. With SSDs, isn't that time delay minimal, almost zero?

    On top of that, the majority of environments are virtual, and fragmentation becomes even less of a problem because a virtual disk is usually not spanned across the whole physical RAID array.

    You say that Condusiv handles write data on the fly. Not all data going to disk has to be in RAM, so that means they need some driver inside or above the Windows kernel that catches all the data (similar to the Double-Take Availability HA product) and then "caches" it so it can be saved to disk in better shape. In other words, the product needs to cache the write data before it can be written to the disk in an optimized order. That surely has to come with some system load.

    So overall, what is the benefit in terms of performance and savings on SSD replacements, compared with the product cost?

    Cheers Leos

    • Leos, it is a common misconception that SSDs don’t require defragmentation. Although there is no head that has to be repositioned as in an HDD, the seek time is only a part of the overall I/O.


    • GQ 6 years ago

      Hi Leos –

      I am the SVP of Technology Strategy here at Condusiv. Great question and you are correct that if Condusiv was caching writes to optimize the write order, there would be some system load for this. But Condusiv takes a more efficient and simpler approach.

      First of all, the NTFS file system takes a one-size-fits-all approach when files are created or extended. It looks for a fixed free-space allocation because it does not know how big that file creation or extension is going to be. Now, if the creation/extension is larger than that allocation, it will have to find another allocation, which means another I/O, and so on. You can see how this can result in a lot of split or extra I/Os.

      Condusiv is monitoring the system in the background, seeing when files (types and what applications) are being created/extended and what sizes these are. Basically, Condusiv is feeding this intelligence back to the NTFS file system, so it can make a more intelligent allocation to contain all the data for the file creation/extension, thus preventing the need for extra allocations and the extra I/Os associated with it. A much more efficient approach.

      Best, GQ

  2. Leos Marek (Rank 4) 6 years ago

    Hi Michael,

    I didn't question that fact, only what the overall benefit of such a product is. I work mainly with virtual servers and databases, where the fragmentation problem is minimal.

    Nice articles though, I will definitely read through them.

    Thanks

  3. Leos Marek (Rank 4) 6 years ago

    I haven't said it solves it, but it minimizes/reduces it.

    Database – large (100 MB/1 GB) pre-allocated files. Clearly there is no fragmentation there, is there?

    Virtual/RAID – If I take the example of 3 × 600 GB disks in RAID 5 and a VM with 90 GB of space, the data is distributed 30 GB/30 GB/30 GB across the disks, so in any case the head does not travel across the whole disk, only a small part of it. And that's despite VMFS having a 1 MB block size.

    Of course it does not solve NTFS fragmentation, but the impact is much smaller than on a normal physical box.

    Or is this assumption wrong?

    • Yes, I suppose if you work with pre-allocation, fragmentation is not such an issue. However, you might then waste space because it is often impossible to predict how fast a database grows. The same applies to pre-allocated virtual disk files. In today’s cloud/elastic environments, you want to have dynamic growth. In addition, you want to be able to quickly move databases to new VMs or a virtual disk to another host. I suppose large pre-allocated files are not really helpful here. The point is if you have a modern dynamic defragmentation solution, you might not have to work with pre-allocation.

      As to your argument about RAID, we are talking about SSDs here, so there is no head that has to go through disks. If you distribute your data to multiple disks, you can improve your I/O, but files are still fragmented.
