Managing Hybrid SSD Caching with OS NEXUS QuantaStor

SSD caching is an important performance technology that can boost the IOPS of many workloads, like virtualization, databases, and web servers, by an order of magnitude or more. SSD caching of HDD arrays has gone mainstream with technologies like LSI's Nytro MegaRAID, which is based on their CacheCade software. Adaptec has something similar with MaxCache, but I've only seen it in a read-cache configuration, whereas the LSI Nytro cards use onboard SSD and mirroring so that you get a high-performance write-back cache.
Side note: LSI has another software technology, FastPath, for boosting the performance of SSD arrays; it can achieve IOPS numbers in the 450K range, which is generally found only in high-end enterprise storage systems. That's for another article.

In my last article we went through the steps to upgrade your MegaRAID 9260/9280 controller and deploy the advanced hybrid SSD caching features that come with LSI CacheCade Pro 2.0 on a regular Linux server. In this article I'll cover much the same ground, but using the QuantaStor NAS+SAN storage appliance software combined with a MegaRAID 9280 w/ CacheCade to manage and configure all the hardware for you.

(Note: the one step you'll still need from the previous article is a firmware upgrade if your controller card is not up to v12.13. You can see the firmware level you're running by clicking on your hardware controller within QuantaStor Manager and then looking at the properties page on the right side of the screen.)
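If you'd rather script that firmware check than eyeball the properties page, the version comparison is simple enough to sketch. Below is a hypothetical helper (not a QuantaStor API); the sample version string mimics the "FW Package Build" format the MegaRAID tools report, but is made up for illustration.

```python
# Hypothetical sketch: check whether a MegaRAID firmware package version is
# new enough for CacheCade Pro 2.0 (v12.13 or later). The sample string below
# is illustrative, not read from a real controller.
def firmware_ok(fw_package: str, required=(12, 13)) -> bool:
    # e.g. "12.12.0-0124" -> (12, 12), then compare as a tuple
    major, minor = (int(x) for x in fw_package.split(".")[:2])
    return (major, minor) >= required

sample = "12.12.0-0124"  # sample value only
if not firmware_ok(sample):
    print("firmware upgrade needed before enabling CacheCade Pro 2.0")
```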

The images above illustrate using the 'Create SSD Cache..' dialog to create a new SSD cache device, and then how to enable caching on an existing RAID array. In the current version of LSI CacheCade you're limited to a 512GB SSD cache and up to 32 devices in each CacheCade pool. In this example I've got four 128GB SSD drives, so usable cache space after creating a four-drive (RAID10) mirror is roughly 256GB. To reach the 512GB limit you could instead use four 256GB drives in RAID10 or two 512GB drives in RAID1. As shown in the images, once you've created the CacheCade device you can simply right-click on the hardware RAID units you want to activate SSD caching for and choose 'Enable SSD Caching..'. That's all there is to it: no CLI commands, no drivers to build, just install QuantaStor and go.
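The sizing rules above, mirroring halves the raw SSD capacity, capped at CacheCade's 512GB pool limit and 32 devices, can be sketched as a tiny helper. This is a hypothetical illustration of the arithmetic, not anything QuantaStor exposes:

```python
# Hypothetical sizing helper for a mirrored (RAID1/10) CacheCade pool:
# usable cache is half the raw SSD capacity, capped at the 512GB CacheCade
# limit and 32 devices per pool. Values match the examples in the text.
MAX_CACHE_GB = 512
MAX_DEVICES = 32

def usable_cache_gb(drive_gb: int, count: int) -> int:
    if count > MAX_DEVICES:
        raise ValueError("CacheCade pools support at most 32 devices")
    raw = drive_gb * count
    return min(raw // 2, MAX_CACHE_GB)  # mirroring halves raw capacity

print(usable_cache_gb(128, 4))  # four 128GB SSDs in RAID10 -> 256
print(usable_cache_gb(256, 4))  # four 256GB SSDs in RAID10 -> 512 (at the cap)
print(usable_cache_gb(512, 2))  # two 512GB SSDs in RAID1  -> 512 (at the cap)
```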

Note that it's important to use high-grade SSD drives (consult LSI's HCL), and you should always use mirroring (RAID1/10): without it you don't get write caching. Using multiple SSD drives also gives a notable additional performance boost, especially when you go up to 4x SSDs versus just 2x, so it's better to use four 128GB drives than two 256GB drives if you have room in your enclosure for them. There's a really good technical brief written up by LSI here, and here is a chart showing how the performance boost increases as you add more devices, which makes perfect sense.

We're big fans of hardware RAID at OS NEXUS: high-performance technologies like SSD caching are most efficient when dedicated hardware manages and optimizes the interplay between the NVRAM, the SSDs, and the HDD RAID array, as LSI has achieved with CacheCade.

Last, the current QuantaStor release is available on our web site here, and all QuantaStor versions from v2.5 and newer have integrated CacheCade management support.
Best,

-Steve



