Over the last six years we’ve seen LSI acquired by Avago, which was in turn acquired by Broadcom, but unfortunately we’ve not seen many innovations in hardware RAID. I’d been hoping to see features like bit-rot correction and smart drive rebuilds come to market, but they’re still not there and probably aren’t coming. At OSNEXUS we still use hardware RAID1 to mirror the boot drives. That means using a board with an LSI 3108 chip, either as an OEM variant or via an add-on card.
Today, whether you have a Cisco, Intel, Dell, Fujitsu, or SuperMicro server, they all use OEM versions of the LSI MegaRAID or LSI HBA chips (3008), and we’re seeing LSI standardize on the storcli utility for both rather than maintaining separate CLIs. HPE is the only hold-out with its own custom hardware RAID card, but we see that too being phased out in the Gen10 servers in favor of Adaptec-derivative boards.
As far as CacheCade is concerned, I think its usefulness has passed. SSD is cheap, so why bother accelerating HDDs when one can build the whole array out of SSDs for not much more money and much higher performance? With the 94xx series Broadcom is jumping into the NVMe space (connect 4x NVMe devices per card), which is great, and these tri-mode cards can act as a HW RAID adapter, an HBA, or an NVMe adapter all in one. That’s a great way to go, and it consolidates and simplifies the product line for Broadcom. Hardware RAID is going to be around for some time to come, but don’t look to it for great innovations; for that, keep your eye on the software side with technologies like ZFS and Ceph.
If you haven’t heard of CacheCade, it’s a hybrid SSD caching technology from LSI for the MegaRAID series (9260/9280/9266/9286/9265/9285) RAID controllers that greatly accelerates the performance (IOPS) of SATA/SAS RAID arrays by layering in an SSD-based read/write cache. This hybrid blending of traditional HDDs with SSDs can boost performance for some applications, such as server virtualization, video editing, and databases, by upwards of 12x or more. Here’s a chart from an LSI tech brief that shows how performance increases and latency decreases with the SSD caching layer:
The trick to making SSD caching effective is to design the hybrid cache so that it utilizes the SSD as much as possible, so you’re not slowed by the mechanical latencies of the HDDs, while still taking advantage of the high sequential performance of large RAID arrays. LSI designed that intelligence into CacheCade: hot spots for reads in HDD arrays are cached in SSD, and random writes are written to SSD first so they can be lazily written out to the HDDs in an optimized fashion. (Of course, for writes to be cached you must have two or four SSDs in a CacheCade 2.0 mirrored volume so that an SSD failure causes no data loss. Side note: CacheCade 1.0 did read caching only.)
For an in-depth performance analysis, here’s an interesting article over at The SSD Review that goes into the performance metrics.
On to the details of deploying CacheCade on your Linux server. We use Ubuntu Server as the underlying OS for our QuantaStor storage appliance software, so I’m going to share a Linux perspective on getting your server configured with CacheCade Pro 2.0.
Note, you’ll want to run the following commands as root, so if you’re on an Ubuntu system be sure to do this first:

sudo -i
Also note that the following steps will require some packages to be installed; here’s how to do that on Ubuntu:
apt-get install alien gcc dkms wget
First step: upgrade the firmware. As of the writing of this post the latest firmware for CacheCade 2.0 is 12.13, which you can download here; you most likely have v12.12 firmware on your MegaRAID controller:
- CacheCade Pro 2.0 firmware for 9260-8i, 9260DE-8i, 9261-8i, 9280-4i4e, 9280-8e, 9280DE-8e
- CacheCade Pro 2.0 firmware for 9280-24i4e, 9280-16i4e, 9260-16
Alternatively, you can skip the above and download it directly to your Linux server with this command; in fact, I’m going to use the wget route for the rest of the article:
wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/12.13.0-0104_SAS_2108CC_FW_Image_APP-2.130.03-1332.zip
unzip 12.13.0-0104_SAS_2108CC_FW_Image_APP-2.130.03-1332.zip
or if you have one of the high port count models (-24i4e, -16):
wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/12.13.0-0105_SAS_High_Port_Count_CC_FW_Image_APP-2.130.03-1332.zip
unzip 12.13.0-0105_SAS_High_Port_Count_CC_FW_Image_APP-2.130.03-1332.zip
or if you have a 9265-8i:
wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/23.7.0-0031_SAS_FW_IMAGE_APP_3.190.15-1686.zip
unzip 23.7.0-0031_SAS_FW_IMAGE_APP_3.190.15-1686.zip
Now you can install the firmware using MegaCLI; if you don’t have it, you can download it here:
I’m not a big fan of MegaCLI, but the good news is that LSI is releasing a new and better CLI this year, based on the old LSI 3ware CLI, so we’re looking forward to that. One other hurdle: MegaCLI isn’t packaged in .deb format, so you’ll need to use the ‘alien’ tool to convert the rpm package to a .deb package before you can install it. Once you have the CLI installed, updating the firmware is easy:
cd /opt/MegaRAID/MegaCli
./MegaCli64 -adpfwflash -f mr2108fw.rom -a0
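To confirm the flash took, you can query the adapter and filter for the firmware package line. A minimal sketch: the “FW Package Build” field name is assumed from typical MegaCLI -AdpAllInfo output, and the filter is demonstrated here against a captured sample rather than live hardware:

```shell
# On real hardware the command would be:
#   ./MegaCli64 -AdpAllInfo -a0 | grep "FW Package"
# Below, the same filter is run over sample output (field names assumed).
sample='FW Package Build: 12.13.0-0104
FW Version         : 2.130.03-1332
BIOS Version       : 3.24.00'
echo "$sample" | grep "FW Package"
# -> FW Package Build: 12.13.0-0104
```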
Once completed you’re up to the 12.13 firmware and your hardware can now support CacheCade 2.0. We use the hardware key, so if you have one, go ahead and power down and add it to your controller if you haven’t done so already.
Next up you need to move to the latest LSI MegaRAID driver, specifically 00.06.12 or newer.
In there you’ll find packages for many kernels, but in some cases, like ours, you’ll need to build the driver. It’s not so bad:
tar xvfz 6.12_Linux_Driver_components.tgz
mv megaraid_sas-v00.00.06.12 /usr/src
dkms add -m megaraid_sas -v v00.00.06.12
dkms build -m megaraid_sas -v v00.00.06.12
dkms install -m megaraid_sas -v v00.00.06.12
That’s really all there is to it. DKMS is a tool for building Linux kernel drivers that was originally developed by Linux engineers at Dell. It’s been around for a long time, and it’s great because it lets you build and install new drivers for Linux without a kernel rebuild.
Once the driver is installed and the firmware is upgraded you should restart the system, then verify that the new driver is running after the reboot by running:

modinfo megaraid_sas | grep version

That should display the 06.12 driver version; if it’s showing an older driver, then something went wrong with the driver upgrade.
On to configuration. You should have two or four SSDs for use with CacheCade so that you can use it for both write caching and read caching. The performance benefit of the write cache is so big that it would be a huge waste not to use it in that capacity, so the following outlines how to configure it using four drives. First, you’ll need to locate the SSDs you want to use for the CacheCade volume:
./MegaCli64 -pdlist -a0
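The pdlist output is verbose, so it helps to filter it down to just the fields that identify SSDs. A small sketch: the field names (“Enclosure Device ID”, “Slot Number”, “Media Type”) are assumed from typical MegaCLI pdlist output, and the filter is demonstrated against a captured sample rather than live hardware:

```shell
# On real hardware:
#   ./MegaCli64 -pdlist -a0 | grep -E "Enclosure Device ID|Slot Number|Media Type"
# Demo of the same filter over sample pdlist output (field names assumed).
pdlist='Enclosure Device ID: 25
Slot Number: 0
Firmware state: Online, Spun Up
Media Type: Solid State Device
Enclosure Device ID: 25
Slot Number: 1
Firmware state: Online, Spun Up
Media Type: Hard Disk Device'
echo "$pdlist" | grep -E "Enclosure Device ID|Slot Number|Media Type"
```

SSDs show up with a “Solid State Device” media type; note their enclosure and slot numbers, since those feed the -Physdrv[enclosure:slot,...] argument in the next step.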
Once identified you can create the CacheCade volume like so:
./MegaCli64 -CfgCacheCadeAdd -r1 -Physdrv[25:0,25:3,25:6,25:9] -a0
Your choices are -r1 or -r0, but as noted above you’ll always want to use -r1 (mirroring) with two or four drives. The version of CacheCade we have is limited to 500GB, so you’ll want to get a pair of 240GB or four 120GB SSD drives. Next, you can assign the CacheCade volume to one or all of your HDD-based virtual drives (the -L number below; you can list them with ./MegaCli64 -LDInfo -Lall -a0) like so:
./MegaCli64 -Cachecade -assign -L2 -a0
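Since -r1 mirrors the SSDs, usable cache capacity is half the raw total, which is worth checking against the 500GB license limit when picking drive sizes. A quick arithmetic sketch (the drive size and count here are just example numbers):

```shell
drive_gb=120   # capacity of each SSD, in GB (example value)
count=4        # number of SSDs in the -r1 CacheCade volume
usable=$(( drive_gb * count / 2 ))   # mirroring halves the raw capacity
echo "usable cache: ${usable} GB"    # -> usable cache: 240 GB
```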
To unassign you can do the reverse like so:
./MegaCli64 -Cachecade -remove -L2 -a0
To remove the CacheCade device altogether, you first need to identify its logical drive number, and then you can run this:
./MegaCli64 -LDInfo -Lall -a0 | grep "CacheCade"
CacheCade Virtual Drive: 1 (Target Id: 1)
Virtual Drive Type: CacheCade
./MegaCli64 -CfgCacheCadeDel -L1 -a0
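If you want to script the removal, the target id can be pulled out of the -LDInfo output with sed. A minimal sketch, run here against a captured sample of the output shown above (format assumed) rather than live hardware:

```shell
# On real hardware you'd capture this from:
#   ./MegaCli64 -LDInfo -Lall -a0 | grep "CacheCade"
ldinfo='CacheCade Virtual Drive: 1 (Target Id: 1)
Virtual Drive Type: CacheCade'
# Extract the number following "Target Id:"
target=$(echo "$ldinfo" | sed -n 's/.*Target Id: \([0-9]*\).*/\1/p')
echo "CacheCade target: ${target}"   # -> CacheCade target: 1
```

That value can then be fed to the delete command, e.g. ./MegaCli64 -CfgCacheCadeDel -L${target} -a0.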
And last (a plug for QuantaStor): if you’re looking to build a storage appliance that will take great advantage of your new LSI controller (and spare you everything above except the firmware bit), I invite you to try out OSNEXUS QuantaStor. We’ve fully integrated it with MegaRAID CacheCade Pro 2.0 so you can create RAID arrays and CacheCade SSD devices quickly and easily; I’ll cover that in more detail in another article. The current release available on our web site is v2.1, but here’s a link to an early release of v2.5 which has all the CacheCade integration. (Note: when the official v2.5 release comes out next month you can just use the Upgrade Manager in the QuantaStor Manager web interface to move up to the official v2.5 release build.)
In my next post I’ll dive into some other new features coming in v2.5 and cover how to configure CacheCade within QuantaStor.