If storage administrators could name the single most important feature of a Software Defined Storage (SDS) solution, it would arguably be High Availability (HA). To date, High Availability has been a challenge for many Software Defined Storage solutions because the traditional failover mechanism requires special hardware and the failover process can be slow. Slow is bad for HA failover: if storage access takes too long to come back online, VMs can lock up and then need to be rebooted.
Scale-out open storage technologies such as Ceph and Gluster take a fundamentally different approach and are in the process of changing the storage landscape.
Ceph and Gluster achieve High Availability by keeping multiple copies of data spread across the servers in a cluster, ensuring there is no single point of failure. Turn off any node, or in some cases even multiple nodes, and there's no downtime and near-instantaneous failover of workloads. This is a major step forward because proprietary or custom hardware is no longer required to achieve fast, reliable failover.
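As a rough illustration of how this works in Ceph, the number of copies kept is a per-pool setting that administrators can adjust from the command line. A sketch (the pool name and placement-group count here are hypothetical):

```shell
# Create a pool and tell Ceph to keep 3 replicas of every object
# (pool name "rbd-volumes" and PG count of 128 are illustrative)
ceph osd pool create rbd-volumes 128
ceph osd pool set rbd-volumes size 3       # 3 copies spread across servers
ceph osd pool set rbd-volumes min_size 2   # stay online with one copy down

# Verify the replication settings for all pools
ceph osd pool ls detail
```

With `size 3` and `min_size 2`, the cluster continues serving I/O even while a node (and its copy of the data) is offline, which is what makes the fast failover described above possible.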
In our meetings with customers and partners we're seeing Ceph quickly progressing to becoming the go-to standard for OpenStack deployments, while Gluster is becoming the de facto standard scale-out file storage platform for Big Data deployments. The RADOS object storage architecture underlying Ceph makes it ideal for virtual machine workloads, while Gluster's file-based architecture is great for unstructured data such as documents, archives, and media, so we see a long-term future for both of these technologies.
So the key takeaway is that both technologies reduce the cost of deploying highly available storage clouds by a significant factor. With Ceph and Gluster there is no need to purchase expensive proprietary block or file storage. You get the speed, reliability, and features you need, such as snapshots, cloning, thin provisioning, and massive scalability, without the vendor lock-in, all on commodity hardware that can be expanded with RAM and solid state drives (SSDs) to accelerate throughput and IOPS performance.
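To give a feel for those features, here is a sketch using Ceph's RBD block storage tool, where thin-provisioned images, snapshots, and copy-on-write clones each take a single command (the pool and image names are hypothetical):

```shell
# Create a thin-provisioned 10 GiB block image; space is only
# consumed as data is actually written
rbd create rbd/vm-disk --size 10240

# Take a point-in-time snapshot, then protect it so it can be cloned
rbd snap create rbd/vm-disk@base
rbd snap protect rbd/vm-disk@base

# Create a copy-on-write clone of the snapshot, e.g. to spin up
# another VM from the same base image almost instantly
rbd clone rbd/vm-disk@base rbd/vm-disk-clone
```

Because clones share unmodified data with their parent snapshot, many VMs can be provisioned from one base image without duplicating storage up front.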
At OSNEXUS, we integrated Gluster into the platform in 2013, and today we're focused on deep integration of the Ceph file system so that a broader audience can set up, manage, and maintain their virtualized environments with ease. We'll be rolling out the first version of QuantaStor (v3.14) with integrated Ceph management in November.
QuantaStor is a scale-out SDS solution that installs on bare metal server hardware or as a VM so that you don’t have to deal with the complexities typically associated with deploying, installing, and managing scale-out storage. For more information on how to get a copy of QuantaStor Trial Edition or QuantaStor Community Edition click here.
Ceph on the QuantaStor SDS Platform
