The Future of High Availability in Software Defined Storage: Ceph and GlusterFS

If storage administrators had to name the single most important feature of an SDS solution, it would arguably be High Availability (HA). To date, High Availability has been a challenge for many Software Defined Storage solutions because the traditional failover mechanism requires special hardware and the failover process can be slow. Slow is “bad” for HA failover because if it takes too long for storage access to come back online, VMs can lock up and then need to be rebooted.

Scale-out, open-source storage technologies such as Ceph and Gluster take a fundamentally different approach and are in the process of changing the storage landscape.

Ceph and Gluster achieve High Availability by making multiple copies of data that are spread across multiple servers in a cluster so there is no single point of failure. Turn off any node, or in some cases even multiple nodes, and there’s no downtime and near-instantaneous failover of workloads. This is a major step forward because proprietary or custom hardware is no longer required to achieve fast, reliable failover.
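To make that concrete, here’s a minimal sketch of how that replica count is expressed in each system, assuming a running cluster and the standard ceph and gluster command-line tools; the pool, volume, and brick names below are hypothetical.

```python
# Sketch: defining 3-way replication in Ceph and Gluster.
# Assumes a working cluster, admin credentials, and the standard
# 'ceph' / 'gluster' CLIs on the path. All names are placeholders.
import subprocess

def run(cmd):
    """Run a cluster admin command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# Ceph: a replicated pool keeps 'size' copies of every object,
# placed on different hosts by the CRUSH map.
run(["ceph", "osd", "pool", "create", "vmpool", "128"])
run(["ceph", "osd", "pool", "set", "vmpool", "size", "3"])      # 3 copies of each object
run(["ceph", "osd", "pool", "set", "vmpool", "min_size", "2"])  # keep serving I/O with a node down

# Gluster: a 'replica 3' volume mirrors every file across three bricks
# on three different servers.
run(["gluster", "volume", "create", "bigdata", "replica", "3",
     "node1:/bricks/b1", "node2:/bricks/b1", "node3:/bricks/b1"])
run(["gluster", "volume", "start", "bigdata"])
```

With three copies and min_size set to two, the Ceph pool keeps serving I/O while a node is offline and re-replicates once it returns; the Gluster volume behaves similarly at the file level.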

In our meetings with customers and partners we’re seeing Ceph quickly become the go-to standard for OpenStack deployments, while Gluster is becoming the de facto standard scale-out file storage platform for Big Data deployments. The RADOS object storage architecture underlying Ceph makes it ideal for virtual machine workloads, while Gluster’s file-based architecture is a great fit for unstructured data such as documents, archives, and media, so we see a long-term future for both technologies.

So the key takeaway is that both technologies significantly reduce the cost of deploying highly available storage clouds. With Ceph and Gluster there is no need to purchase expensive proprietary block or file storage. You get the speed, reliability, and features you need, such as snapshots, cloning, thin provisioning, and massive scalability, without the vendor lock-in, all on commodity hardware that can be expanded with RAM and solid state drives (SSDs) to accelerate throughput and IOPS performance.
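As an illustration of those features on the Ceph side, here’s a hedged sketch using the librbd Python bindings to create a thin-provisioned block device, snapshot it, and spin up a copy-on-write clone. It assumes python3-rados and python3-rbd are installed and a pool named ‘rbd’ exists; the image and snapshot names are made up for the example.

```python
# Sketch: thin provisioning, snapshots, and cloning with Ceph RBD.
# Assumes python3-rados / python3-rbd and an existing pool named 'rbd'.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")
try:
    r = rbd.RBD()
    # A 10 GiB image is thin-provisioned: space is only consumed as data is written.
    r.create(ioctx, "vm-disk", 10 * 1024**3,
             old_format=False, features=rbd.RBD_FEATURE_LAYERING)

    # Snapshot the image and protect the snapshot so it can be cloned.
    img = rbd.Image(ioctx, "vm-disk")
    try:
        img.create_snap("golden")
        img.protect_snap("golden")
    finally:
        img.close()

    # Copy-on-write clone: a new VM disk that shares unchanged blocks with 'golden'.
    r.clone(ioctx, "vm-disk", "golden", ioctx, "vm-disk-clone",
            features=rbd.RBD_FEATURE_LAYERING)
finally:
    ioctx.close()
    cluster.shutdown()
```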

At OSNEXUS, we integrated Gluster into the platform in 2013, and today we’re focused on deep integration of the Ceph file-system so that a broader audience can set up, manage, and maintain their virtualized environments with ease. We’ll be rolling out the first version of QuantaStor (v3.14) with integrated Ceph management in November.

QuantaStor is a scale-out SDS solution that installs on bare-metal server hardware or as a VM, so you don’t have to deal with the complexities typically associated with deploying, installing, and managing scale-out storage. For more information on how to get a copy of the QuantaStor Trial Edition or Community Edition, click here.

Ceph on the QuantaStor SDS Platform



Categories: Ceph, GlusterFS, Software Defined Storage, Storage Appliance Hardware


3 replies

  1. Will this allow replication across datacenters?

    • With the initial release of QuantaStor (v3.14) with Ceph support there won’t be the ability to replicate across datacenters, but we aim to remedy that in a follow-on update. In the interim, one could replicate workloads using something like Veeam, VMware Replication, or an equivalent tool to replicate VMs across datacenters. Ceph has a snapshot model and a differential import/export mechanism (rbd export-diff / rbd import-diff) that’s very similar to the ZFS send/recv commands we use for storage volume replication today. So the good news is that the tech we need for cross-datacenter replication with Ceph is there, we just have to integrate it. (There’s a rough sketch of that flow after the replies below.)

  2. Steve: it’s not so much VM replication as data replication that is the golden key. Sharing the data between DCs and presenting it with GFS or OCFS would be awesome.
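To close the loop on the rbd export-diff / import-diff mechanism mentioned in the first reply, here’s a minimal, hedged sketch of the diff-based replication flow. The pool, image, and snapshot names and the “dr-site” SSH host are placeholders, and QuantaStor’s eventual integration may look quite different.

```python
# Sketch: ship only changed blocks of an RBD image to a remote cluster.
# Assumes an image 'vm01' in pool 'rbd', a previous replication snapshot,
# and SSH access to a host ('dr-site') with the remote Ceph cluster's CLI.
import subprocess

POOL, IMAGE = "rbd", "vm01"            # hypothetical pool / image names
PREV_SNAP, NEW_SNAP = "rep-001", "rep-002"

# 1. Take a new snapshot to mark the replication point.
subprocess.run(["rbd", "snap", "create", f"{POOL}/{IMAGE}@{NEW_SNAP}"], check=True)

# 2. Export only the blocks that changed since the previous snapshot and
#    stream them straight into import-diff on the remote cluster.
export = subprocess.Popen(
    ["rbd", "export-diff", "--from-snap", PREV_SNAP,
     f"{POOL}/{IMAGE}@{NEW_SNAP}", "-"],
    stdout=subprocess.PIPE)
subprocess.run(
    ["ssh", "dr-site", "rbd", "import-diff", "-", f"{POOL}/{IMAGE}"],
    stdin=export.stdout, check=True)
export.stdout.close()
export.wait()
```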
