Ceph has become the most popular scale-out open-source storage technology and is used in more than 56% of all OpenStack deployments. Ceph configurations can scale out from small three-node clusters to hyper-scale deployments of 100 or more storage servers, enabling organizations to boost capacity and performance with almost no limits. At the same time, the auto-healing and intelligent rebalancing technologies within Ceph make it easy to replace old systems and add or remove storage with ease.
The most common complaint we hear about Ceph is its complexity. Operationalizing, managing, and monitoring it over time using low-level CLI tools and scripts requires deep expertise, and many organizations don't have the time to dedicate to deploying and maintaining Ceph. That, combined with the days or weeks deployment often takes, is a major barrier to Ceph adoption in the enterprise. OSNEXUS integrated Ceph into its QuantaStor SDS platform starting in 2015 to solve exactly these problems.
QuantaStor makes Ceph turn-key and deployable in minutes with little to no knowledge of Ceph or Linux. Setting up a Ceph cluster within QuantaStor takes roughly a dozen clicks, allowing storage administrators to take advantage of the powerful benefits and features of Ceph without having to learn and deal with its complexities. Check out our previous post, The Easy Button for Ceph Deployments, for more on how Ceph and QuantaStor SDS are advancing software-defined storage.
In the short demo video below, OSNEXUS CEO Steve Umbehocker shows how easy it is to deploy Ceph-based object storage with QuantaStor SDS from start to finish, entirely through the QuantaStor web management interface. Topics covered in the video include:
- How to set up the front-end and back-end networks
- How to verify the appliance network domain suffix and NTP servers are set correctly
- How to create a new Ceph cluster using any number of appliances within a QuantaStor storage grid
- How to watch in real time as QuantaStor sets up the Ceph cluster and its monitors
- How to add additional monitors as needed depending on how many nodes are in the cluster
- How to add all the storage (OSDs and journals) to the cluster in a single step
- How to enable S3 object storage access by creating an object pool group with a specific data layout type (replicated or erasure-coded)
- How to manage S3 user access by adding/removing object storage accounts
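For readers curious what QuantaStor is automating behind those clicks, the last few steps correspond roughly to the raw Ceph CLI workflow sketched below. This is an illustrative sketch only, not QuantaStor's actual implementation; the profile name `ec42`, pool names, and user ID `demo-user` are made-up placeholders, and the commands assume a running Ceph cluster with the RADOS Gateway deployed.

```shell
# Define an erasure-code profile: 4 data chunks + 2 coding chunks
# (placeholder name "ec42"; QuantaStor configures this via the UI).
ceph osd erasure-code-profile set ec42 k=4 m=2

# Create an erasure-coded data pool using that profile
# (pool name and PG counts are illustrative).
ceph osd pool create demo.rgw.buckets.data 64 64 erasure ec42

# Alternatively, create a replicated pool for the replica layout type.
ceph osd pool create demo.rgw.buckets.index 32 32 replicated

# Add an S3 object storage account via the RADOS Gateway admin tool.
radosgw-admin user create --uid=demo-user --display-name="Demo User"

# Remove the account to revoke S3 access.
radosgw-admin user rm --uid=demo-user
```

Hiding this layer of pool profiles, placement groups, and gateway user management is precisely the complexity QuantaStor's web interface is designed to absorb.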