The Large Hadron Collider (LHC), one of the most complex experimental facilities ever built, is an underground ring roughly 17 miles (27 kilometers) in circumference, crossing the border between Switzerland and France. Inside it, particles travel at 99.99 percent of the speed of light and smash into each other roughly 600 million times per second.
Scientists hope that the energy given off by the collisions will yield answers to open questions such as whether extra dimensions exist in the universe, or the nature of the dark matter that appears to account for roughly 27 percent of the universe's mass-energy.
Smashing atoms together generates not only lots of energy but also huge amounts of particle collision data, at a rate of more than 25 PB per year, according to the CERN IT department.
In the early years of the LHC, the CERN IT staff deployed a unique storage strategy to handle the massive amount of scientific data, successfully scaling their storage system to 100PB. However, according to Daniel van der Ster and Arne Wiebalck in an IOPscience journal article, “in recent years, innovations from companies such as Google, Yahoo, and Facebook have demonstrated that the Big Data problems seen by other communities are approaching, and often surpassing those of the LHC.”
In response, CERN IT decided to investigate new storage technologies and landed on Ceph. Through rigorous testing, the CERN team found that Ceph gave administrators fine-grained control over data distribution and replication strategies, consolidated object and block storage in one system, and enabled very fast provisioning of boot-from-volume instances via thin provisioning.
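As an illustration of that fine-grained control, a replicated pool and a thin-provisioned block image can be set up with a handful of commands. This is a minimal sketch against a hypothetical cluster; the pool and image names are invented here, and exact defaults vary by Ceph release:

```
# Create a pool for VM volumes with 128 placement groups
# (pool name and pg count are illustrative)
ceph osd pool create volumes 128

# Keep three replicas of every object; keep serving I/O
# as long as at least two replicas survive
ceph osd pool set volumes size 3
ceph osd pool set volumes min_size 2

# RBD images are thin-provisioned: this 10 GiB image
# consumes space only as data is actually written
rbd create volumes/vm-boot-disk --size 10240
```

The replica counts are per-pool, which is what allows an administrator to match replication strategy to the value of each data set rather than applying one policy cluster-wide.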
From Smashing Atoms to Day-to-Day Workloads
OSNEXUS sees Ceph as an ideal solution not just for high performance computing (HPC) applications like those at CERN but also for Virtual Machine (VM) workloads in the enterprise and large datacenters. Given Ceph's strong OpenStack integration and the uptick in Ceph adoption in recent quarters, VM storage is clearly emerging as a key use case for the technology.
Many proprietary scale-out solutions have done a good job handling millions or billions of files, but VM workloads are different. Making block devices scale out without sacrificing performance requires the new approaches that Ceph employs to deliver block-level storage that remains available even in the event of a server outage.
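One common way this plays out in practice is OpenStack's Cinder block storage service backed by Ceph RBD, which is how boot-from-volume VMs end up on highly available Ceph storage. A minimal `cinder.conf` fragment might look like the following; the backend, pool, and user names are illustrative, and option details vary by OpenStack release:

```
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes              # Ceph pool holding VM volumes (illustrative name)
rbd_user = cinder               # cephx identity Cinder authenticates as
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false   # keep clones thin (copy-on-write)
rbd_max_clone_depth = 5
```

Because volumes live in Ceph rather than on any one hypervisor's disks, a VM can be restarted on another host after a server failure without moving its data.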
“Ceph is a fantastic storage technology,” says Steve Umbehocker, CEO of OSNEXUS. “The teams at Red Hat and Inktank have made a major contribution to the open source world, and our goal is to make it easier for enterprises and cloud providers to adopt Ceph by integrating it into our enterprise SDS platform.”
OSNEXUS is planning to release its first version of QuantaStor with integrated Ceph support next month.