
Deploying a High Availability Storage Cluster with GlusterFS

During the Paris OpenStack Summit earlier this month, Red Hat announced the latest version of GlusterFS, version 3.6.0, with new features including volume snapshots, erasure coding across GlusterFS volumes, improved SSL support, and rewritten automatic file replication code for improved performance.

Today, GlusterFS delivers the speed, reliability, and features such as snapshots, cloning, thin provisioning, and massive scalability, on commodity hardware that can be expanded with RAM and solid state drives (SSDs) to accelerate throughput and IOPS performance.

As we’ve stated before, we believe that GlusterFS is becoming the de facto standard scale-out file storage platform for Big Data deployments, as its file-based architecture is a great fit for unstructured data ranging from documents and archives to media.

Online Upgrades, Mostly

When managing Big Data, the key feature is high availability. With multi-petabyte archives and potentially hundreds of client applications reading and writing files, it’s typically very difficult to find a maintenance window where the storage can be offline for upgrades. But with cluster-based solutions like GlusterFS you can upgrade hardware without imposing downtime on clients, thanks to GlusterFS’s replica-based architecture. Multiple replicas provide access to data even if one copy of the data on a given appliance node goes offline.

The trouble is that updating the GlusterFS software itself can require a coordinated upgrade across nodes, and therefore a maintenance window. New features can be difficult to synchronize while older versions of the software are still running on other nodes. In general, the GlusterFS team has done a great job with the more recent versions, but for any storage deployment you’ll need to factor in a maintenance window; if you can’t afford one, you’ll need to set up replication so that you can fail over to a second storage cluster while the first one is being upgraded.
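
If you do have a maintenance window, a rolling upgrade typically proceeds one node at a time, waiting for self-heal to finish before moving on. The sketch below is a generic GlusterFS illustration rather than a QuantaStor-specific procedure; the volume name, package name, and service name are assumptions that vary by distribution, and the release notes for the versions involved always take precedence.

    # Generic rolling-upgrade sketch, one node at a time (volume "myvol" is hypothetical).
    gluster volume heal myvol info        # confirm no pending heals before touching this node
    service glusterfs-server stop         # stop the Gluster daemon on this node only
    apt-get install glusterfs-server      # install the newer packages (Debian/Ubuntu example)
    service glusterfs-server start
    gluster volume heal myvol info        # wait for self-heal to finish before the next node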

Boosting Efficiency with Erasure Coding

The downside to using replicas for high availability is the dramatic drop in usable storage. With two copies of every file, only 50 percent of your storage is usable, and with three copies only 33 percent is usable. This means that if you have 10PB of files and you maintain two copies of each file to keep the solution highly available, you will need to purchase 20PB of raw storage.

Erasure coding takes a different approach to delivering high availability and fault tolerance: it stores parity information instead of full copies, so the storage overhead can be as low as 10 percent in some cases. Instead of needing to buy 20PB of raw storage you would only need roughly 12PB. For those familiar with RAID technology, you can think of it as loosely similar to network RAID5. This is a new capability for GlusterFS, and it’s critical for deployments that need to scale to tens of petabytes, where the cost of raw hardware and power becomes a serious issue under the replica model.
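
To make the arithmetic concrete, the sketch below assumes a hypothetical 10+2 dispersed layout and uses the GlusterFS 3.6 style of command for creating an erasure-coded (dispersed) volume; the hostnames and brick paths are placeholders.

    # A 10+2 dispersed layout stores 10 data fragments plus 2 parity fragments per stripe,
    # so 10PB of usable data needs roughly 10PB * 12/10 = 12PB of raw capacity (20% overhead),
    # versus 20PB with two full replicas.
    gluster volume create ecvol disperse 12 redundancy 2 \
        node{1..12}:/export/brick1/ecvol
    gluster volume start ecvol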

Making GlusterFS Easy to Manage

QuantaStor takes a holistic approach to GlusterFS integration by bringing management, monitoring, and NFS/CIFS services together so that deployments can be done faster, easier, with point-click-provision simplicity.

Provisioning GlusterFS Volumes

Gluster Volumes are provisioned from the ‘Gluster Management’ tab in the QuantaStor web management interface. To make a new Gluster Volume, simply right-click on the Gluster Volumes section or choose Create Gluster Volume from the toolbar (Figure 2).

To make a Gluster Volume highly available, be sure to choose a replica count of two or three. If you only need fault tolerance against disk failure, which the underlying storage pools already provide, you can use a replica count of one, but if an appliance goes offline then that portion of the data will be inaccessible. With a replica count of two or three, your data remains available even if a node is taken offline.

Figure 2
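
For reference, the volume created by the dialog corresponds to a standard replicated Gluster volume. A hedged sketch of the equivalent definition at the Gluster CLI level, with hypothetical hostnames and brick paths (QuantaStor issues the equivalent operations for you when you provision through the web interface):

    # Two-way replicated volume spanning two appliances.
    gluster volume create webdata replica 2 \
        qs-node1:/export/pool1/webdata qs-node2:/export/pool1/webdata
    gluster volume start webdata
    gluster volume info webdata     # verify the replica count and brick layout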

Auto Healing

When the appliance is turned back on, it will automatically synchronize with the other nodes to bring itself up to the proper current state via auto-healing. GlusterFS does all the work for you by comparing the contents of the “bricks” and then synchronizing the appliance that was offline to bring it up to date.
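
If you want to watch healing progress from the command line, the standard Gluster heal commands apply (the volume name below is hypothetical):

    gluster volume heal webdata info                # files still queued for self-heal, per brick
    gluster volume heal webdata info split-brain    # entries that would need manual resolution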

High-Availability for Gluster Volumes

When using the native Gluster client from a Linux server, there are no additional steps required to make a volume highly available: the client communicates with the server nodes, retrieves the peer status information, and handles failover automatically. To see the commands to connect to your QuantaStor appliance via the native Gluster protocol, just right-click on the volume and choose ‘View Mount Command.’
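
The command it shows follows the standard native-client form; a hedged example with a hypothetical hostname, volume name, and mount point:

    # Mount a Gluster volume with the native FUSE client; any node in the cluster can be named
    # here, and the client learns about the other nodes and fails over automatically.
    mount -t glusterfs qs-node1:/webdata /mnt/webdata

    # Optional fstab entry with a fallback server for fetching the volume file at mount time:
    # qs-node1:/webdata  /mnt/webdata  glusterfs  defaults,backupvolfile-server=qs-node2  0 0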

When accessing your Gluster Volume via traditional protocols such as CIFS or NFS, additional steps are required to make the storage highly available because CIFS and NFS clients communicate with a single IP address.

If the appliance serving storage through an interface with that IP address is turned off, then the IP address must move to another node to ensure continued access to the storage. QuantaStor provides this capability natively by letting you create virtual network interfaces for your Gluster Volumes that automatically float to another node, maintaining CIFS/NFS access to your storage in the event that an appliance is turned off.
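
In practice this means CIFS/NFS clients should always mount through the virtual interface’s address rather than a node’s physical address; a hedged example with hypothetical addresses, share names, and credentials:

    # Clients mount through the floating virtual IP (192.168.0.100 here), so the mount
    # keeps working after the address fails over to a surviving node.
    mount -t nfs 192.168.0.100:/webdata /mnt/webdata
    # or, for CIFS/SMB clients:
    mount -t cifs //192.168.0.100/webdata /mnt/webdata -o username=svc_backup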

OSNEXUS engineering is actively performing feature validation of GlusterFS 3.6 and the new erasure coding features. We look forward to releasing an updated version of QuantaStor in early 2015 with erasure coding support to take advantage of the efficiency gains it provides.

For more in-depth technical information on Managing Scale-out GlusterFS Volumes see the OSNEXUS administrators’ guide.


Planning a Disaster Recovery Strategy: Automated Backup Policies for Software Defined Storage

From a business process perspective, a “disaster” needs to encompass everything from application downtime or hardware failures to computer viruses and hackers that cause business disruptions with economic consequences that may be just as impactful as a fire or flood. The core pillars of any disaster recovery strategy must take into consideration the following:

  • Assessing business exposure to a wide range of business disruptions
  • Reviewing storage options for preparation and recovery
  • Setting recovery expectations and data polices that inform storage backup priority decisions
  • Establishing automated backup policies and a testing plan for vulnerabilities and downtime

In today’s heterogeneous storage environments, storage resources may be spread across on-premises data centers or cloud storage pools, ranging from proprietary storage systems and Windows servers to open storage systems running on Linux. For this reason, a backup strategy needs to account for the various types of critical information that must be backed up, segmented by business function such as legal, marketing, finance, engineering, or health and medical data.

Automating Backups with Software Defined Storage

Within QuantaStor you can create backup policies that will automatically backup CIFS/NFS shares on your network to your QuantaStor appliance.  Whether the share is on a 3rd party NAS filer, Linux, Windows, or other server presenting NFS or CIFS shares, QuantaStor Backup Policies make implementing a DR strategy easy.

Configuration

To create a backup policy in a QuantaStor appliance, right-click on the Network Share to which you want the data backed up and choose the ‘Create Backup Policy’ option (Figure 1). From here you’ll select the CIFS/NFS share on your network to be backed up, and the times at which you want the backup jobs to run.

When the backup policy runs it will attach to the specified CIFS/NFS share on your network to access the data to be archived. When the backup starts, QuantaStor creates a “Backup Job” so you can track the progress of any given backup. Simply select the Backup Jobs tab in the center-pane of the web interface after you select the network share to which the backup policy is attached.

Figure 1

Parallelized Backup for Big Data

Backup policies in QuantaStor also support heavy parallelism so that very large NAS appliances with 100 million or more files can be easily scanned for changes. This feature was specifically designed for a life sciences company that had so many files (over 300 million) that they could not scan the entire data set within their backup window using traditional backup products and techniques. By default, QuantaStor backup policies use parallelism (up to 64 concurrent “scan+copy” threads), which dramatically reduces the backup window for Big Data scenarios.
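
QuantaStor’s scan+copy engine is built into the appliance, but the underlying idea can be illustrated with a small shell sketch that walks a mounted share and copies recently changed files with many workers in parallel. The paths and thread count are placeholders, and this is a conceptual sketch rather than QuantaStor’s actual implementation.

    SRC=/mnt/source-share       # hypothetical mount of the 3rd-party NAS share
    DST=/mnt/backup-share       # hypothetical QuantaStor network share
    # Find files modified in the last day and copy them with up to 64 parallel workers.
    find "$SRC" -type f -mtime -1 -print0 |
      xargs -0 -P 64 -n 16 cp --parents -t "$DST"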

Sliding Windows

Backup policies back up everything by default, but you can also opt to back up only recently created and modified files using a ‘Sliding Window Backup’. When backing up data from Big Data archives with hundreds of millions or even billions of files, it is sometimes useful or necessary to back up and maintain only a subset of the data. This is especially helpful when there’s more data to be backed up than space available in your QuantaStor appliance.

For example, if you set the data retention period of your Backup Policy to 60 days then all files that have been created or modified within the previous 60 days will be retained. There’s also a purge rule that by default is set to remove files that are older than the retention period from the backup folder (Figure 2).
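
Conceptually, the purge rule behaves like a scheduled cleanup of anything older than the retention period. A hedged sketch of the idea, with placeholder paths, rather than QuantaStor’s actual implementation:

    RETENTION_DAYS=60
    BACKUP_DIR=/mnt/backup-share       # hypothetical backup folder
    # Remove backup copies whose modification time falls outside the sliding window.
    find "$BACKUP_DIR" -type f -mtime +"$RETENTION_DAYS" -delete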

Figure 2

Consolidating Backups

Backup policies can also be configured to automatically consolidate or aggregate backups from remote network shares into a single network share on one QuantaStor appliance, organized by department, to satisfy mandated storage retention policies (Figure 3).

Figure 3

Executing Failover

With your Backup Policy in place and running automatically throughout the day, you can use various techniques to fail over to a QuantaStor SDS appliance in the event of a NAS appliance outage. The easiest is to make a DNS change so that the NAS filer’s hostname (FQDN) resolves to the IP address of the QuantaStor appliance, which transparently redirects existing client connections to the appliance. The other option is to reconfigure the clients manually so that they reconnect to their network shares using the IP address or hostname of the QuantaStor appliance where the backup copy resides, but that’s less practical if you are supporting many clients.
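
For the DNS-based approach, a dynamic update is usually all that’s needed. A hedged example using nsupdate, where the zone, hostname, and addresses are placeholders and your DNS server must be configured to accept dynamic updates:

    # Repoint the filer's hostname at the QuantaStor appliance holding the backup copy.
    printf '%s\n' \
      'server 192.168.0.2' \
      'zone example.local' \
      'update delete filer01.example.local. A' \
      'update add filer01.example.local. 300 A 192.168.0.50' \
      'send' | nsupdate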

For more technical information regarding automated backup policies and disaster recovery please visit the QuantaStor Administrators Guide on the OSNEXUS Wiki.


From Smashing Atoms to Remote Replication: Using Ceph for Highly Available Scale-out Storage

The Large Hadron Collider (LHC), one of the most complex experimental facilities ever built, is an underground ring roughly 17 miles (27 kilometers) in circumference, crossing through parts of both Switzerland and France, where speeding particles travel at 99.99 percent of the speed of light and smash into each other roughly 600 million times per second.

Scientists hope that the energy given off by the collisions will yield answers to questions such as the existence of extra dimensions in the universe or the nature of dark matter that appears to account for 27 percent of universal mass-energy.

Smashing atoms together generates not only lots of energy but also huge amounts of particle collision data, at a rate of more than 25PB per year, according to the CERN IT department.

In the early years of the LHC, the CERN IT staff deployed a unique storage strategy to handle the massive amount of scientific data, successfully scaling their storage system to 100PB. However, according to Daniel van der Ster and Arne Wiebalck in an IOPscience journal article, “in recent years, innovations from companies such as Google, Yahoo, and Facebook have demonstrated that the Big Data problems seen by other communities are approaching, and often surpassing those of the LHC.”

In response, CERN IT decided to investigate and leverage new storage technologies and they landed on Ceph. Through rigorous testing, the CERN team found that Ceph gave the administrator fine-grained control over data distribution, replication strategies, consolidation of object and block storage, and very fast provisioning of boot-from-volume instances using thin provisioning.

From Smashing Atoms to Day-to-Day Workloads

OSNEXUS sees Ceph as an ideal solution not just for high performance computing (HPC) applications like those at CERN but also for Virtual Machine (VM) workloads in the enterprise and large datacenters. Given Ceph’s strong OpenStack integration and the uptick in Ceph adoption in recent quarters, Ceph is clearly emerging as a key storage technology for these workloads.

Many proprietary scale-out solutions do a good job of handling millions or billions of files, but VM workloads are different. Making block devices scale out without sacrificing performance requires the new approaches Ceph employs to deliver block-level storage that remains available even in the event of a server outage.

“Ceph is a fantastic storage technology,” says Steve Umbehocker, CEO of OSNEXUS. “The teams at Red Hat and Inktank have made a major contribution to the open source world and our goal is to make it easier for enterprises and cloud providers to adopt Ceph by integrating it into our enterprise SDS platform.”

OSNEXUS is planning to release its first version of QuantaStor with integrated Ceph support next month.


Why Enterprises Need a Software Defined Storage Strategy for Cybersecurity Forensics

Earlier this month yet another major corporation reported that hackers managed to breach their firewall and steal information on millions of customers. This time it was JPMorgan Chase and luckily the breach was discovered quickly with no confidential information taken.

“Security vendors and practitioners need to develop better products and processes that automate ongoing analytical tasks, similar to the actions taken by JPMorgan Chase’s security analysts,” wrote WIRED magazine in a story covering the breach. “Products need to more accurately identify known breaches and eliminate the huge volume of noise produced by traditional security defense infrastructure.”

While the “noise” of traditional security defenses may be on the rise, often in the form of false-positive alerts of intrusions or malicious code, detecting a hacker’s digital footprints once a breach has been discovered involves scanning all packets that made it past the data center firewall as well as anything deposited into back-end storage systems.

For this reason, a good storage strategy is key when it comes to cybersecurity forensics after intrusion detection. Packet-capture and recording software should have policies that trigger the allocation of scale-out storage pools on demand, whether they are on premises or in the cloud.

WildPackets estimates that capturing all traffic on a network running at 1Gbps, whether a fully utilized 1G link or a 10G link utilized at 1Gbps, would produce roughly 11TB of data per day, and that a fully utilized 10G network would produce ten times that, or 110TB per day.
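
The 1Gbps figure is easy to sanity-check with back-of-the-envelope arithmetic:

    # 10^9 bits/s divided by 8 bits per byte, over an 86,400-second day, expressed in TB:
    echo "scale=1; (10^9 / 8) * 86400 / 10^12" | bc
    # => 10.8   (roughly 11TB/day; a fully utilized 10G link is ten times that, about 110TB/day)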

An enterprise with a dedicated 100TB storage appliance could let an investigator go back in time almost 24 hours on a fully utilized 10G network, but if the breach occurred during the past week or even month, the storage requirements become quite large and can outstrip allocated storage resources, depending on the security compliance policies the enterprise has in place.

Investigating captured network traffic around the clock, reaching far back in time, with policies that purge data at set intervals such as 30, 60, or 90 days, requires a scale-out Software Defined Storage strategy and architecture.

Cybersecurity forensic examiners also need to be notified when data recording to a dedicated storage pool starts or stops in response to a known breach or malicious code that has made it past a data center firewall.

QuantaStor has script call-outs that can be used to extend the functionality of OSNEXUS SDS appliances for integration with custom applications including packet capture software and network monitoring appliances.

In our next post on the topic of Cybersecurity Forensics and Software Defined Storage we’ll outline best practices for continuous data capture in the event of a security breach.


The Future of High Availability in Software Defined Storage: Ceph and GlusterFS

If storage administrators had to name the single most important feature of an SDS solution, it would arguably be High Availability (HA). To date, High Availability has been a challenge for many Software Defined Storage solutions because the traditional mechanism for HA failover requires special hardware, and the failover process can be slow. Slow is “bad” for HA failover because if storage access takes too long to come back online, VMs can lock up and then need to be rebooted.

Scale-out open storage technologies such as Ceph and Gluster take a fundamentally different approach and are in the process of changing the storage landscape.

Ceph and Gluster achieve High Availability by making multiple copies of data that are spread across multiple servers in a cluster to ensure there is no single point of failure. Turn off any node, or in some cases even multiple nodes, and there’s no downtime and near-instantaneous failover of workloads. This is a major step forward because proprietary or custom hardware is no longer required to achieve fast, reliable failover.

In our meetings with customers and partners we’re seeing Ceph quickly progressing toward becoming the go-to standard for OpenStack deployments, while Gluster is becoming the de facto standard scale-out file storage platform for Big Data deployments. The RADOS object storage architecture underlying Ceph makes it ideal for virtual machine workloads, while Gluster’s file-based architecture is great for unstructured data such as documents, archives, and media, so we see a long-term future for both of these technologies.

So the key takeaway is that both technologies reduce the cost of deploying highly available storage clouds by a significant factor. With Ceph and Gluster there is no need to purchase expensive proprietary block or file storage. You get the speed, reliability, and features you need, like snapshots, cloning, thin provisioning, and massive scalability, without the vendor lock-in, all on commodity hardware that can be expanded with RAM and solid state drives (SSDs) to accelerate throughput and IOPS performance.

At OSNEXUS, we integrated Gluster into the platform in 2013 and today we’re focused on deep integration of the Ceph file-system so that a broader audience can setup, manage, and maintain their virtualized environments with ease. We’ll be rolling out the first version of QuantaStor (v3.14) with integrated Ceph management in November.

QuantaStor is a scale-out SDS solution that installs on bare metal server hardware or as a VM so that you don’t have to deal with the complexities typically associated with deploying, installing, and managing scale-out storage. For more information on how to get a copy of QuantaStor Trial Edition or QuantaStor Community Edition click here.

Ceph on the QuantaStor SDS Platform 


QuantaStor 3.13 now available featuring enhanced encryption management and one-step GlusterFS peering

The team at OSNEXUS has been hard at work this summer on the latest release of QuantaStor, and today I’m happy to announce that QuantaStor 3.13 is now generally available with new encryption features, one-step GlusterFS peering, and the latest maintenance releases of ZFS on Linux (v0.6.3) and GlusterFS (v3.5.2).

Security has always been an important focus for QuantaStor and now we’ve made it even easier to administer and manage encryption at both the Linux OS level with LUKS and at the physical drive level through the QuantaStor command line interface (CLI).

At the software level, QuantaStor now uses the LUKS (Linux Unified Key Setup) system for key management and comes with tools to greatly simplify the configuration and setup of encryption.

The QuantaStor qs-util CLI utility comes with a series of additional commands to encrypt disk devices including cryptformat, cryptopen, cryptclose, cryptdestroy, and cryptswap. There’s also a devicemap command which will scan for and display a list of devices available on the system. You can read more about setting up LUKS software storage encryption management here on the Wiki.
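
A rough illustration of the flow is below; the command names come from this release, but the argument forms shown are assumptions, so check the Wiki page above for the exact syntax.

    qs-util devicemap                # list the disk devices visible to the appliance
    qs-util cryptformat /dev/sdb     # initialize LUKS encryption on a device (device name is hypothetical)
    qs-util cryptopen /dev/sdb       # unlock the encrypted device for use
    qs-util cryptclose /dev/sdb      # lock it again when it is no longer needed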

QuantaStor 3.13 showing encrypted drives

From a hardware encryption perspective, QuantaStor now allows you to administer and manage an encrypted RAID controller directly through the qs CLI. There are three CLI commands for setting up hardware encryption using the ‘qs’ command line utility. They are ‘hw-unit-encrypt’, ‘hw-controller-create-security-key’, and ‘hw-controller-change-security-key.’ Read more about configuring QuantaStor drive encryption here.
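
A hedged sketch of the sequence is below; the command names are from this release, while the placeholder arguments in angle brackets are assumptions rather than documented syntax, so consult the linked guide for the real arguments.

    qs hw-controller-create-security-key <controller-id> <security-key>   # set the controller's security key
    qs hw-unit-encrypt <unit-id>                                          # encrypt a RAID unit using that key
    qs hw-controller-change-security-key <controller-id> <new-key>        # rotate the key later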

Gluster Peer Setup

Setting up QuantaStor appliances into a grid allows them to intercommunicate, but it doesn’t automatically set up the GlusterFS peer relationships between the appliances. For that we’ve created the new one-step ‘Peer Setup’ dialog in the Web Interface, which lets you select the IP address on each node that Gluster should use for communication between the nodes.

The benefit of using Peer Setup in QuantaStor is that the configuration is kept in sync across the nodes and the nodes can resolve each other’s names even if DNS server access is down. Read more on automated Peer Setup here.
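
Under the hood, Gluster peering boils down to the standard probe and status commands; Peer Setup issues the equivalent operations for you. A sketch with hypothetical hostnames:

    gluster peer probe qs-node2
    gluster peer probe qs-node3
    gluster peer status      # each peer should report "Peer in Cluster (Connected)"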

Gluster Automated Peer Setup
