
Creating iSCSI Datastores in VMware with QuantaStor SDS

Server and Desktop virtualization with VMware is one of the most popular storage use cases for QuantaStor SDS. QuantaStor, certified for use with VMware ESXi, makes it easy to access iSCSI targets using the software iSCSI adapter built into VMware. The great thing about using iSCSI is that it’s fast, supports multipathing for increased reliability, and delivers solid performance without the specialized hardware that Fibre Channel or InfiniBand require.

In this article we’ll cover the basics of configuring VMware so that you can start provisioning VMs on a QuantaStor-backed VMware Datastore using iSCSI storage.

Creating the VMware iSCSI Software Adapter

Creating a Datastore using QuantaStor and VMware ESXi is a multi-step process that requires a vSphere client as well as a QuantaStor appliance. First, you need to add your iSCSI adapter to the VMware host by clicking on the “Configuration” tab in your vSphere client. Next click on “Storage Adapters” in the “Hardware” left window pane, click on “Add” and then select “Add Software iSCSI Adapter.” (Figure 1)

Figure 1

After you’ve added the iSCSI Software Adapter, bring up the iSCSI Initiator Properties dialog box by right clicking on the highlighted iSCSI adapter. Find and copy the IQN. (Figure 2)

Figure 2
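
If you prefer the command line, the same two steps can be done from the ESXi shell with esxcli. A minimal sketch, assuming SSH access to the host; vmhba33 is only an example adapter name, so substitute whatever name your host assigns:

# Enable the software iSCSI adapter (equivalent to "Add Software iSCSI Adapter")
esxcli iscsi software set --enabled=true

# List iSCSI adapters to find the adapter name and copy its IQN
esxcli iscsi adapter list
esxcli iscsi adapter get --adapter=vmhba33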

Assigning Storage Volumes from QuantaStor to VMware Servers

Next, add the VMware host to your QuantaStor appliance by going to “Hosts” and then adding a new host. In the “Add Host” dialog box, paste the IQN from the VMware host next to “iSCSI Qualified Name (IQN)” under “Initiator” and click “OK.” (Figure 3)

Figure 3

Now that you’ve added the VMware host to the QuantaStor appliance, you can assign a storage volume to it by clicking on “Assign” and then selecting a Volume. Note that QuantaStor automatically adds a VMware “eui” number to each Volume so that it is recognized by the VMware host.  (Figure 4)

Figure 4

Verifying Access to the Assigned Storage

The next step is to map the VMware host to the target iSCSI server so that the QuantaStor Volumes can be seen in the iSCSI Software Adapter Details section as in Figure 5.

Figure 5

Once again, right-click the highlighted iSCSI adapter or click on the “Properties” link in the bottom right of the vSphere client to bring up the iSCSI Initiator Properties dialog box. Click “Dynamic Discovery” and then add the IP address of the QuantaStor appliance as the iSCSI Server in the “Add Send Target Server” dialog box. (Figure 6) Click “Rescan All” and in the details section you should now see the QuantaStor storage volumes and their associated eui identifiers, as seen in Figure 6.

Figure 6
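
The discovery and rescan steps can also be scripted from the ESXi shell. A hedged sketch, where vmhba33 and 10.0.0.50 are placeholders for your software iSCSI adapter name and the QuantaStor appliance’s IP address:

# Add the QuantaStor appliance as a Send Targets (dynamic discovery) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.50:3260

# Rescan the adapter and confirm the eui.* devices are visible
esxcli storage core adapter rescan --adapter=vmhba33
esxcli storage core device list | grep -i eui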

Creating a VMware Datastore

After setting up the iSCSI adapters and associating the QuantaStor appliance target and volume, you can now create a Datastore on the QuantaStor iSCSI Disk. Datastores serve as repositories for virtual machines and can be provisioned to users and groups. To configure a Datastore, click on “Storage” in the left window pane and then “Add Storage.” (Figure 7)

Figure 7

Select “Disk/LUN,” which in the ESXi context refers to an iSCSI volume presented to your host from a storage target and available for formatting, and click “Next.” (Figure 8)

Figure 8

Select “VMFS-5” and click “Next.” (Figure 9)

Figure 9

Check your Current Disk Layout and click “Next.” (Figure 10)

Figure 10

 Enter a Datastore name and click “Next.” (Figure 11)

Figure 11

Select the desired amount of storage capacity and click “Next.” (Figure 12)

Figure 12

Click “Finish.” (Figure 13)

Figure 13

You will now see your Datastore listed in the Datastore window. (Figure 14)

Figure 14
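
To double-check the result from the ESXi shell, you can list the mounted VMFS filesystems and inspect the new Datastore’s backing device. The Datastore name below (QuantaStor-DS1) is just an example:

# The new VMFS-5 Datastore should appear in the filesystem list
esxcli storage filesystem list

# Show the VMFS version and backing eui.* device for the Datastore
vmkfstools -P /vmfs/volumes/QuantaStor-DS1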

Conclusion

Now you can begin provisioning storage from your new Datastore(s) for existing and new VMs. For more information about managing vSphere storage for ESXi click here.


Assigning Network Share Ownership Using Active Directory

Creating Network Attached Storage (NAS) shares with Active Directory using QuantaStor has always been an easy process. With the latest QuantaStor release we’ve made Active Directory integration even easier by adding new security policies that let storage administrators assign owners to specific group shares with just a few mouse clicks.

Microsoft’s Active Directory (AD), included in most Windows Server operating systems, is a directory service for Windows domain networks that acts as an information hub for the operating system.

Active Directory brokers relationships between distributed resources such as user account information, email address books, firewalls or network devices. One example would be an Active Directory domain controller authenticating and authorizing all users and computers in a Windows domain network by assigning and enforcing security policies for computers and installing or updating software.

Joining an Active Directory Domain

With QuantaStor, storage appliances can be joined to Active Directory (AD) domains so that CIFS Network Share access is granted to specific AD users and AD groups.

To join a domain, go to the “Network Shares” section and select “Configure CIFS” in the top ribbon bar, or right-click in the “Network Shares” space and select “Configure CIFS Services” from the context menu (Figure 1). Check the box to enable Active Directory and provide the necessary information. If you have any problems joining the domain, verify that you can ping the IP address of the domain controller and that you are also able to ping the domain itself.

Figure 1
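
If the join fails, the quickest sanity checks are the ones mentioned above. A minimal sketch from the appliance’s shell, where the IP address and the dc1.example.local / example.local names are placeholders for your own domain controller and domain:

# Can the appliance reach the domain controller by IP and by name?
ping -c 3 192.168.1.5
ping -c 3 dc1.example.local

# Can it resolve and reach the domain itself (requires working AD DNS)?
ping -c 3 example.local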

Managing CIFS Owner Access

CIFS access can be controlled on a per user basis with the ability to assign Owner access to specific shares from within the Modify Network Share window. (Figure 2)

Figure 2

You can also select the different users/groups that are present within a domain. This works the same way as with QuantaStor users, except that you select “AD Users” or “AD Groups.” You can set the access to “Valid User,” “Admin User,” or “Invalid User.” (Figure 3)

Figure 3

Finally, you can set permissions at the file level by clicking on the “File Permissions” tab. (Figure 4)

Figure 4

Leaving an Active Directory Domain

To leave a domain, open the “Network Shares” section and select “Configure CIFS” in the top ribbon bar, or right-click in the “Network Shares” space and select “Configure CIFS Services” from the context menu. Unselect the checkbox to disable Active Directory integration. If you would like to remove the computer from the domain controller you must also specify the domain administrator and password. After clicking “OK” QuantaStor will then leave the domain. (Figure 5)

Figure 5


Configuring SNMP Alert Notifications for QuantaStor Appliances

QuantaStor has always allowed for remote system monitoring using SNMP and now with the release of QuantaStor v3.15 we’ve enhanced the platform’s alerting and notification capabilities as well as security and privacy by including SNMPv3.

For those unfamiliar with Simple Network Management Protocol (SNMP), it’s an Internet-standard protocol for monitoring network-attached devices that need administrative attention. SNMP supports devices such as routers, switches, servers, storage, workstations, printers, and modem racks. The protocol provides an easy way to collect and organize information about managed devices and lets administrators make modifications.

By default the QuantaStor SNMP agent is turned off but you can enable it from your Linux terminal with two commands:

sudo qs-util snmpenable
sudo qs-util snmpactivate

You’ll also need to install the SNMP package that contains the snmpwalk and snmpget utilities for testing the agent.

sudo apt-get install snmp
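
Once the agent is enabled you can confirm it responds from the same terminal. A hedged example using SNMPv2c, with ‘public’ as a placeholder community string (use whatever community your agent is configured with):

# Walk the standard system subtree on the local agent
snmpwalk -v 2c -c public localhost 1.3.6.1.2.1.1

# Fetch a single value (sysDescr.0)
snmpget -v 2c -c public localhost 1.3.6.1.2.1.1.1.0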

For more information on configuring and testing the SNMP Agent visit the OSNEXUS Wiki here. More on SNMP Utility Commands can be found here.

Configuring SNMPv2 Traps

In the standard SNMP model the management client always actively requests information from the agent. SNMPv2, also included in QuantaStor v3.15, additionally allows the use of so-called “traps”: data packages that the SNMP agent sends to the management server without being explicitly requested, so that significant events on a QuantaStor appliance can be reported asynchronously as they occur.

SNMP Trap Configuration

Destination addressing for traps is determined in an application-specific manner typically through trap configuration variables in the management information base or “MIB,” the database used for managing the entities in QuantaStor. The QuantaStor MIB can be downloaded here.

By default the SNMP agent pushes out traps every 120 seconds. QuantaStor only raises traps for Alert objects, so anything you see in the QuantaStor web interface Alert Status Bar or in the ‘qs alert-list’ will be sent out as traps. You can find more information about configuring QuantaStor SNMP Trap settings here.

Monitoring Traps with OpenNMS

With QuantaStor there are two options for monitoring SNMP events and alerts. The first is through a Linux terminal and the other is to use a network management application that supports SNMP. OpenNMS is an example of a free network management platform that can be configured and used with QuantaStor as an SNMP trap receiver.
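
For the Linux terminal option, the net-snmp snmptrapd daemon makes a simple trap receiver while you verify that the appliance is sending traps. A rough sketch, assuming snmptrapd is installed on the receiving host, UDP port 162 is open, and ‘public’ matches the trap community configured on the appliance:

# /etc/snmp/snmptrapd.conf - accept SNMPv2c traps for the given community
authCommunity log,execute,net public

# Run snmptrapd in the foreground, logging received traps to stdout
sudo snmptrapd -f -Lo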

OpenNMS Monitoring QuantaStor SNMP Traps

 SNMPv3 and Security

SNMPv3, also included in QuantaStor v3.15, focuses on security and administration. It offers strong authentication and data encryption for privacy, along with support for notification originators and proxy forwarders. The protocol includes three important security features:

  • Confidentiality – Packet encryption to prevent traffic decoding
  • Integrity – Ensures that packets have not been tampered with while in transit and includes an optional packet replay protection mechanism
  • Authentication – Verifies that messages are from a trusted source

While security requirements vary between organizations, care should be taken in common environments such as mixed-tenant datacenters, server hosting and colocation facilities. The following article outlines the relative security strengths and weaknesses of SNMPv1/v2/v3.
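
With SNMPv3 enabled, queries carry a user name plus authentication and privacy passphrases instead of a plaintext community string. A hedged example; the user name, passphrases, algorithms and hostname below are placeholders and must match however the agent is actually configured:

# authPriv = authenticate the request AND encrypt the payload
snmpwalk -v 3 -l authPriv -u monitor -a SHA -A 'auth-passphrase' -x AES -X 'priv-passphrase' quantastor-host 1.3.6.1.2.1.1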


Enabling Strong Storage Security with TLS

Maintaining secure systems is not just an ongoing task but an increasingly important focus and practice for all IT organizations. The key for software vendors is providing timely updates that ensure strong security enforcement and keep up with the constantly changing threat landscape.

With the recent activity around SSL and news about weak ciphers we went through an audit of our OpenSSL use and QuantaStor’s default security configuration. As a result, QuantaStor 3.15 now enforces strong security on the front-end web interface with TLS 1.2, and TLS 1.0 for back-end protocol communication.

Existing customers will automatically get these new security upgrades with zero downtime by upgrading to the new release. QuantaStor 3.15 also introduces enhancements allowing for customization of security keys, certificates and certificate authority configuration files so that appliances can be customized to meet company specific security compliance policies.

What Happened to SSL?

As you may have read, the SSL 3.0 protocol has been replaced by the Transport Layer Security (TLS) protocol and is considered outdated or “dead” due to its security vulnerabilities. These include man-in-the-middle attacks using “POODLE,” or “Padding Oracle On Downgraded Legacy Encryption,” where an attacker can gain access to passwords, cookies and other authentication tokens passed within the encrypted web session.

According to the U.S. Computer Emergency Readiness Team, “the POODLE attack can be used against any system or application that supports SSL 3.0 with CBC mode ciphers. This affects most current browsers and websites, but also includes any software that either references a vulnerable SSL/TLS library (e.g. OpenSSL) or implements the SSL/TLS protocol suite itself.”

Even before POODLE was released, the U.S. Department of Commerce mandated in a NIST publication that SSL 3.0 not be used for sensitive government communications or for HIPAA (Health Insurance Portability and Accountability Act) compliant communications.

The key takeaway here is that you should no longer be using SSL 3.0, only TLS 1.0 and newer. Furthermore, there are specific ciphers that are to be avoided with TLS 1.x so it’s important that all servers and systems in your environment are using strong protocols with strong ciphers.

TLS and SSL Differences

TLS (Transport Layer Security) and SSL (Secure Sockets Layer) both provide data encryption and authentication between applications and servers across insecure networks. Although the terms SSL and TLS may be used interchangeably or together as in “TLS/SSL,” SSL 3.0 served as the basis for TLS 1.0.

Beyond SSL 3.0 are TLS 1.0, TLS 1.1 and TLS 1.2, with 1.2 offering the highest level of data protection. All TLS protocols are designed to prevent eavesdropping, tampering, or message forgery with the primary goal of providing privacy and data integrity between two communicating applications.

The security protocol has two layers: the TLS Record Protocol and the TLS Handshake Protocol. The TLS Record Protocol sits at the lowest level, layered on top of a transport protocol such as TCP, and provides connection security with two main objectives: ensuring that the connection is private and that the connection is reliable.

QuantaStor 3.15 Protocols and Security

By upgrading to QuantaStor 3.15 you harden security at three network facing service points:

  • Core management service
  • Apache Tomcat web server for the web interface
  • REST API service

All of the network facing components only communicate using TLS 1.0 and newer. They also are configured to only use strong ciphers. The table below shows the three services, the network ports they’re exposed on, the protocol used and the ciphers allowed.

Service Port Protocol Default Cipher List
Core Management Service 5152 TLS 1.0 Link
REST API Service 8153 TLS 1.0 Link
Tomcat Web Server 443 TLS 1.2* Link

*TLS 1.2 is available on the Tomcat web server after an upgrade to Java 7 via a script command.

An overview of security administrator commands is available on the OSNEXUS Wiki here.
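
You can spot-check which protocol versions these services will negotiate using openssl s_client. A quick sketch against the web interface on port 443 (substitute 5152 or 8153 to test the other services); the hostname is a placeholder, and the -ssl3 option is only available if your local OpenSSL build still supports it:

# Should succeed: TLS 1.2 handshake against the Tomcat web interface
openssl s_client -connect quantastor-host:443 -tls1_2 < /dev/null

# Should be rejected: SSL 3.0 is disabled
openssl s_client -connect quantastor-host:443 -ssl3 < /dev/null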

TLS and HIPAA Compliance

Another benefit of QuantaStor 3.15 is the ability to achieve HIPAA compliance for your storage appliances. The U.S. Department of Commerce NIST 800-52 publication covers what is needed for strong TLS encryption for government use as well as security requirements for HIPAA compliance.

To achieve HIPAA compliance with QuantaStor you must take the following steps:

  • Upgrade to QuantaStor 3.15
  • Use ciphers from the approved U.S. Department of Commerce list

In short, there are three key guidelines to follow for a storage system to be HIPAA compliant:

  • SSL 3.0 cannot be used
  • TLS 1.0 and newer may be used
  • Only ciphers on the recommended list can be used

The allowed ciphers conforming to HIPAA compliance guidelines are:

DES-CBC3-SHA:AES128-SHA:AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-GCM-SHA256:AES128-SHA256:AES256-SHA256:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:DHE-DSS-DES-CBC3-SHA:DHE-DSS-AES128-SHA:DHE-DSS-AES256-SHA:DHE-DSS-AES128-SHA256:DHE-DSS-AES256-SHA256:DHE-DSS-AES128-GCM-SHA256:DHE-DSS-AES256-GCM-SHA384:DH-DSS-AES128-SHA:DH-DSS-AES256-SHA:DH-DSS-AES128-SHA256:DH-DSS-AES256-SHA256:DH-DSS-AES128-GCM-SHA256:DH-DSS-AES256-GCM-SHA384:ECDH-ECDSA-DES-CBC3-SHA:ECDH-ECDSA-AES128-SHA:ECDH-ECDSA-AES256-SHA:ECDH-ECDSA-AES128-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-GCM-SHA384
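
If you want to see exactly which ciphers a string like this expands to (key exchange, authentication, encryption and MAC for each entry), openssl can expand it locally. For example, taking just a few entries from the list above:

# Expand a cipher string into the individual ciphers it matches
openssl ciphers -v 'AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256'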


QuantaStor v3.15 Released with High Availability Site Clustering for High-Performance Block and File Storage Grids

We are pleased to announce the latest release of OSNEXUS QuantaStor 3.15, an update that constitutes a major milestone as it includes improvements for HA ZFS storage pools, and performance and scalability enhancements that will greatly speed up all deployments.

In addition to its many new features, QuantaStor 3.15 was designed as an easy-to-upgrade, non-disruptive release. For those with existing deployments, you’ll see major performance improvements with lightning-fast boot up times and pool scans. Below are just a few 3.15 highlights:

High Availability

  • Advanced clustering features now allow QuantaStor appliances within a grid to be grouped into what we call “Site Clusters.” Within a Site Cluster you can create and manage High-Availability ZFS Storage Pools, Grid Management VIF, and GlusterFS VIFs.
  • Active Cluster Monitoring that displays node and cluster status across sites to oversee and troubleshoot cluster-wide issues
  • Cluster Ring Management displays active status of heartbeat rings within a Site Cluster

Scalability & Performance

  • Grid scalability is 90 percent more efficient and scales up to 32 nodes per site
  • Service startup time and pool scans have been reduced by more than 80 percent
  • Web UI connection and sync speed is faster with various improvements

Security & TLS Enforcement

  • Strong security on the front-end web interface with TLS 1.2 and TLS 1.0 for back-end protocol communication
  • Enhanced alerting and notification capabilities with security and privacy by including SNMPv3
  • Active Directory integration with new security policies that let storage administrators assign owners to specific group shares

For the full list of changes and update instructions please visit our OSNEXUS support site. We’ll be adding more documentation on the many new QuantaStor 3.15 features soon so “stay tuned” for upcoming blog articles on the new HA ZFS Storage Pool features and details on the new security features.


Remote Replication and Disaster Recovery with Software Defined Storage

If you haven’t taken the time to review and identify which parts of your IT infrastructure are housing critical data, now is the time. You never know when you’re going to need it, and verifying that you have a plan in place to get things back up and running should be a top priority for every business.

Traditional storage systems required that you purchase “custom iron” with extra replication features that would allow your IT team to replicate VMs and databases between sites on a continuous basis throughout the day. With Software Defined Storage like QuantaStor, setting up remote replication policies (Disaster Recovery policies) has never been easier.

The screen shot below (Figure 1) shows how easy it is to set up a remote replication policy with just a few clicks once your appliances are deployed.

Figure 1

What is Remote Replication?

Remote Replication allows the copying of volumes and/or network shares from any QuantaStor storage appliance to any other appliance. It’s a great tool for migrating data between systems and is an ideal component of any Disaster Recovery (DR) strategy.

Remote Replication is done asynchronously, meaning that changes to volumes and network shares on one appliance are replicated to another on a schedule: as often as every hour with calendar-based scheduling or as often as every 15 minutes with timer-based scheduling. Once a given set of volumes and/or network shares has been replicated from one appliance to another, subsequent replication operations send only the changes (deltas) between appliances. The replication logic is also efficient in that it only replicates actual data written to your volumes/shares and not unused regions of your disk storage, which could be vast.

To ensure proper security, all data sent over the network is encrypted. Because only the deltas/changes are sent over the network, replication also works well over limited bandwidth networks. Enabling storage pool compression (available with the default ZFS based Storage Pools) further reduces network load by keeping the data compressed as it is sent over the wire.

Creating a Storage System Link

The first step in setting up Remote Replication is to form a grid of at least two QuantaStor storage appliances (link).  Grids provide a mechanism for managing all your appliances as a single unit across geographies. The QuantaStor grid communication technology connects appliances (nodes) together so that they can share information, coordinate activities such as Remote Replication, and simplify management operations.

After you create the grid you’ll need to set up a Storage System Link between the two or more nodes between which you want to replicate data (volumes and/or shares). The Storage System Link represents a security key exchange between the two nodes so that they can send data to each other using low-level replication mechanisms that work at the storage pool level.

Creation of the Storage System Link is done through the QuantaStor Manager web interface by selecting the ‘Remote Replication’ tab, and then clicking the ‘Create Storage System Link’ button in the tool bar to bring up the dialog box. (Figure 2)

Figure 2

Select the IP address on each system to use for Remote Replication network traffic. If both systems are on the same network then you can simply select one of the IP addresses from one of the local ports, but if the remote system is in the cloud or at a remote location then you will most likely need to specify the external IP address of your QuantaStor system. Note that the two systems communicate over ports 22 and 5151, so you will need to open these ports in your firewall in order for the QuantaStor systems to communicate properly.
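
Before creating the link it’s worth confirming that those two ports are reachable from each appliance. A minimal connectivity check, with 203.0.113.20 standing in for the remote appliance’s IP address:

# Verify that ports 22 and 5151 are reachable on the remote appliance
nc -zv 203.0.113.20 22
nc -zv 203.0.113.20 5151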

Creating a Remote Replica

Once you have a Storage System Link created between at least two systems, you can now replicate volumes and network shares in either direction. Simply log in to the system that you want to replicate volumes from, right click on the volume to be replicated, then choose “Create Remote Replica.”

Creating a remote replica is much like creating a local clone, except that the data is copied to a storage pool on a remote storage system. As such, when you create a Remote Replica you must specify which storage system you want to replicate to (only systems with established and online Storage System Links will be displayed) and which storage pool within that system should be used to hold the Remote Replica. If you have already replicated the specified volume to the remote storage system then you can re-sync the remote volume by selecting the Remote Replica association in the web interface and choosing ‘resync’. This can also be done via the ‘Create Remote Replica’ dialog by choosing the option to replicate to an existing target, if available.

Creating a Remote Replication Schedule (Replication Policy)

Remote replication schedules provide a mechanism for replicating the changes to your volumes to a matching checkpoint volume on a remote appliance automatically on a timer or a fixed schedule. This is also commonly referred to as a DR or Disaster Recovery replication policy, but you can use replication schedules for a whole host of use cases.

To create a schedule navigate to the Remote Replication Schedules section after selecting the Remote Replication tab at the top of the screen. Right-click on the section header and choose “Create Replication Schedule.” (Figure 3)

Figure 3

Besides selecting the volumes and/or shares to be replicated, you must select the number of snapshot checkpoints to be maintained on the local and remote systems. You can use these snapshots for off-host backup and other data recovery purposes as well, so there is no need for a separate Snapshot Schedule, which would be redundant with the snapshots created by your replication schedule.

If you choose five replicas then up to five snapshot checkpoints will be retained. If, for example, you were replicating nightly at 1 am each day of the week from Monday to Friday then you will have a week’s worth of snapshots as data recovery points. If you are replicating four times each day and need a week of snapshots then you would need 5×4 or a maximum replicas setting of 20.

Summary

With the myriad of server hardware you can deploy QuantaStor SDS onto and the ease with which you can set up a DR / remote replication schedule, you now have the tools to get a business continuity plan (BCP) in place in a reliable and economical way. If you’re behind on your BCP, set micro goals to start chipping away at it and you’ll have the peace of mind you and your company need in no time.


Managing Scale-out NAS File Storage with GlusterFS Volumes

QuantaStor provides scale-out NAS capabilities using the traditional CIFS/SMB and NFS protocols as well as the GlusterFS client protocol for scale-out deployments. For those not familiar with GlusterFS, it’s a scale-out filesystem that ties multiple underlying file systems together across appliances to present them in aggregate as a single filesystem, or “single namespace” as it’s often called.

In QuantaStor appliances, Gluster is layered on top of the QuantaStor Storage Pool architecture enabling the use of QuantaStor appliances for file, block, and scale-out file storage needs all at the same time.

Scale-out NAS using GlusterFS technology is great for unstructured data, archive, and many media use cases. However, due to the current architecture of GlusterFS, it’s not as good for high IOPS workloads such as databases and virtual machines. For those applications, you’ll want to provision block devices (Storage Volumes) or file shares (Network Shares) in the QuantaStor appliances which can deliver the necessary write IOPS performance needed for transactional workloads and VMs.

GlusterFS read/write performance via CIFS/NFS is moderate and can be improved with SSDs or hardware RAID controllers with one or more gigabytes of read/write cache as NVRAM. For deployments accessing scale-out NAS storage from Linux, it is ideal to use the native GlusterFS client as it will enable performance and bandwidth to increase as you scale out your QuantaStor grid. For Windows, OS X and other operating systems you’ll need to use the traditional CIFS/NFS protocols.

QuantaStor Grid Setup

To provision the scale-out NAS shares on QuantaStor appliances, the first step is to create a management Grid by right-clicking on the Storage System icon in the tree stack view in the Web Management User Interface (WUI) and choose “Create Grid.” (Figure 1)

Figure 1

After you create the grid you’ll need to add appliances to the grid by right-clicking on the Grid icon and choosing ‘Add Grid Node.’ (Figure 2) Input the node IP address and password for the additional appliances.

Figure 2

After the nodes are added you’ll be able to manage them from the QuantaStor tree menu (Figure 3). User accounts across the appliances will be merged with the elected primary/master node in the grid taking precedence.

Figure 3

Network Setup Procedure

If you plan to use the native GlusterFS client with Linux servers connected directly to QuantaStor nodes then you should set up network bonding to bind multiple network ports on each appliance for additional bandwidth and automatic fail-over.

If you plan to use CIFS/NFS as the primary protocols then you could use either bonding or separate ports into a front-end network for client access and a back-end network for inter-node communication.

Peer Setup

QuantaStor appliance grids allow intercommunication but do not automatically set up GlusterFS peer relationships between appliances. For that you’ll want to select ‘Peer Setup’ and select the IP address on each node to be used for GlusterFS intercommunication. (Figure 4)

Figure 4

Peer Setup creates entries in the “hosts” file (/etc/hosts) on each appliance so that each node can refer to the other grid nodes by name; name resolution can also be done via DNS. Using the hosts file ensures that the configuration is kept in sync across nodes and allows the nodes to resolve names even if DNS server access is down.

Gluster volumes span appliances, and on each appliance a brick is placed. These Gluster bricks are referenced with a brick path that looks much like a URL. By setting up the IP-to-hostname mappings, QuantaStor is able to create brick paths using hostnames rather than IP addresses, making it easier to change the IP address of a node later.

Finally, in the Peer Setup dialog, there’s a check box to set up the Gluster Peer relationships. The ‘Gluster peer probe’ command links the nodes together so that Gluster volumes can be created across the appliances. Once the peers are attached, you’ll see them appear in the Gluster Peers section of the WUI and you can then begin to provision Gluster Volumes. Alternatively you can add the peers one at a time using the Peer Attach dialog. (Figure 5)

Figure 5
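
Under the hood, Peer Setup runs the standard GlusterFS peer commands on your behalf. For reference, the equivalent operations from a shell on one of the appliances look roughly like this, with qs-node2 and qs-node3 as placeholder hostnames taken from the /etc/hosts mappings Peer Setup creates:

# Link the appliances together into a Gluster trusted pool
gluster peer probe qs-node2
gluster peer probe qs-node3

# Confirm every peer shows up as connected
gluster peer status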

Provisioning Gluster Volumes

Gluster Volumes are provisioned from the ‘Gluster Management’ tab in the web user interface. To make a new Gluster Volume simply right-click on the Gluster Volumes section or choose Create Gluster Volume from the tool bar. (Figure 6)

To make a Gluster Volume highly available with two copies of each file, choose a replica count of two (2). If you only need fault tolerance in case of a disk failure then that is supplied by the storage pools and you can use a replica count of one (1). With a replica count of two (2) you have full read/write access to your scale-out Network Share even if one of the appliances is turned off. With a replica count of one (1) you will lose read access to some of your data if one of the appliances is turned off. When the appliance is turned back on it will automatically synchronize with the other nodes to bring itself up to the proper current state via auto-healing.

Figure 6
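
For context, this is roughly what the web interface does at the GlusterFS level when you provision a volume. A hedged sketch of a two-replica volume spanning two appliances; the volume name, node names and brick paths are placeholders (QuantaStor derives the real brick paths from the Storage Pools you select):

# Create a volume with two copies of every file, one brick per appliance
gluster volume create scale-out-share replica 2 qs-node1:/export/pool1/brick1 qs-node2:/export/pool1/brick1
gluster volume start scale-out-share
gluster volume info scale-out-share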

Storage Pool Design Recommendations

When designing large QuantaStor grids using GlusterFS, there are several configurations that we make to ensure maximum reliability and maintainability.

First and foremost, it’s better to create multiple storage pools per appliance to allow for faster rebuilds and filesystem checks than it is to make one large storage pool.  

In the following diagram (Figure 7) we show four appliances with one large 128TB storage pool each. With large amounts of data, this seems like a slightly simpler configuration from a setup perspective. However, there’s a very high cost in the event a GlusterFS brick needs to do a full heal, given the large brick size (128TB). To put this in perspective, at 100MB/sec (the theoretical maximum throughput of a 1GbE port) it takes at minimum 2.8 hours to transfer one TB of data, or roughly 15 days to repair a 128TB brick.

Figure 7

The QuantaStor SDS appliance grid below (Figure 8) shows four storage pools per appliance, each segmented into 32TB. Each of the individual 32TB storage pools can use any RAID layout type, but we recommend hardware RAID5 or RAID50 as that provides low-cost disk fault tolerance within the storage pool as well as write caching, assuming the controller has CacheVault/MaxCache or another NVRAM-type technology. When accounting for a GlusterFS volume configured with two replicas (mirroring), the effective RAID type is RAID5+1 or RAID51.

Figure 8

In contrast, we don’t recommend using RAID0 (which has no fault tolerance) as there is a high cost associated with having GlusterFS repair large bricks. In this scenario a full 32TB would need to be copied from the associated mirror brick and that can take a long time as well as utilize considerable disk and network bandwidth. Be sure to use RAID50 or RAID5 for all storage pools so that the disk rebuild process is localized to the hardware controller and doesn’t impact the entire network and other layers of the system.

Smart Brick Layout

When you provision a scale-out network share (GlusterFS Volume) it is also important that the bricks are selected so that no brick mirror-pair has both bricks on the same appliance. This ensures that there’s no single point of failure in HA configurations. QuantaStor automatically ensures correct Gluster brick placement when you select pools and provision a new Gluster Volume by processing the pool list for the new scale-out Gluster Volume using a round-robin technique so that brick mirror-pairs never reside on the same appliance. (Figure 9)

Figure 9

Once provisioned, the scale-out storage is managed as a standard Network Share in the QuantaStor Grid, except that it can be accessed from any appliance. Finally, note the use of the highly-available virtual interface for NFS/CIFS access in the diagram below (Figure 10). This ensures that the IP address is always available even if the appliance actively serving that virtual interface goes offline. If you’re using the GlusterFS native client then you don’t need to set up an HA virtual network interface, as the GlusterFS native client communication mechanism is inherently highly available.

Figure 10
