Enabling Multi-tenant SDS with QoS Controls

Multi-tenant environments work efficiently because computing and storage resources are pooled and shared across a large number of users or “tenants.” In an enterprise deployment these tenants might come from different departments, or from groups within a department such as engineering or marketing. In a managed hosting environment the tenants will be separate companies or customers.

On the compute side, this is fairly simple: each tenant’s virtual machine is bounded by the number of virtual cores and the amount of memory assigned to it. On the storage side, the situation is more complex.

Traditional storage systems have only one speed: deliver every storage read/write request as fast as possible. This works great when the appliance is dedicated to a particular task or customer, but it falls down in multi-tenant environments because a single user can consume all of the available bandwidth or IOPS, negatively impacting the performance of every other user of the storage appliance. In some cases the performance drop can be dramatic, to the point of making the storage appliance unusable for everyone else.

In all of these scenarios, the key technology needed to make the resources sharable is Quality of Service, or “QoS,” controls. When QuantaStor appliances are deployed in a multi-tenant environment, it is important to be able to limit the maximum read and write bandwidth allowed for specific storage volumes so that no single user or application can unfairly consume an excessive share of the appliance’s available bandwidth. These QoS controls cap the maximum read and write throughput per volume, ensuring reliable and predictable performance for all applications and users of a given appliance.

QuantaStor 3.16 now has QoS controls built in, making it an ideal SDS platform for hosting and enterprise environments that require QoS guarantees (Figure 1).

Figure 1

QoS controls can only be applied to Storage Volumes (not Network Shares), and the volumes must reside in a ZFS- or Ceph-based Storage Pool. Once QoS controls are applied to a given Storage Volume, the settings are visible in the main center table (Figure 2).


Figure 2

QoS Policies

QoS policies allow the QoS settings to be changed across an entire appliance grid with a single command. In some cases, storage admins may want to assign Storage Volumes to a QoS tier by policy; this makes it quick to adjust the QoS for all Storage Volumes that use a given policy. To create a QoS policy using the QuantaStor CLI, run the following command:

qs qos-policy-create high-performance --bw-read=300MB --bw-write=300MB
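Additional tiers can be defined the same way. As a sketch, a lower-bandwidth tier for less critical workloads might look like the following; the policy name and limits here are illustrative, not prescribed by QuantaStor:

```shell
# Sketch: a second, lower-bandwidth QoS tier for less critical workloads.
# The policy name "standard" and the 100MB limits are illustrative choices.
qs qos-policy-create standard --bw-read=100MB --bw-write=100MB
```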

Policies can also be modified at any time with the changes being immediately applied to all Storage Volumes in the grid associated with the policy.

qs qos-policy-modify high-performance --bw-read=400MB --bw-write=400MB

With this command, performance limits can be changed dynamically via a script or cron job, raising or lowering the maximums during hours of the day when storage IO loads are expected to be lighter or heavier.
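Such a schedule could be sketched with a pair of cron entries like the following; the times, bandwidth limits, and policy name are illustrative assumptions, with the command form following the policy-modification example above:

```shell
# /etc/crontab sketch: raise QoS caps overnight for batch IO, restore
# daytime caps in the morning. Times, limits, and the "high-performance"
# policy name are illustrative.
0 20 * * * root qs qos-policy-modify high-performance --bw-read=600MB --bw-write=600MB
0 7  * * * root qs qos-policy-modify high-performance --bw-read=300MB --bw-write=300MB
```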

Please note that if you set volume-specific (non-policy) QoS settings on a Storage Volume, they will override and remove any QoS policy association for that volume. The reverse is also true: if a Storage Volume has specific QoS settings (e.g. 200MB/sec reads, 100MB/sec writes) and you then apply a QoS policy to the volume, the limits set in the policy will override the volume-specific settings.



Categories: QuantaStor 3.16, Software Defined Storage
