The latest maintenance release of QuantaStor SDS (v3.14) was published on December 30th, 2014 and comes with several new features. Some highlights include:
- Cascading replication of volumes and shares, which lets you replicate data in an unlimited chain from one appliance to the next (A->B->C->...; see the sketch after this list).
- Kernel upgrade to Linux 3.13, which adds support for the latest 12Gb SAS/SATA HBAs and RAID controllers as well as the latest 40GbE network interface cards.
- Advanced universal hot-spare management for the ZFS-based storage pool type that is enclosure-aware and makes hot-spares universally shared within an appliance and across multiple appliances.
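To make the chain-link replication idea concrete, here is a minimal conceptual sketch in Python. It is not QuantaStor's replication engine or API; the Appliance class and its receive_delta() method are hypothetical and exist only to show how a delta applied on appliance A cascades down the chain to B, C, and D.

```python
# Minimal conceptual sketch of cascading (chain-linked) replication.
# NOT QuantaStor's API: Appliance and receive_delta() are hypothetical
# and exist only to illustrate the A->B->C->D flow.

class Appliance:
    def __init__(self, name, downstream=None):
        self.name = name              # appliance identifier, e.g. "A"
        self.downstream = downstream  # next appliance in the chain, or None
        self.deltas = []              # deltas this appliance has received

    def receive_delta(self, delta):
        """Apply a replicated delta locally, then forward it downstream."""
        self.deltas.append(delta)
        print(f"{self.name}: applied {delta}")
        if self.downstream is not None:
            self.downstream.receive_delta(delta)

# Build the chain A -> B -> C -> D and replicate a delta from the source.
d = Appliance("D")
c = Appliance("C", downstream=d)
b = Appliance("B", downstream=c)
a = Appliance("A", downstream=b)
a.receive_delta("volume1@delta-001")   # cascades A -> B -> C -> D
```

In this model each appliance only needs to know its immediate downstream peer, which is what allows the chain to be extended indefinitely.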
This is also the first release with initial Ceph support, but at this time we're only working with partners on the new Ceph capabilities via a pilot program. For more information about the pilot program please contact us here; note that general availability (GA) of Ceph support is planned for late Q1 2015.
Below is the full list of changes. Linux kernel update instructions can be found on the OSNEXUS support site.
Change Log:
- ISO DVD image: osn_quantastor_v3.14.0.6993.iso
- MD5 Hash: osn_quantastor_v3.14.0.6993.md5 (a checksum verification sketch follows the change log below)
- adds Linux 3.13 kernel and SCST driver stack upgrade
- adds support for Micron PCIe SSD cards
- adds universal hot-spare management system for ZFS based pools
- adds support for FC session management and session iostats collection
- adds disk search/filtering to Storage Pool Create/Grow dialogs in web interface
- adds configurable replication schedule start offset to replication schedule create/modify dialogs
- adds support for cascading replication schedules so that you can replicate volumes across appliances A->B->C->D->etc
- adds wiki documentation for CopperEgg
- adds significantly more stats/instruments to Librato Metrics integration
- adds dual mode FC support where FC ports can now be in Target+Initiator mode
- adds support for management API connection session management to CLI and REST API interfaces
- adds storage volume instant rollback dialog to web management interface
- adds sysstats to send logs report
- adds swap device utilization monitoring and alerting on high swap utilization
- adds support for unlimited users / removes user count limit license checks for all license editions
- adds support for scale-out block storage via Ceph FS/RBDs (pilot program only)
- fix for CLI host-modify command
- fix for pool discovery reverting IO profile selection back to default at pool start
- fix for web interface to hide ‘Delete Unit’ for units used for system/boot
- fix for alert threshold slider setting in web interface ‘Alert Manager’ dialog
- fix to accelerate pool start/stop operations for FC based systems
- fix to disk/pool correlation logic
- fix to allow IO profiles to have spaces and other special characters in the profile name
- fix to FC ACL removal
- fix to storage system link setup to use management network IPs
- fix to the 'Remove Replication Association' dialog to greatly simplify it
- fix to CLI disk and pool operations to allow referencing disks by short names
- fix for replication schedule create to fix up and validate storage system links
- fix for replication schedule delta snapshot cleanup logic to ensure that the last delta between the source and target is not removed
- fix for stop replication to support terminating zfs based replication jobs
- fix for pool freespace detection and alert management
- fix license checks to support summing volume, snapshot, and cloud limits across all grid nodes
- fix to create gluster volume to use round-robin brick allocation across grid nodes/appliances to ensure brick pairs do not land on the same node
- fix to storage volume snapshot space utilization calculation
- fix to iSCSI close session logic for when multiple sessions are created between the same pair of target/initiator IP addresses
- fix to auto update user specific CHAP settings across all grid nodes when modified
- fix to allow udev more time to generate block device links, resolves issue exposed during high load with replication
- fix to IO fencing logic to reduce load and make it work better with udev
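To verify the downloaded ISO against the published MD5 hash listed above, a quick check along the following lines can be used. This is a minimal sketch that assumes the .md5 file uses the common md5sum format of '<hex digest>  <filename>'; the file names are taken from the change log.

```python
# Verify the v3.14 ISO against its published MD5 checksum.
# Assumes the .md5 file uses the common md5sum format: "<hex digest>  <filename>".
import hashlib

iso_path = "osn_quantastor_v3.14.0.6993.iso"
md5_path = "osn_quantastor_v3.14.0.6993.md5"

# Read the expected digest (first whitespace-separated token in the .md5 file).
with open(md5_path) as f:
    expected = f.read().split()[0].lower()

# Hash the ISO in chunks to avoid loading the whole image into memory.
digest = hashlib.md5()
with open(iso_path, "rb") as f:
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

if digest.hexdigest() == expected:
    print("MD5 verification passed")
else:
    print("MD5 verification FAILED - re-download the image")
```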