Overview

Many organizations are re-evaluating which virtualization platform to standardize on as Broadcom makes changes to VMware’s licensing. We’ve seen many of them make Proxmox their platform of choice given its more flexible licensing options. In this article we’ll dig into how to use QuantaStor Scale-out Storage as the storage platform backing a Proxmox environment.

Before we get started, for those not familiar with all the technologies covered in this article, here’s a brief synopsis of each to get everyone up to speed.

Proxmox Virtual Environment (Proxmox VE) is…

an open-source server virtualization platform designed for enterprise use. It integrates Kernel-based Virtual Machine (KVM) for full virtualization and Linux Containers (LXC) for lightweight virtualization, allowing users to manage virtual machines and containers through a single, web-based interface. Proxmox VE supports features such as high availability clustering, software-defined storage, and networking, making it suitable for managing IT infrastructure efficiently. The platform is developed and maintained by Proxmox Server Solutions GmbH and is released under the GNU Affero General Public License.

QuantaStor is…

an enterprise-grade Software-Defined Storage platform that turns standard servers into multi-protocol, scale-out storage appliances delivering file, block, and object storage. OSNexus has made managing enterprise-grade, performant, Ceph-based scale-out storage extremely simple in QuantaStor; a cluster can be configured in a matter of minutes, and the differentiating Storage Grid technology makes ongoing storage management a breeze. For the purposes of this exercise we’ll use QuantaStor’s Scale-out File Storage for file-level Proxmox storage pools (NFS and CephFS) and Scale-out Block Storage for block-level Proxmox storage pools (RBD).

At a Glance

Proxmox uses storage in various ways. In the context of this article, we’ll use three different connection types to connect Proxmox to QuantaStor’s Scale-out File and Block Storage. Here’s a breakdown of the types as well as what Proxmox allows them to be used for:

  • NFS: All storage types
  • CephFS: ISO images, Container templates, Backup files
  • RBD: Disk images, Containers

So in a nutshell, NFS and RBD can be used as the disk storage that backs virtual machine and container disk volumes, while CephFS can only be used for media, templates and backup files – not for disk volumes for running systems.
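
To make that mapping concrete, here’s roughly how the three connection types will end up looking in /etc/pve/storage.cfg on the Proxmox node once everything later in this article is configured. This is a sketch rather than a copy from a live system – the storage IDs and addresses mirror what we’ll create later, the RBD pool name is a placeholder, and the content values correspond to the usage above (images = Disk image, rootdir = Container, iso = ISO image, vztmpl = Container template, backup = VZDump backup file):

nfs: qs-nfs
        path /mnt/pve/qs-nfs
        server 10.0.18.20
        export /pve-nfs
        content images,rootdir

cephfs: qs-cephfs1
        path /mnt/pve/qs-cephfs1
        monhost 10.0.18.21 10.0.18.22 10.0.18.23 10.0.18.24 10.0.18.25
        username proxmox-cephfs1
        fs-name qs-soos
        content backup,iso,vztmpl

rbd: qs-rbd
        monhost 10.0.18.21 10.0.18.22 10.0.18.23 10.0.18.24 10.0.18.25
        pool qs-<internal-id>
        username proxmox-rbd
        content images,rootdir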

Activity Components

For this Proof of Concept, I’m using virtual machines running on VMware vSphere with a 6-node QuantaStor Storage Grid with each QuantaStor VM containing:

  • vCPUs: 6
  • RAM: 8GB
  • OS: QuantaStor 6.5
  • Networking: Single 10GbE connection with a simple, combined front-end and back-end configuration
  • Storage:
    • 1 x 100GB SSD for QS OS install/boot
    • 2 x 100GB SSDs for journaling
    • 4 x 10GB SSDs for data

For Proxmox I’m using a bare-metal Dell PowerEdge R730xd server with the following network and storage resources:

  • Network:
    • 4-port, integrated 1GbE NIC
    • Mellanox Technologies MT27520 Family [ConnectX-3 Pro] – 100GbE
  • Storage:
    • 2 x 465.25 GB HDDs in RAID1 – boot
    • 3 x 837.75 GB HDDs – unused
    • 1 x 1.5 TB NVMe – unused

For this install, I’ll use the Mellanox interface and the 465.25 GB RAID1 volume.

QuantaStor Scale-Out File Storage Configuration

Note: Many of the images in this post aren’t easily readable at their inline size. Clicking on them will open a full-size image that is much more digestible.

ACTION: Login to the QuantaStor Web UI at this stage to prepare to do further configuration steps.

Note the Tasks pane at the bottom of the page. At the end of each step (after clicking OK) watch the Tasks pane to determine when that task is completed before moving to the next step. In some cases, the initial task will spawn additional tasks and move the initial task down the list. You may need to scroll down in the Tasks pane to determine when all the tasks for a step have been completed. Depending on the size of your window, the Tasks pane may be obscured by the Getting Started dialog or another dialog. In this case you can close the dialog and return to it once the tasks are completed.

Note: Only one Scale-out File Storage filesystem can be created per Scale-out Object Storage cluster.

ACTION: Click on the Getting Started button in the top-right of the main QS UI window to display the Getting Started dialog.

ACTION: Click Setup Scale-out File Pool in the left pane of the dialog. For the first part of this article we’ll be connecting to a QuantaStor Scale-out File Storage pool from Proxmox with their NFS and CephFS storage types.

(Optional) ACTION: If you’re starting from scratch, you’ll want to go through all of the steps in the Getting Started dialog. If you want some guidance when following them, you can refer to parts of two of my previous blog posts.

ACTION: To finalize the configuration we need for this article, click on the Create Share button. I’m naming the share “pve-nfs” and I’m unchecking the “Enable CIFS/SMB Access” box at the bottom of the panel. Then click OK.

ACTION: Click the Storage Management context tab, then select Network Shares from the navigation pane. Here you’ll see the new pve-nfs share we just created.

When mounting your Scale-out File Storage filesystem on your Proxmox node, the Proxmox UI defaults to creating sub-directories at the root (/) of the filesystem. If you want or need more control over where your Proxmox data lands within the filesystem, you’ll need to provision the storage pool at the command line of the Proxmox node. This allows you not only to choose a directory, but also to use Ceph Subvolume Groups, which offer additional storage-side management functionality that is beyond the scope of this article. I’ll demonstrate creating a Subvolume Group and using it at mount time, but won’t go any further than that.

In order to make our NFS configuration highly available and fault tolerant we’ll need to add a Virtual IP address that will move from one QuantaStor node to another should the node using it fail.

ACTION: Select the High-availability VIF Management Context Tab at the top of the UI.

ACTION: Click the Create Site Cluster button just below the High-availability VIF Management Context Tab. Select all of the nodes and click OK.

ACTION: In the Navigation Pane, click the Site Cluster Virtual Interfaces item, select the name of the cluster you just created, and click the Add Cluster VIF button under the Context Tabs. Leave Use Case Settings at Storage Pool (Scale-out), select your cluster, set the Config Type to File and click Next.

ACTION: On the Virtual Interface tab, set an available IP address to use, uncheck both iSCSI Portal and NVMeoF Portal, select the interface to use (most likely the interface that has an IP address listed on the same network as the available address you entered) and click OK.

ACTION: This activity will take some time to complete so wait until the Creating Site Cluster Virtual Network Interface task is complete in the Tasks Pane. Once complete, click the Storage Management Context Tab and you’ll see the new VIF listed in the Main Content Pane. Note that it can be on any of the nodes and should that node fail it will be moved to another node to achieve high availability. In my case it resides on sj-643f-24.

Now we need to make some configuration changes and gather some information that isn’t presented in the GUI at the time of this writing. We’re going to need to SSH to one of the QuantaStor nodes and create a user for our CephFS connection. We’ll also mount the filesystem so we can see where data is actually stored.

First, SSH to one of the QuantaStor nodes and change to the root user:

$ ssh qadmin@10.0.18.21
qadmin@10.0.18.21's password: 
Linux sj-643f-21 5.15.0-91-generic #101~20.04.1-Ubuntu SMP Thu Nov 16 14:22:28 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Ubuntu 20.04.6 LTS
OSNEXUS QuantaStor 6.5.0.152+next-e09a6f149b

== System Info ==
Uptime: up 2 weeks, 1 day, 21 hours, 53 minutes
CPU: 6 cores
RAM: 7.75103 GB


 System information as of Thu 19 Dec 2024 05:33:05 PM UTC

  System load:  1.09               Processes:               347
  Usage of /:   22.2% of 56.38GB   Users logged in:         0
  Memory usage: 38%                IPv4 address for ens192: 10.0.18.21
  Swap usage:   0%

Last login: Fri Dec  6 18:59:18 2024 from 10.0.0.1
qadmin@sj-643f-21:~$ sudo -i
[sudo] password for qadmin:
root@sj-643f-21:~#

If you will NOT be implementing Ceph Subvolume Groups (I consider this the default situation) follow this:

root@sj-643f-21:~# USER=proxmox-cephfs1
root@sj-643f-21:~# FS_NAME=qs-soos
root@sj-643f-21:~# ceph auth get-or-create client.$USER \
>   mgr "allow rw" \
>   osd "allow rw tag cephfs metadata=$FS_NAME, allow rw tag cephfs data=$FS_NAME" \
>   mds "allow r fsname=$FS_NAME path=/, allow rws fsname=$FS_NAME path=/" \
>   mon "allow r fsname=$FS_NAME"
[client.proxmox-cephfs1]
	key = AQAhaXRnYEIfCBAAWyDiA0GJXbqDNJ0KekIN8Q==
root@sj-643f-21:~# ceph mon dump
epoch 6
fsid 9d54e688-73b5-1a84-2625-ca94f8956471
last_changed 2024-11-12T20:31:55.872659+0000
created 2024-11-12T20:30:30.905383+0000
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.18.21:3300/0,v1:10.0.18.21:6789/0] mon.sj-643f-21
1: [v2:10.0.18.24:3300/0,v1:10.0.18.24:6789/0] mon.sj-643f-24
2: [v2:10.0.18.23:3300/0,v1:10.0.18.23:6789/0] mon.sj-643f-23
3: [v2:10.0.18.22:3300/0,v1:10.0.18.22:6789/0] mon.sj-643f-22
4: [v2:10.0.18.25:3300/0,v1:10.0.18.25:6789/0] mon.sj-643f-25
dumped monmap epoch 6
root@sj-643f-21:~# mkdir cephfs
root@sj-643f-21:~# ceph-fuse -m 10.0.18.21:6789 --id admin cephfs
root@sj-643f-21:~# 

If you WILL be implementing Ceph Subvolume Groups follow this:

root@sj-643f-21:~# ceph fs subvolumegroup create qs-soos proxmox
root@sj-643f-21:~# USER=proxmox-cephfs2
root@sj-643f-21:~# FS_NAME=qs-soos
root@sj-643f-21:~# SUB_VOL=proxmox
root@sj-643f-21:~# ceph auth get-or-create client.$USER \
>   mgr "allow rw" \
>   osd "allow rw tag cephfs metadata=$FS_NAME, allow rw tag cephfs data=$FS_NAME" \
>   mds "allow r fsname=$FS_NAME path=/volumes, allow rws fsname=$FS_NAME path=/volumes/$SUB_VOL" \
>   mon "allow r fsname=$FS_NAME"
[client.proxmox-cephfs2]
	key = AQB9aXRn9yzTOBAAh4btyyLkm9eT5nyBVbdnKQ==
root@sj-643f-21:~# ceph mon dump
epoch 6
fsid 9d54e688-73b5-1a84-2625-ca94f8956471
last_changed 2024-11-12T20:31:55.872659+0000
created 2024-11-12T20:30:30.905383+0000
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.18.21:3300/0,v1:10.0.18.21:6789/0] mon.sj-643f-21
1: [v2:10.0.18.24:3300/0,v1:10.0.18.24:6789/0] mon.sj-643f-24
2: [v2:10.0.18.23:3300/0,v1:10.0.18.23:6789/0] mon.sj-643f-23
3: [v2:10.0.18.22:3300/0,v1:10.0.18.22:6789/0] mon.sj-643f-22
4: [v2:10.0.18.25:3300/0,v1:10.0.18.25:6789/0] mon.sj-643f-25
dumped monmap epoch 6
root@sj-643f-21:~# mkdir cephfs
root@sj-643f-21:~# ceph-fuse -m 10.0.18.21:6789 --id admin cephfs
root@sj-643f-21:~# 
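
If you went the Subvolume Group route, you can verify the group and see the path it was assigned using the standard ceph fs subvolumegroup commands (getpath should report the /volumes/proxmox path we’ll point Proxmox at later):

# List subvolume groups in the qs-soos filesystem and print the path of the "proxmox" group
ceph fs subvolumegroup ls qs-soos
ceph fs subvolumegroup getpath qs-soos proxmox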

Regardless of which of the two methods you chose to employ, you created a user called either proxmox-cephfs1 or proxmox-cephfs2, and the output included the secret key you’re going to need to connect from Proxmox. COPY the key value and put it somewhere you can refer back to it when we make the connection. If you ever need to look it up again you can SSH to one of the QuantaStor nodes and, as root, issue “ceph auth get client.proxmox-cephfs[1|2]”. In addition, COPY the IP addresses of the monitor nodes from the “ceph mon dump” output.
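
Tip: if you just want the key by itself for pasting (without the caps lines), ceph can print only that:

# Print just the secret key; use client.proxmox-cephfs2 if that's the user you created
ceph auth get-key client.proxmox-cephfs1; echo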

That’s all we need to accomplish to be able to make a storage connection from Proxmox using the NFS and CephFS storage types, so let’s charge forward to configuring Proxmox.

Proxmox Installation

Proxmox installation is a relatively simple process. The first thing to do is boot to the installer. There are tons of resources online about how to create a bootable USB thumb drive or connect to virtual media with iDRAC (Dell) or iLO (HPE), so I’m not going over that here. In a nutshell, connect your media to the system you’ll be installing on (you can even install into a VM on VMware vSphere), enter the boot manager and select the media to start the install.

ACTION: The Welcome screen appears. Click Install Proxmox VE (Graphical) to continue.

ACTION: Read through the EULA and click “I agree” to continue.

ACTION: Select the drive you want to use as your boot drive and click Next.

ACTION: Enter your location information and select your Time zone and Keyboard Layout, then click Next.

ACTION: Enter a password for the root user and enter your email. Click Next.

ACTION: Select the network interface you want to use, enter the hostname and network details, then click Next.

ACTION: Check “Automatically reboot after successful installation” and click Install.

And that’s all there is to it… Time to move on to the Proxmox web UI.

ACTION: In a web browser, navigate to the URL listed in the Proxmox console. As is typical of most software platforms, you’ll be met by an untrusted, self-signed cert. Just power through it – Advanced, Proceed!

ACTION: Then comes the login screen. Use “root” for the username, the password you entered during the install and select “Linux PAM standard authentication” for the Realm.

ACTION: Click OK at the No valid subscription dialog, and then you’re at the main Proxmox web UI.

Note: Unless you purchase a subscription you’re likely to get “Update package database” errors in your Proxmox Tasks pane. This is due to the apt package manager not being able to access the Proxmox Enterprise repositories. You can safely ignore these as you can still manage the system without it.
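
If you’d rather make those errors go away than ignore them, the usual approach is to disable the enterprise repositories and enable the community no-subscription repository instead. Here’s a sketch for a PVE 8.x (Debian bookworm) install – double-check the repository lines against the Proxmox documentation for your release:

# Comment out the enterprise repo entries
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
# Add the no-subscription repo and refresh the package index
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
apt update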

So far so good. Now we can get to the good stuff!

Proxmox NFS Configuration

(Optional) ACTION: If you’re not at the Proxmox web UI, open a browser and navigate to it: https://<fqdn | ip>:8006

If you’ve been exploring the interface, let’s bring you back to where you need to be.

ACTION: Select Datacenter in the left-pane and Storage in the right-pane.

ACTION: Click the Add button at the top of the storage pool listing in the right-pane. This brings up a list of the types of storage you can provision with Proxmox. Click NFS.

ACTION: In the Add: NFS dialog you’ll enter the following information and click the Add button:

  • ID: qs-nfs
    • This is the name you want to use to identify your QuantaStor NFS storage from the Proxmox node.
  • Server: 10.0.18.20
    • This is the IP address or FQDN of your QuantaStor storage. To keep the connection highly available, I’m using the VIF we created when configuring QuantaStor; if the node holding the VIF fails, the VIF automatically fails over to a surviving node.
  • Export: /pve-nfs
    • This is the NFS export path from QuantaStor. This can be found on the QuantaStor side by right-clicking the pve-nfs Network Share we created earlier, selecting Properties and finding the Export Path item. Prefix it with “/”.
  • Content: Disk image, Container
    • This is a multi-select list where you can select what kind of content is allowed to be stored in the datastore. For NFS you can store all of the content types, but we’ll limit our use to Disk images and Container content.

That’s all there is to add an NFS storage pool to Proxmox.
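
If you prefer the shell over the GUI, roughly the same thing can be done with pvesm on the Proxmox node. This is a sketch using the values above; images corresponds to Disk image and rootdir to Container:

pvesm add nfs qs-nfs \
        --path /mnt/pve/qs-nfs \
        --server 10.0.18.20 \
        --export /pve-nfs \
        --content images,rootdir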

We’ll test the use of the pool after we add a CephFS storage pool, but in the meantime, let’s take a look at what happened at the storage layer. If you recall, when we made the final configuration changes to QuantaStor via SSH we mounted the Scale-out File Storage to a directory called “cephfs” in root’s home directory (/root/cephfs). Let’s get a listing of the contents of the pve-nfs share:

root@sj-643f-21:~# ls cephfs/pve-nfs
images  template
root@sj-643f-21:~# ls cephfs/pve-nfs/template/
cache
root@sj-643f-21:~# 

This output shows two directories, images and template, and over time more directories may be created. Because we told Proxmox to only allow virtual disks and containers to use this pool, the only directory that matters is the images directory. Proxmox will store VM virtual disks and container content in ID-named directories under images.

Proxmox CephFS Configuration

Picking up where we left off, now we’ll add a CephFS storage pool. As mentioned earlier, it’s possible to direct Proxmox to use sub-directories or Ceph SubVolume Groups. We’ll first go through the activity of adding a CephFS storage pool WITHOUT sub-directories or Ceph SubVolume Groups. This activity will occur in the normal Proxmox Web UI.

ACTION: Click the Add button at the top of the storage pool listing in the right-pane. This brings up a list of the types of storage you can provision with Proxmox. Click CephFS.

ACTION: In the Add: CephFS dialog you’ll enter the following information and click the Add button:

  • ID: qs-cephfs1
    • This is the name you want to use to identify your QuantaStor Scale-out File Storage from the Proxmox node.
  • Monitors: 10.0.18.21,10.0.18.22,10.0.18.23,10.0.18.24,10.0.18.25
    • This is a comma separated list of monitor IP addresses that we gathered at the end of the QuantaStor configuration activity.
  • User name: proxmox-cephfs1
    • This is the ceph user we created on QuantaStor for use WITHOUT sub-directories or Ceph SubVolume Groups
  • FS Name: qs-soos
    • The name of the QuantaStor Scale-Out File Storage pool
  • Secret Key: AQAhaXRnYEIfCBAAWyDiA0GJXbqDNJ0KekIN8Q==
    • This is the secret key we gathered at the end of the QuantaStor configuration activity.
  • Content: VZDump backup file, ISO image, Container template
    • This is a multi-select list where you can select what kind of content is allowed to be stored in the datastore. For CephFS you are limited to content that is NOT used to run VMs or containers. So in essence, this is used for templates, ISOs and backups.

That’s all there is to adding a CephFS storage pool to Proxmox.

Let’s look at what happened to the storage layer via the SSH session to the QuantaStor server:

root@sj-643f-21:~# ls cephfs/
dump  pve-nfs  pve-nfs.sharemeta  template
root@sj-643f-21:~# ls cephfs/template/
cache  iso
root@sj-643f-21:~# ls cephfs/dump
root@sj-643f-21:~# 

This output shows that the dump and template directories were created. Note that we’re looking one level above where we looked at the NFS content. The dump directory is where backup files will be created, template/cache is where container templates will be stored, and template/iso is where ISO images will be stored. To be clear, this is the layout when you DON’T use sub-directories or Ceph SubVolume Groups.

Now let’s create a second CephFS storage pool WITH Ceph SubVolume Groups. If you recall, in the second code block at the end of the QuantaStor configuration activity, where we created Ceph users, we issued a command that created a Ceph SubVolume Group called “proxmox”, so we’ll use that. You’ll need your secret key for the proxmox-cephfs2 ceph user and the monitor addresses we gathered earlier. This is all done at the command line, so establish an SSH session to your Proxmox node:

$ ssh root@10.0.44.23
root@10.0.44.23's password: 
Linux pve 6.8.4-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.4-2 (2024-04-10T17:36Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Dec 30 17:09:44 2024 from 10.0.0.1
root@pve:~# echo AQB9aXRn9yzTOBAAh4btyyLkm9eT5nyBVbdnKQ== > proxmox-cephfs2.secret
root@pve:~# pvesm add cephfs qs-cephfs2 \
        --monhost 10.0.18.21,10.0.18.22,10.0.18.23,10.0.18.24,10.0.18.25 \
        --content vztmpl,iso,backup \
        --keyring proxmox-cephfs2.secret \
        --subdir /volumes/proxmox \
        --username proxmox-cephfs2
root@pve:~# 

If you go back to the Proxmox Web UI you’ll see the new qs-cephfs2 storage pool.
(Note: if you were already on the Datacenter:Storage page you may need to navigate away from it and back to it for it to refresh.)
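
You can also confirm the new pool from the Proxmox shell: pvesm status should now list qs-cephfs2 as active, and the definition we just added (including the subdir) lands in /etc/pve/storage.cfg:

pvesm status
grep -A 6 'qs-cephfs2' /etc/pve/storage.cfg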

Let’s again look at what happened to the storage layer via the SSH session to the QuantaStor server. With Ceph SubVolume Groups, there is a “volumes” directory off the root of the filesystem that contains the “proxmox” SubVolume Group, and within that are the directories we want to see.

root@sj-643f-21:~# ls cephfs
pve-nfs  pve-nfs.sharemeta  volumes  volumes_snaps
root@sj-643f-21:~# ls cephfs/volumes/proxmox/
dump  template
root@sj-643f-21:~# ls cephfs/volumes/proxmox/template
cache  iso
root@sj-643f-21:~# ls cephfs/volumes/proxmox/dump
root@sj-643f-21:~# 

Proxmox Scale-out File Storage Testing

Now that we’ve created storage pools using CephFS for templates and ISOs and using NFS for storage for VMs and containers, let’s test out our configuration. First, let’s upload an ISO and then download a container template.

ACTION: In the Proxmox Web UI, expand the pve node under Datacenter in the tree on the left, then select one of the qs-cephfs* storage pools that you created, then in the main pane select ISO Images.

ACTION: Click the Upload button, click Select File to select an ISO image from your local machine, then click Upload. I’m using an Ubuntu Budgie 24.04 desktop image.

ACTION: Once the dialog shows “finished file import successfully” and “TASK OK”, close the dialog.

You can now see that the ubuntu-budgie-24.04.1… ISO file exists on the QuantaStor Scale-out File Storage.

ACTION: Now let’s download a container template. Still on the Datacenter/pve/qs-cephfs* item in the tree on the left, select CT Templates.

ACTION: Click the Templates button at the top of the main pane, select ubuntu-22.04-standard and then click Download.
Note: At the time of this writing Proxmox has a bug that does not allow Ubuntu 24.04 to run, so I’m going back a version.

ACTION: Once the dialog shows “TASK OK”, close the dialog.

You can now see that the ubuntu-22.04-standard_22.04-1_amd64.tar.zst container template exists on the QuantaStor Scale-out File Storage.

Now we can use the ISO and container template to instantiate VMs and LXC containers. We’ll start with a VM and then create a container.

ACTION: In the top-right of the Web UI, click Create VM. Enter the required data on each tab to create the VM. I’ll only be providing values that are not defaults, and won’t include screenshots for pages where everything is left at the default. Click Next after completing each tab.

  • General:
    • Name: UbuntuBudgie24.04 (spaces not allowed)
  • OS:
    • Storage: qs-cephfs* (dropdown, whichever one you used)
    • ISO image: ubuntu-budgie-24.04.1-desktop-amd64.iso (dropdown)
  • System: <defaults>
  • Disks:
    • Storage: qs-nfs
  • CPU:
    • Sockets: 2
    • Cores: 2
  • Memory: 8192
  • Network: <defaults>
  • Confirm:
    • Start after created: Checked

ACTION: Click Finish. In the Tasks pane you’ll see status of the creation and power on. In the left pane under pve, select 100 (UbuntuBudgie24.04), then click Console in the right pane. After the boot process you’ll see the typical Ubuntu install screen. I will be going through the install, but documenting it here is beyond the scope of this article so I will leave that for you to do on your own.
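
For reference, the same VM could have been created from the Proxmox shell with qm instead of the wizard. This is only a sketch mirroring the values above – the VM ID, disk size, and bridge name are assumptions you’ll want to adjust for your environment:

# Create VM 100 with its ISO on the CephFS pool and its disk on the NFS pool, then start it
qm create 100 \
        --name UbuntuBudgie24.04 \
        --sockets 2 --cores 2 --memory 8192 \
        --cdrom qs-cephfs1:iso/ubuntu-budgie-24.04.1-desktop-amd64.iso \
        --scsi0 qs-nfs:32 \
        --net0 virtio,bridge=vmbr0 \
        --ostype l26
qm start 100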

ACTION: Now we’ll test out the LXC side of things. In the top-right of the Web UI, click Create CT. Enter the required data on each tab to create the CT. I’ll only be providing values that are not defaults. Click Next after completing each tab.

  • General:
    • Hostname: ubuntu22.04 (no spaces allowed in Linux hostnames)
    • Password: <your password>
    • Confirm password: <your password>
  • Template:
    • Storage: qs-cephfs*
    • Template: ubuntu-22.04-standard_22.04-1_amd64.tar.zst
  • Disks:
    • Storage: qs-nfs
  • CPU:
    • Cores: 2
  • Memory:
    • Memory (MiB): 1024
  • Network:
    • IPv4: DHCP
  • DNS: <defaults>
  • Confirm: <defaults>

ACTION: Click Finish. In the Task viewer dialog you’ll see status of the creation of the container. Once you see the “TASK OK” output you can close the dialog.

ACTION: In the left pane under pve, select 101 (ubuntu22.04), click Console in the right pane, then click the Start button above the console and under the Documentation button. After a few seconds you’ll see a typical Linux terminal login where you can login as root with the password you provided when creating the container.
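
Likewise, the container could have been created from the shell with pct. Again, just a sketch using the values above – the CT ID, disk size, bridge, and password are assumptions:

# Create CT 101 from the template on the CephFS pool with its root disk on the NFS pool, then start it
pct create 101 qs-cephfs1:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
        --hostname ubuntu22.04 \
        --password 'YourPasswordHere' \
        --cores 2 --memory 1024 \
        --rootfs qs-nfs:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101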

And… We’re done with Scale-Out File Storage. We’ve pushed an ISO image and container template to QuantaStor Scale-Out File Storage via the CephFS connection, and we’ve successfully created a virtual machine and a container that are backed by QuantaStor Scale-Out File Storage via the NFS connection.

QuantaStor Scale-Out Block Storage Configuration

Now we’re going to switch from Scale-out File Storage to Scale-out Block Storage. For this we need to go back to the QuantaStor Web UI and do some additional configuration.

ACTION: From the QuantaStor Web UI, click on the Getting Started button in the top-right of the main QS UI window to display the Getting Started dialog, then click on Setup Scale-out Block Pool in the left pane of the dialog.

ACTION: In the right pane of the dialog, click the Create Pool button. Change the name to “proxmox”, leave the Type set to Replica-3, change the Scaling Factor to 25%, then click OK.

ACTION: That action will take a minute to complete. Once you see in the Tasks Pane that the Create Scale-out Storage Pool task is complete, in the Navigation Pane click on Scale-out Storage Pools and you’ll see your new “proxmox” storage pool.

ACTION: To create our RBD connection from Proxmox we’re going to need to know the QuantaStor internal name of the RBD pool we just created. In the Navigation Pane, right-click on the Proxmox pool we just created and select Properties.

The third value in the list is Internal ID. COPY that value, prefix it with “qs-” and put it somewhere you can reference it when we configure the Proxmox side. In my case, the full value is qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31.

Now we need to go to the QuantaStor command line to finish the config.

# First let's validate our pool name
root@sj-643f-21:~# ceph osd pool ls
.mgr
qs-soos_metadata
qs-soos_data
qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31
root@sj-643f-21:~# USER=proxmox-rbd
# Use the pool name above or that you copied
root@sj-643f-21:~# POOL=qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31
root@sj-643f-21:~# ceph auth get-or-create client.$USER \
>   mgr "profile rbd pool=$POOL" \
>   osd "profile rbd pool=$POOL" \
>   mon "profile rbd"
[client.proxmox-rbd]
	key = AQB0JHdnsZpNJBAAzBS+IVJDP1FcWl/Lo0dNYQ==
root@sj-643f-21:~# ceph auth get client.proxmox-rbd > ./client.proxmox-rbd.keyring
root@sj-643f-21:~# cat client.proxmox-rbd.keyring 
[client.proxmox-rbd]
	key = AQB0JHdnsZpNJBAAzBS+IVJDP1FcWl/Lo0dNYQ==
	caps mgr = "profile rbd pool=qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31"
	caps mon = "profile rbd"
	caps osd = "profile rbd pool=qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31"
root@sj-643f-21:~# ceph mon dump
epoch 6
fsid 9d54e688-73b5-1a84-2625-ca94f8956471
last_changed 2024-11-12T20:31:55.872659+0000
created 2024-11-12T20:30:30.905383+0000
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.18.21:3300/0,v1:10.0.18.21:6789/0] mon.sj-643f-21
1: [v2:10.0.18.24:3300/0,v1:10.0.18.24:6789/0] mon.sj-643f-24
2: [v2:10.0.18.23:3300/0,v1:10.0.18.23:6789/0] mon.sj-643f-23
3: [v2:10.0.18.22:3300/0,v1:10.0.18.22:6789/0] mon.sj-643f-22
4: [v2:10.0.18.25:3300/0,v1:10.0.18.25:6789/0] mon.sj-643f-25
dumped monmap epoch 6
root@sj-643f-21:~# 

As you did with the CephFS user, you’re going to need to COPY the monitor IP addresses if you don’t have them from before. For the RBD connection we need the whole keyring rather than just the secret like we used for CephFS. You can COPY the output of the “cat client.proxmox-rbd.keyring” command or SCP the file – whichever is most comfortable for you – since in the end you’ll need to paste that data into the Proxmox interface.

That’s all we need to do on the storage side. Now we can move on to using the new pool in Proxmox.

Proxmox RBD Configuration

Now we’ll add a RBD storage pool to Proxmox. This activity will occur in the normal Proxmox Web UI.

ACTION: Click the Add button at the top of the storage pool listing in the right-pane. This brings up a list of the types of storage you can provision with Proxmox. Click RBD.

ACTION: In the Add: RBD dialog you’ll enter the following information and click the Add button:

  • ID: qs-rbd
    • This is the name you want to use to identify your QuantaStor Scale-out Block Storage from the Proxmox node.
  • Pool: qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31
    • This is the Internal ID we looked up in the QuantaStor interface, prefixed with “qs-” – the same value we confirmed at the command line with the “ceph osd pool ls” command.
  • Monitors: 10.0.18.21,10.0.18.22,10.0.18.23,10.0.18.24,10.0.18.25
    • This is a comma separated list of monitor IP addresses that we gathered at the end of the QuantaStor configuration activity.
  • User name: proxmox-rbd
  • Keyring: [client.proxmox-rbd]
    key = AQB0JHdnsZpNJBAAzBS+IVJDP1FcWl/Lo0dNYQ==
    caps mgr = "profile rbd pool=qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31"
    • This is the full content of the client.proxmox-rbd.keyring file we created at the command line of the QuantaStor server.
  • Content: Disk image, Container
    • For RBD you are limited to content that IS used to run VMs or containers, so Disk image and Container types.

That’s all that’s required for configuring the RBD connection from Proxmox to QuantaStor Scale-out Block Storage. Now, you can repeat the virtual machine and container creation steps that we used in the Proxmox Scale-out File Storage Testing section but instead of using the qs-nfs pool for the storage use the qs-rbd pool.
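
As with NFS, there’s a command-line equivalent to the GUI step above. Here’s a sketch, assuming you’ve saved the keyring content into a file named client.proxmox-rbd.keyring on the Proxmox node:

pvesm add rbd qs-rbd \
        --monhost 10.0.18.21,10.0.18.22,10.0.18.23,10.0.18.24,10.0.18.25 \
        --content images,rootdir \
        --keyring ./client.proxmox-rbd.keyring \
        --pool qs-30b849e6-c1c4-8dd5-09c1-2f7174099e31 \
        --username proxmox-rbd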

For completeness, there’s one more thing I want to cover: how to connect to an RBD pool that uses Erasure Coding instead of Replicas. I’m not going to include screenshots for this one since it’s mostly redundant with what we’ve already covered, but I will provide the details.

First, you’ll need to create an RBD pool that uses Erasure Coding. If you follow the RBD pool creation steps above but select Erasure for the pool type instead of Replica you’ll be good. Here are the details I tested with:

  • Name: proxmox2
  • Type: Erasure
  • Data Chunks (K): 4
  • Code Chunks (M): 2
  • Scaling Factor: 25%
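
A quick note on what those numbers mean: with K=4 data chunks and M=2 coding chunks, each object is written as 6 chunks and any 2 of them can be lost without data loss. The space overhead is (K+M)/K = 6/4 = 1.5x, so roughly 67% of raw capacity is usable, versus about 33% for a Replica-3 pool.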

Remember to get the Internal ID from the properties of the proxmox2 pool after creation. IN ADDITION, get the Internal ID of the proxmox2_ec_data pool too. Then you’ll need to create another Ceph user using the proxmox2 details. Here’s what I did on the QuantaStor node:

root@sj-643f-21:~# USER=proxmox-rbd2
root@sj-643f-21:~# POOL=qs-5d6bb7f1-2f47-68d1-21eb-78e0d29cc402
root@sj-643f-21:~# EC_POOL=qs-3e505ad0-be8e-c4eb-8117-19f3b2c89049
root@sj-643f-21:~# ceph auth get-or-create client.$USER \
>   mgr "profile rbd pool=$POOL, profile rbd pool=$EC_POOL" \
>   osd "profile rbd pool=$POOL, profile rbd pool=$EC_POOL" \
>   mon "profile rbd"
[client.proxmox-rbd2]
	key = AQA9Onhn9KUMMRAAwouoLrTrIADc8Pq8nShwQw==
root@sj-643f-21:~# ceph auth get client.proxmox-rbd2
[client.proxmox-rbd2]
	key = AQA9Onhn9KUMMRAAwouoLrTrIADc8Pq8nShwQw==
	caps mgr = "profile rbd pool=qs-5d6bb7f1-2f47-68d1-21eb-78e0d29cc402, profile rbd pool=qs-3e505ad0-be8e-c4eb-8117-19f3b2c89049"
	caps mon = "profile rbd"
	caps osd = "profile rbd pool=qs-5d6bb7f1-2f47-68d1-21eb-78e0d29cc402, profile rbd pool=qs-3e505ad0-be8e-c4eb-8117-19f3b2c89049"
root@sj-643f-21:~# 

Now here’s the trick… On the Proxmox side you CAN’T use the GUI to create this pool; you need to use the command line. You’ll need to create a file called client.proxmox-rbd2.keyring and populate it with the output of the “ceph auth get client.proxmox-rbd2” command we just ran on the QuantaStor node. The magical addition that lets you use the Erasure Coded pool is the “--data-pool” parameter of the pvesm command below. Here’s what you do:

root@pve:~# pvesm add rbd qs-rbd2 \
        --monhost 10.0.18.21,10.0.18.22,10.0.18.23,10.0.18.24,10.0.18.25 \
        --content images,rootdir \
        --keyring ./client.proxmox-rbd2.keyring \
        --pool qs-5d6bb7f1-2f47-68d1-21eb-78e0d29cc402 \
        --data-pool qs-3e505ad0-be8e-c4eb-8117-19f3b2c89049 \
        --username proxmox-rbd2
root@pve:~# pvesm status
Name              Type     Status           Total            Used       Available        %
local              dir     active        98497780         2844188        90604044    2.89%
local-lvm      lvmthin     active       353746944               0       353746944    0.00%
qs-cephfs1      cephfs     active        69062656        13664256        55398400   19.79%
qs-cephfs2      cephfs     active        69062656        13664256        55398400   19.79%
qs-nfs             nfs     active        69062656        13664256        55398400   19.79%
qs-rbd             rbd     active        64518863         9116891        55401972   14.13%
qs-rbd2            rbd     active       110803960              16       110803944    0.00%
root@pve:~# rbd -m 10.0.18.21 --keyring ./client.proxmox-rbd2.keyring -p qs-5d6bb7f1-2f47-68d1-21eb-78e0d29cc402 --id proxmox-rbd2 ls
did not load config file, using default settings.
2025-01-03T13:33:06.881-0600 75f7a9327500 -1 Errors while parsing config file!

2025-01-03T13:33:06.881-0600 75f7a9327500 -1 can't open ceph.conf: (2) No such file or directory

2025-01-03T13:33:06.881-0600 75f7a9327500 -1 Errors while parsing config file!

2025-01-03T13:33:06.881-0600 75f7a9327500 -1 can't open ceph.conf: (2) No such file or directory

vm-104-disk-1
vm-104-disk-0
root@pve:~# 

Note: You’re NOT going to be able to see your virtual disks and container filesystems in the Proxmox GUI. When you click on the qs-rbd2 pool/VM Disks or qs-rbd2 pool/CT Volumes you’ll get an error: “rbd error: rbd: listing images failed: (2) No such file or directory (500)”. From the command line output above, you can see the virtual disks by using the rbd command from the Proxmox command line. We’re specifically NOT using a ceph.conf file for this activity, so you can safely ignore those errors at the top of the output.

With this configuration I was successful in creating both a VM and a container that were backed by the qs-rbd2 (erasure coding) pool.

Recap

Congratulations!! You’re now an expert at using QuantaStor Scale-out File and Block Storage with Proxmox. We’ve accomplished the following:

  • QuantaStor
    • Created a Scale-out File Storage pool
      • Optionally, added a Ceph SubVolume Group
      • Created an NFS share
      • Added at least one user for CephFS access
    • Created at least one Scale-out Block Storage pool
      • Created at least one user for RBD access
  • Proxmox
    • Created an NFS storage pool for storing virtual disk and container root volumes
    • Created at least one CephFS storage pool for storing ISO images and container templates
    • Created at least one RBD storage pool for storing virtual disk and container root volumes
    • Downloaded a container template to the CephFS pool
    • Uploaded an ISO image to the CephFS pool
    • Created a VM using the NFS pool storage with an ISO image from the CephFS pool
    • Created a container using the NFS pool storage with a template from the CephFS pool
    • Created a VM using the RBD (replica) pool storage with an ISO image from the CephFS pool
    • Created a container using the RBD (replica) pool storage with a template from the CephFS pool
    • Created a VM using the RBD (erasure) pool storage with an ISO image from the CephFS pool
    • Created a container using the RBD (erasure) pool storage with a template from the CephFS pool

Final Thoughts

Man, that’s a lot!! I hope you found value in this article and it helps you implement QuantaStor storage with Proxmox. I’d love to hear from you – the good, the bad and the ugly. In addition, if you have ideas of other integrations with any of the features of QuantaStor please send them my way either via the comments below or via email at steve.jordahl (at) osnexus.com!
