Overview

Secure, high-performance data transfer is a critical requirement for modern research and enterprise workflows. Today, we’re going to look at how to integrate Globus Connect Server v5 directly into QuantaStor as a containerized service.

By running Globus directly on your QuantaStor storage nodes, you enable your users to move data in and out of your storage grid at wire speed, leveraging the powerful Globus data transfer network.

What is Globus and Globus Connect Server?

Globus is a non-profit service from the University of Chicago that enables secure, reliable research data management. It allows you to move, share, and discover data via a single interface, whether your files are on a laptop, a supercomputer, or a QuantaStor storage appliance.

Globus Connect Server (GCS) allows you to add your lab cluster, campus research computing system, or other multiuser HPC facility to the Globus ecosystem.

What is OSNexus QuantaStor?

OSNexus QuantaStor is an enterprise-grade Software Defined Storage platform that turns standard servers into multi-protocol, scale-up and scale-out storage appliances that deliver file, block, and object storage. QuantaStor makes managing enterprise-grade, performant scale-out storage (Ceph-based) and scale-up storage (ZFS-based) extremely simple, and management can be done quickly from anywhere. The differentiating Storage Grid technology makes storage management a breeze.

Let’s Go! The Deployment Guide!

In this article we’ll deploy Globus Connect Server directly on a single QuantaStor scale-up storage node. We’ll integrate it with QuantaStor using a new feature called Service Containers, which associates a Docker container with a storage object so that their lifecycles are intertwined. In this case, we’re tying it to a network share. In a larger multi-node QuantaStor deployment, this lets the Docker container follow the network share (and storage pool) as it moves from one node to another during a failover, adding fault tolerance to a service that relies heavily on your storage infrastructure.

Note: Identity management is a very deep topic with Globus. The identity you use to log into the Globus platform needs to be mapped to an account on the system containing the data so that you can actually retrieve it. That topic is well beyond the scope of this article, so we’re going to keep things simple. I’ll point out a couple of identity aspects along the way, but don’t expect a deep dive.

Agenda

  • Activity Components
  • Check Prerequisites
  • Create a storage pool
  • Create a network share
  • Create a Docker ipvlan Network
  • Create a Globus ID
  • Create a Globus Endpoint
  • Create a Globus Storage Gateway and Collection
  • Test the container
  • Create a resource group
  • Validate Globus connectivity

Activity Components

For this article, I’m using a single QuantaStor virtual machine running on VMware vSphere with the following characteristics:

  • vCPUs: 6
  • RAM: 8GB
  • OS: Latest version of QuantaStor
  • Networking: Two 10GbE connections, but only using one for all network functions
  • Storage:
    • 1 x 100GB SSD for QS OS install/boot
    • 2 x 100GB SSDs for journaling
    • 4 x 10GB SSDs for data

The node hasn’t been configured beyond the system level, so no storage is configured yet.

Check Prerequisites

Globus Connect Server has some prerequisites that we need to accommodate. Most are very standard, such as using a supported Linux distribution, syncing with a time service, etc. The most notable though is the requirement that the node be accessible directly from the Internet, meaning that this node requires a public internet address. Globus Connect Server does support NAT, so you can create a static NAT from your firewall to an internal address, which is what we’ll be doing here.

Another requirement is that ports 443 and 50000–51000 be open and accessible from the Internet – that’s 1001 ports! In a Docker context, that would normally mean one of two things: mapping 1001 ports from a host interface into the Docker network context, or using Docker host networking mode, which gives the container full control of the host network configuration. The port-mapping approach adds over a minute to the container’s startup and shutdown times while Docker adds or removes iptables rules – overhead! The host networking approach isn’t ideal for this activity either, since we have an active storage platform that is already using port 443. So we’re going to do something a bit different and use a Docker network driver that alleviates both problems: ipvlan.
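To put a number on that port count, and to sketch the port-publishing approach we’re avoiding (illustrative only – not a command to run in this activity):

```shell
# GridFTP data channels span 50000-51000 inclusive, plus 443 for HTTPS.
# Count of data-channel ports:
echo $(( 51000 - 50000 + 1 ))    # prints 1001

# The port-publishing alternative would look roughly like this, and Docker
# would insert iptables rules for every published port (slow to start/stop):
#   docker run -p 443:443 -p 50000-51000:50000-51000 ...
```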

Before moving forward, make sure:

  • You have a PUBLIC IP ADDRESS allocated to this activity
  • You have a PUBLIC DNS ENTRY allocated to this activity
  • You have a STATIC NAT from the public IP address to AN INTERNAL ADDRESS that’s ON THE SAME NETWORK AS THE QUANTASTOR NODE

With those things in place, it’s time to geek out!

Create a storage pool

After logging into the QuantaStor web UI, you should see something similar to this. I have a single node in my storage grid, but you may not have a grid configured. In that case you’ll only see the node, not the node within the grid. That may be Greek to you at this point if you’re new to QuantaStor, but don’t worry about it as you won’t need it for our activity.

Click on the Storage Pools heading in the left Navigation Pane.

Right-click on the Storage Pools heading or anywhere in the blank space in the Storage Pools section and left-click Create Storage Pool to begin the process of creating a Scale-Up (ZFS-based) storage pool.

Since there are 4x identically sized drives connected to my VM, I’m going to use those for the pool. I left the defaults and checked the boxes for the 4 x 10GB drives, then clicked OK.

You’ll see the status of the task in the Task pane at the bottom of the window. You’ll know it’s done when the status of the Create Storage Pool task is Completed, and you’ll see your pool appear in the Storage Pools section of the Navigation pane.

Create a network share

Now that we have a Storage Pool created, we need to create a Network Share in that pool. Click the Network Shares heading in the left Navigation pane.

Right-click the Network Shares heading or the blank space in the Network Shares section and left-click Create Share.

In the Create Network Share dialog I gave my share the name “globus-data” and left the remaining defaults, with one exception: under Share options I unchecked “Enable CIFS/SMB Access”. That’s not to say you can’t use CIFS/SMB access to a network share that’s also used by Globus. Rather, per my note about identity management above, this keeps things simple for this blog: NFS natively uses the traditional Linux user/permission constructs, while CIFS/SMB introduces ACLs and another layer of identity management and data access permissions. Click OK to continue.

Again, watch the Tasks pane and the Network Shares section of the Navigation pane for completion.

Now, let’s gather a couple pieces of information about the share. Right-click the “globus-data” share you just created and left-click the Properties item.

The two pieces of information you need to care about are the Export Path and the Share Path. The Export Path is the path you use when mounting the share via NFS on another host. The Share Path is the actual path to where the network share is stored in the storage pool. You’ll definitely use the Share Path later, so make note of it somewhere.
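As a small sketch (using a hypothetical Share Path – substitute the value from your own Properties dialog), you can stash the Share Path in a shell variable and pull out the pool ID, which comes into play later when we place container metadata on the pool:

```shell
# Hypothetical Share Path copied from the share's Properties dialog:
SHARE_PATH="/mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data/"

# The fourth path component is the storage pool ID (the qs-... part):
POOL_ID=$(echo "$SHARE_PATH" | cut -d/ -f4)
echo "$POOL_ID"    # prints qs-6968dc9a-d716-8caf-768d-c270180cc023
```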

Create a Globus ID

Now that we have the storage layer configured, let’s switch gears and get a Globus ID. Again, there are multiple ways to authenticate with Globus, especially if you integrate an identity provider. To keep things simple, I’m going to create an ID directly with Globus, since I’m not trying to integrate an entire organization and this is a proof of concept.

Head to https://globus.org. Click the LOG IN button at the top-right of the page.

Choose “Globus ID” from the organizational drop-down list, then click Continue.

On the right side, above the line, click the “Need a Globus ID? Sign Up” link.

Fill out the form with your information and click the Create ID button.

Check your email and copy the one-time validation code into the Verification Code field and click Verify.

After successful login, click Continue.

Now that you’ve verified your email, you need to continue your account setup. Click Continue.

Click Allow to give the Globus Web App permission to do its thing.

And finally, you arrive at the File Manager. This is where you can search the Globus network for data.

For our activity, until we are presenting data from QuantaStor to the Globus network there’s nothing more to do here for now. Time to switch gears.

Create a Globus Endpoint

It’s now time to go to the command line. There are two reasons for this. First, from the QuantaStor Container Service integration perspective, we need to validate that the container configuration works. Second, Globus configuration is mainly a command line activity, so we need to provision and configure the Globus objects – the Endpoint, the Storage Gateway and the Collections. Another term you’ll find in the Globus sphere is Data Transfer Node (DTN), which in our case is the container itself.

For Globus, an Endpoint is a single data hierarchy. At first I thought that under an Endpoint you would have multiple DTNs, each sharing a different directory structure. That is NOT the case: all DTNs under an Endpoint share the exact same data hierarchy. More DTNs allow you to scale out your network traffic.

Here’s our plan of attack. There exists a Docker image on Docker Hub that we will ultimately use as our Data Transfer Node. It’s designed and configured to run specifically as a Data Transfer Node, which requires an Endpoint to initialize. So we’re going to use a generic Ubuntu container to install Globus Connect Server and create the Endpoint. After that we’ll switch to using the DTN image.

Time to be productive! Open your favorite terminal and create an SSH connection to your QuantaStor node, then issue sudo -i to switch to the root user and we’ll install GCS and create the endpoint.

### Comments are set off with ### markers like these      ###
### Save any values the comments call out for later use   ###
### Host commands run at the root@sj-643f-block-31 prompt ###
### Container commands run at container-ID prompts        ###

root@sj-643f-block-31:/# cd
### Start a new Ubuntu container ###
root@sj-643f-block-31:~# docker run -it --name gcs_endpoint_temp ubuntu:24.04

### Now that we're in the container, install Globus Connect Server ###
root@5b4098033ac6:/# cd
root@5b4098033ac6:~# apt update
... apt processing ...
root@5b4098033ac6:~# DEBIAN_FRONTEND=noninteractive apt install -y curl vim less
... apt processing ...
root@5b4098033ac6:~# curl -LOs https://downloads.globus.org/globus-connect-server/stable/installers/repo/deb/globus-repo_latest_all.deb
root@5b4098033ac6:~# dpkg -i globus-repo_latest_all.deb
... apt processing ...
root@5b4098033ac6:~# apt install -y globus-connect-server54
... apt processing ...
root@5b4098033ac6:~#

### Now that GCS is installed, create the endpoint.                 ###
### You'll use the Globus ID you created earlier for this activity. ###
root@5b4098033ac6:~# globus-connect-server endpoint setup "QuantaStor Blog" \
    --organization "OSNEXUS" \
    --owner jordahl@globusid.org \
    --contact-email steve.jordahl@osnexus.com
Globus Connect Server uses the Let's Encrypt service to manage
certificates for the web services it installs on your data transfer
nodes. These certificates are issued for DNS domain names operated by
the Globus Project.

Please read the Terms of Service at:
https://letsencrypt.org/repository/
Do you agree to these Terms of Service? [y/N]: y
  0%  Verify owner ID.                                       [#------------------------]    
  5%  00:00:26  Verify owner ID.                             [#------------------------]    
  5%  00:00:26  Verify Auth project admin ID.                [##-----------------------]   
  10%  00:00:25  Verify Auth project admin ID.               [##-----------------------]   
  10%  00:00:25  Perform any necessary Auth flows.           [##-----------------------]   
Please authenticate with Globus to register the new endpoint to get credentials and set the advertised owner of the endpoint:
----------------------------------------------------------------------------------------------------------------------------
https://auth.globus.org/v2/oauth2/authorize?client_id=2ad4d332-81c7-43e9-9c18-4f900cf46746&redirect_uri=https%3A%2F%2Fauth.globus.org%2Fv2%2Fweb%2Fauth-code&scope=urn%3Aglobus%3Aauth%3Ascope%3Aauth.globus.org%3Amanage_projects+urn%3Aglobus%3Aauth%3Ascope%3Atransfer.api.globus.org%3Aset_gcs_attributes&state=_default&response_type=code&access_type=online&prompt=login&session_required_identities=600ea24c-200a-491c-9fce-8e859b7cd19a&session_message=Register+the+new+endpoint+to+get+credentials+and+set+the+advertised+owner+of+the+endpoint
----------------------------------------------------------------------------------------------------------------------------

Enter the resulting Authorization Code here []: POpLySdzjReBN0e23BQ0GEQrReILxH
  15%  00:03:19  Perform any necessary Auth flows.           [###----------------------]   
  15%  00:03:19  Verify client credentials                   [#####--------------------]   
  20%  00:03:07  Verify client credentials                   [#####--------------------]   
  20%  00:03:07  Register the endpoint with Globus Auth      [######-------------------]   
  25%  00:03:05  Register the endpoint with Globus Auth      [######-------------------]   
  25%  00:03:05  Configure deployment key                    [#######------------------]   
  30%  00:02:53  Configure deployment key                    [#######------------------]   
  30%  00:02:53  Configure keychain                          [########-----------------]   
  35%  00:02:32  Configure keychain                          [########-----------------]   
  35%  00:02:32  Initialize Endpoint                         [##########---------------]   
  40%  00:02:21  Initialize Endpoint                         [##########---------------]   
  40%  00:02:21  Add keys to keychain                        [###########--------------]   
  45%  00:02:00  Add keys to keychain                        [###########--------------]   
  45%  00:02:00  Download configuration data                 [############-------------]   
  50%  00:01:49  Download configuration data                 [############-------------]   
  50%  00:01:49  Create Endpoint in Globus                   [#############------------]   
  55%  00:01:32  Create Endpoint in Globus                   [#############------------]   
  55%  00:01:32  Check subscription status                   [###############----------]   
  60%  00:01:22  Check subscription status                   [###############----------]   
  60%  00:01:22  Create owner role                           [################---------]   
  65%  00:01:11  Create owner role                           [################---------]   
  65%  00:01:11  DNS registration (may take a few minutes)   [#################--------]   
  70%  00:01:01  DNS registration (may take a few minutes)   [#################--------]   
  70%  00:01:01  DNS registration                            [##################-------]   
  75%  00:00:49  DNS registration                            [##################-------]   
  75%  00:00:49  Create Let's Encrypt account                [####################-----]   
  80%  00:00:39  Create Let's Encrypt account                [####################-----]   
  80%  00:00:39  Get Certificate (may take a few minutes)    [#####################----]   
  85%  00:00:29  Get Certificate (may take a few minutes)    [#####################----]   
  85%  00:00:29  Get Certificate                             [######################---]   
  90%  00:00:22  Get Certificate                             [######################---]   
  90%  00:00:22  Register with Globus                        [#######################--]   
  95%  00:00:11  Register with Globus                        [#######################--]   
  95%  00:00:11  Set endpoint advertised owner               [#########################]  
  100%  Set endpoint advertised owner                        [#########################]  
Created endpoint 12d5c78e-b895-4abd-aeec-2a38a4e45d73
Endpoint domain_name 230bcf.e229.gaccess.io
No subscription is set on this endpoint, so only basic features are enabled.


To enable subscription features on this endpoint, you must associate your
subscription with this endpoint. If you are not a member of a subscription
group, the Globus subscription manager for your organization can associate a
subscription to this endpoint for you.

If you plan on using the Google Drive or Google Cloud Storage
connectors, use
     https://230bcf.e229.gaccess.io/api/v1/authcallback_google
as the Authorized redirect URI for this endpoint
root@5b4098033ac6:~# ls
deployment-key.json  globus-repo_latest_all.deb

### We'll show the deployment-key.json file here, but we'll be copying ###
### it out, so no need to save at this point.                          ###
root@5b4098033ac6:~# cat deployment-key.json && printf "\n"
{"node_key": {"kty": "RSA", "n": "6lAA8hD5UhKiDV9p9Bs1iPCRVD3MdT47EjATos8MDAEjYjIg02HnlNAeA_Ia70liOBhTp1sH3OuoEskDBC5mYNKJdQLluu4on-u1ztgmxxw3cFyoOzNJ-kaIlHD0r4byKA721a0ZUz1VsOQsFYFC2ECKlisd8eGFcfOCLvB7d2evHzlUmHQSxjtI4zFAv1aPpRWLHgQF87Kx1J93B79A_rD3hAVxgR1Yi-utJzKR_8IZ79X9arvtm31yhnnw7O8uyUy-WfZGECQF0GEHZvoYG4csXVUR3eDYwXweK9nhBXI2T8uztdP91UfprWfg5SfX_vHZUbOq_Rr2NXMn2oreVQ", "e": "AQAB", "d": "GotquAGtRRuASVSMW8-zTmq2hB0mKwgcSBCzQMgE_N0qJYc5Sck3I5g6Nkc4vvAI1QMIgxageuollc847MHW7lQbp2pnHTi62HcrFx5MslTjgPK2SlKiqFxSP8LWLYZzq48abpWYH2J88TfAOMV2jaouKRoEX_ElHYYxMuEik6GvagAddIam4Z0HP048IgI12NHFUUaH3QM7XElqNeuRehoHwO9SrdtEo2cDSqbBIkqOyWfxwAt4UqkPHJ-LkGteXQ8_4Nvj9jgAF1OFQGUIOolyH8fwXDiW9Jz4qnyrGhBzJS4X6lRplk6wipa-F2g-qiZcP3Rt9WqH69fGtDFQ1w", "p": "9yi3454l3wbxFPUL6tGQJt_mXtWiRG2qWbldykvDCYuvtBUStLZyEVwlWn_Uj6j1nwR9FhT9Y2PgtGJtpcK9GkCdRnvqbY_y0x7AfYiZAi2W-T6d-Xb_e63hSEEs3g2DIr2XmJQnmViy6Ib78xWyZ8mN_5rkq7jmaaLSpyF0ej8", "q": "8rGlromneaWsi72L2R5UMqKLa7DPo-utebCU8fE2oUf0s0Waf4OFPw-DVoKKYCij3AjjmIfAJdcWnydjIEok4bZVaFAdkyglzw0mzXzb2VhN7eAdIqMVu7foGL3A51vfqA5_TfVWZrP7iYuK_VayWHSF2gfiGi8I3jWkxtt2ums", "dp": "b0iCI5ZdbuHtQoZi60OYKCi_zQtbmHvYK7XuqNsb4fxnDCpA1eUfzvkySGEuD9D_Zq3atEqXHF0oG5AF1pCsHFnjdozsrJAXwT8jZGJQok5sn6S19FDED6fmu2W9Ee37kXTUAPsUKVNqmo_MeVLXlSuHKANR2o_SDtYlCuNhUnk", "dq": "OPNtEMcmsMoq8mPZdGrEkVlJZE0KfMnqXHsOiLP1AOXUy1jTB4dCdmjahit981C2GwVO-1UnxvlxSonAniwn-XuDEUZzmju6m5rXdzMMmDU7nE2SKLWifPmMEno80U3i7xnvz8h5rQIhTcacKBT3JwC5BFADQ-ezqglmkG-hrd8", "qi": "qE0PGPL-S4m1ogSMq6wrrSMnfaBZJUZRoW7nmlJ0AJXMMwJ3I_EyfrOyjRsGwMu-J1pF-uGUKlor2cEHjoVVJt6vOq8uksD0cjW9WIFlpG3H6HQuksECcAoR4u6YKpPH0tR5bTi9yUtIddbla9Ui3N4f-EpCbLPns_okD52SUz8"}, "client_id": "12d5c78e-b895-4abd-aeec-2a38a4e45d73", "secret": "/FOOvpxS8LNJJenhbMPtTMJVxwVZgpML1sD7KRnViwo="}
root@5b4098033ac6:~# exit
exit
root@sj-643f-block-31:~# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
root@sj-643f-block-31:~# docker container ls -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS                      PORTS     NAMES
5b4098033ac6   ubuntu:24.04   "/bin/bash"   20 minutes ago   Exited (0) 28 seconds ago             gcs_endpoint_temp
root@sj-643f-block-31:~#

The Endpoint has been created. Make sure you save the endpoint ID and domain name from the output, as you’ll need them later. The Ubuntu container we just created still exists on your system, but it’s stopped and isn’t consuming any resources besides storage. You can leave it in place in case you find a use for it, or remove it – it’s easy to recreate, and we won’t need it again for this activity.
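If you do decide to remove the helper container, it’s a one-liner on the host (shown for reference):

```shell
# Remove the stopped helper container; it's easy to recreate if needed.
docker rm gcs_endpoint_temp
```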

Create a Docker ipvlan Network

Because of the requirements GCS has for open, public ports, and because we don’t want to interfere with QuantaStor’s normal use of ports, we’re going to implement a workaround that will allow us to have the container use a separate IP address from the one(s) used by the host. In fact, with this method, the host doesn’t even know the address is being used. Here’s a description:

Docker IPvlan is a network driver that enables containers to connect directly to external networks with fine-grained control over IP addressing and routing.

So now we’ll create a Docker ipvlan network that we can connect our future DTN instances to. You don’t need your NAT target IP address for this, but you do need to know the specifics of the network you’re connecting to. You can use ip addr to get the host’s configuration information which, unless you’re off-roading, is the same information you’ll use here.

root@sj-643f-block-31:~# docker network create \
        -d ipvlan \            ### Use the ipvlan driver
        --subnet=10.0.0.0/16 \ ### The network CIDR address
        --gateway=10.0.0.1 \   ### The gateway on your network
        -o ipvlan_mode=l2 \    ### IMPORTANT: USE L2 (lowercase)
        -o parent=ens192 \     ### The interface that already uses this network
        globus-net             ### Name of the network
24fb3b274434ae36dde3ed2623cc14d84cafcdb8bc469bda05b57237db1b5e69
root@sj-643f-block-31:~#
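Before moving on, you can sanity-check the new network with docker network inspect; assuming the values used above, it should report the ipvlan driver, your subnet, and the parent interface:

```shell
# Confirm the driver, subnet, and parent interface of the new network:
docker network inspect globus-net \
    --format '{{ .Driver }} {{ (index .IPAM.Config 0).Subnet }} {{ index .Options "parent" }}'
# Should print something like: ipvlan 10.0.0.0/16 ens192
```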

Easy peasy!

Create a Globus Storage Gateway and Collection

Before we try to start the DTN we need to position a few files so that the Container Service is able to start the container on any node that the storage pool can go to. In essence, we need to put metadata somewhere on the storage pool. Remember when we created the network share and I suggested you copy a couple of the share properties? We need the Share Path now. The Share Path looks like this:

/mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data/

The qs-6968dc9a-d716-8caf-768d-c270180cc023 part is the pool ID. So we need to create a directory somewhere past /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/ in order for it to be part of the pool. Let’s do the metadata work:

### Here we'll copy that deployment-key.json file out of the container. ###
root@sj-643f-block-31:~# docker cp gcs_endpoint_temp:/root/deployment-key.json .
Successfully copied 3.58kB to /root/.
root@sj-643f-block-31:~# ls
deployment-key.json
root@sj-643f-block-31:~#

### We need a file from the Git repo to start the DTN, ###
### mainly because it's designed to use it.            ###
root@sj-643f-block-31:~# git clone https://github.com/globus/globus-connect-server-deploy.git
Cloning into 'globus-connect-server-deploy'...
... git processing ...
root@sj-643f-block-31:~# ls
deployment-key.json  globus-connect-server-deploy

### The GCS image expects configurations to be passed ###
### as environment variables.  We're creating a file  ###
### of environment variables to be passed into the    ###
### container.                                        ###
root@sj-643f-block-31:~# echo "NODE_SETUP_ARGS= --verbose --ip-address 216.207.176.230" > env
root@sj-643f-block-31:~# echo GLOBUS_CLIENT_ID=$(jq -r '.client_id' deployment-key.json) >> env
root@sj-643f-block-31:~# echo GLOBUS_CLIENT_SECRET=$(jq -r '.secret' deployment-key.json) >> env
root@sj-643f-block-31:~# echo DEPLOYMENT_KEY=$(cat deployment-key.json) >> env
root@sj-643f-block-31:~# cat env
NODE_SETUP_ARGS= --verbose --ip-address 216.207.176.230
GLOBUS_CLIENT_ID=12d5c78e-b895-4abd-aeec-2a38a4e45d73
GLOBUS_CLIENT_SECRET=/FOOvpxS8LNJJenhbMPtTMJVxwVZgpML1sD7KRnViwo=
DEPLOYMENT_KEY={"node_key": {"kty": "RSA", "n": "6lAA8hD5UhKiDV9p9Bs1iPCRVD3MdT47EjATos8MDAEjYjIg02HnlNAeA_Ia70liOBhTp1sH3OuoEskDBC5mYNKJdQLluu4on-u1ztgmxxw3cFyoOzNJ-kaIlHD0r4byKA721a0ZUz1VsOQsFYFC2ECKlisd8eGFcfOCLvB7d2evHzlUmHQSxjtI4zFAv1aPpRWLHgQF87Kx1J93B79A_rD3hAVxgR1Yi-utJzKR_8IZ79X9arvtm31yhnnw7O8uyUy-WfZGECQF0GEHZvoYG4csXVUR3eDYwXweK9nhBXI2T8uztdP91UfprWfg5SfX_vHZUbOq_Rr2NXMn2oreVQ", "e": "AQAB", "d": "GotquAGtRRuASVSMW8-zTmq2hB0mKwgcSBCzQMgE_N0qJYc5Sck3I5g6Nkc4vvAI1QMIgxageuollc847MHW7lQbp2pnHTi62HcrFx5MslTjgPK2SlKiqFxSP8LWLYZzq48abpWYH2J88TfAOMV2jaouKRoEX_ElHYYxMuEik6GvagAddIam4Z0HP048IgI12NHFUUaH3QM7XElqNeuRehoHwO9SrdtEo2cDSqbBIkqOyWfxwAt4UqkPHJ-LkGteXQ8_4Nvj9jgAF1OFQGUIOolyH8fwXDiW9Jz4qnyrGhBzJS4X6lRplk6wipa-F2g-qiZcP3Rt9WqH69fGtDFQ1w", "p": "9yi3454l3wbxFPUL6tGQJt_mXtWiRG2qWbldykvDCYuvtBUStLZyEVwlWn_Uj6j1nwR9FhT9Y2PgtGJtpcK9GkCdRnvqbY_y0x7AfYiZAi2W-T6d-Xb_e63hSEEs3g2DIr2XmJQnmViy6Ib78xWyZ8mN_5rkq7jmaaLSpyF0ej8", "q": "8rGlromneaWsi72L2R5UMqKLa7DPo-utebCU8fE2oUf0s0Waf4OFPw-DVoKKYCij3AjjmIfAJdcWnydjIEok4bZVaFAdkyglzw0mzXzb2VhN7eAdIqMVu7foGL3A51vfqA5_TfVWZrP7iYuK_VayWHSF2gfiGi8I3jWkxtt2ums", "dp": "b0iCI5ZdbuHtQoZi60OYKCi_zQtbmHvYK7XuqNsb4fxnDCpA1eUfzvkySGEuD9D_Zq3atEqXHF0oG5AF1pCsHFnjdozsrJAXwT8jZGJQok5sn6S19FDED6fmu2W9Ee37kXTUAPsUKVNqmo_MeVLXlSuHKANR2o_SDtYlCuNhUnk", "dq": "OPNtEMcmsMoq8mPZdGrEkVlJZE0KfMnqXHsOiLP1AOXUy1jTB4dCdmjahit981C2GwVO-1UnxvlxSonAniwn-XuDEUZzmju6m5rXdzMMmDU7nE2SKLWifPmMEno80U3i7xnvz8h5rQIhTcacKBT3JwC5BFADQ-ezqglmkG-hrd8", "qi": "qE0PGPL-S4m1ogSMq6wrrSMnfaBZJUZRoW7nmlJ0AJXMMwJ3I_EyfrOyjRsGwMu-J1pF-uGUKlor2cEHjoVVJt6vOq8uksD0cjW9WIFlpG3H6HQuksECcAoR4u6YKpPH0tR5bTi9yUtIddbla9Ui3N4f-EpCbLPns_okD52SUz8"}, "client_id": "12d5c78e-b895-4abd-aeec-2a38a4e45d73", "secret": "/FOOvpxS8LNJJenhbMPtTMJVxwVZgpML1sD7KRnViwo="}

### Now we'll create a metadata directory and populate it ###
### The entrypoint.sh file is what we needed from the Git ###
### repo.  We'll tweak it a bit later.                    ###
root@sj-643f-block-31:~# mkdir -p /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta
root@sj-643f-block-31:~# cp env /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/
root@sj-643f-block-31:~# cp globus-connect-server-deploy/docker/entrypoint.sh /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh
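One aside: the env-file recipe above assumes jq is installed on the node. If it isn’t, a rough sed fallback can pull fields out of the deployment key. Here’s a self-contained sketch using a toy stand-in key file (hypothetical values) rather than the real one:

```shell
# Toy stand-in for deployment-key.json (hypothetical values):
cat > /tmp/demo-key.json <<'EOF'
{"node_key": {"kty": "RSA"}, "client_id": "12d5c78e-b895-4abd-aeec-2a38a4e45d73", "secret": "example="}
EOF

# sed-based fallback for extracting "client_id" when jq is unavailable:
sed -n 's/.*"client_id": "\([^"]*\)".*/\1/p' /tmp/demo-key.json
# prints 12d5c78e-b895-4abd-aeec-2a38a4e45d73
```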

Everything’s in place. Now let’s run the DTN and configure the Storage Gateway and Collection. Let’s talk through the options we’re passing to the Docker run command:

  • The --rm tells Docker to remove the container upon exit
  • The -d says to run the container detached (in the background, not interactively)
  • --name globus_gcs
    Provides the Docker name for the container
  • --hostname globus-dtn
    Provides the hostname INSIDE the container
  • --domainname 230bcf.e229.gaccess.io
    Provides the domain name INSIDE the container – use the endpoint domain_name reported during endpoint setup
  • --env-file /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/env
    This is how we map that environment variable file into container environment variables
  • --network globus-net
    Here we’re putting the container on the IPvlan network we created earlier
  • --ip 10.0.18.30
    This is the IP address that is the target of your static NAT configuration
  • -v /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data:/mnt/host-data
    This maps the network share into the container
  • -v /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh:/entrypoint.sh
    This maps the entrypoint.sh file into the container, masking the entrypoint.sh that exists in the container
  • jasonalt/globus-connect-server:ubuntu-24.04-5.4.84
    This is the GCS image we’ll use. Note that this line ends the docker run command proper. Normally the container ID would be printed; instead, the next line pipes that ID into another command so we can follow the container’s logs and validate that it’s working.
  • | cut -b 1-12 | xargs -I {} docker logs -f {}
    Here we’re taking the first 12 characters of the container ID and telling the docker logs command to follow the log output
root@sj-643f-block-31:~# docker run --rm -d \
        --name globus_gcs \
        --hostname globus-dtn \
        --domainname 230bcf.e229.gaccess.io \
        --env-file /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/env \
        --network globus-net \
        --ip 10.0.18.30 \
        -v /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data:/mnt/host-data \
        -v /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh:/entrypoint.sh \
        jasonalt/globus-connect-server:ubuntu-24.04-5.4.84 \
        | cut -b 1-12 | xargs -I {} docker logs -f {}
Unable to find image 'jasonalt/globus-connect-server:ubuntu-24.04-5.4.84' locally
ubuntu-24.04-5.4.84: Pulling from jasonalt/globus-connect-server
... docker downloading and extracting image layers ...
Digest: sha256:760689c37c1718b012fe689bd4ca60ce1900c2a5bd6e295ec9febe700deff6f5
Status: Downloaded newer image for jasonalt/globus-connect-server:ubuntu-24.04-5.4.84
Configuring endpoint

Starting services

Enabling module headers.
To activate the new configuration, you need to run:
  service apache2 restart
Enabling module proxy.
To activate the new configuration, you need to run:
  service apache2 restart
Considering dependency proxy for proxy_http:
Module proxy already enabled
Enabling module proxy_http.
To activate the new configuration, you need to run:
  service apache2 restart
Enabling module rewrite.
To activate the new configuration, you need to run:
  service apache2 restart
Considering dependency mime for ssl:
Module mime already enabled
Considering dependency socache_shmcb for ssl:
Enabling module socache_shmcb.
Enabling module ssl.
See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates.
To activate the new configuration, you need to run:
  service apache2 restart
Enabling site tls-mod-globus.
To activate the new configuration, you need to run:
  service apache2 reload
Launching GCS Manager
Launching GCS Assistant
Launching Apache httpd
Launching GridFTP Server
GCS container successfully deployed
### Use ctrl-C to exit out of the docker logs command.     ###
### Note: this does NOT stop the container                 ###
^C
root@sj-643f-block-31:~#

### Now we exec into the DTN to validate the configuration ###
### and create the storage-gateway and collection.         ###
root@sj-643f-block-31:~# docker exec -it globus_gcs bash

### Installing curl and iproute2 is only for validation.   ###
root@230bcf:/# apt install -y curl iproute2
... apt processing ...

### Validate both the internal and external addresses.     ###
root@230bcf:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
96: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 00:50:56:83:e1:ea brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.18.30/16 brd 10.0.255.255 scope global eth0
       valid_lft forever preferred_lft forever
root@230bcf:/# curl https://ifconfig.me && printf "\n"
216.207.176.230
root@230bcf:/# 

### Now we'll further configure our DTN.                   ###
root@230bcf:/# globus-connect-server login localhost
Please authenticate with Globus here:
------------------------------------
https://auth.globus.org/v2/oauth2/authorize?client_id=37452e37-dae3-44dc-a3a6-19224a4cfd8e&redirect_uri=https%3A%2F%2Fauth.globus.org%2Fv2%2Fweb%2Fauth-code&scope=openid+profile+email+urn%3Aglobus%3Aauth%3Ascope%3Aauth.globus.org%3Aview_identity_set+urn%3Aglobus%3Aauth%3Ascope%3Aauth.globus.org%3Amanage_projects+urn%3Aglobus%3Aauth%3Ascope%3A12d5c78e-b895-4abd-aeec-2a38a4e45d73%3Amanage_collections&state=_default&response_type=code&access_type=offline&prompt=login
------------------------------------

Enter the resulting Authorization Code here: akkShm3bYsmGqHD059nPRvLYWwo9zP
You have successfully logged into GCS endpoint 12d5c78e-b895-4abd-aeec-2a38a4e45d73 at 230bcf.e229.gaccess.io!

### We're using the posix connector, giving it a name and  ###
### choosing the domain (based on where I created my user) ###
root@230bcf:/# globus-connect-server storage-gateway create posix "OSNEXUS Gateway" --domain globusid.org
Storage Gateway ID: fda4238f-e0c6-4e7b-a3bf-396cfc48837e

### Providing the storage gateway, the path to the data    ###
### inside the container and a name                        ###
root@230bcf:/# globus-connect-server collection create fda4238f-e0c6-4e7b-a3bf-396cfc48837e /mnt/host-data/globus-data/ "QuantaStor Collection"
Collection ID: 52a80cc3-7da2-472e-9624-19e65728134d

### Checking our work                                      ###
root@230bcf:/# globus-connect-server storage-gateway list
Display Name    | ID                                   | Connector | High Assurance | MFA  
--------------- | ------------------------------------ | --------- | -------------- | -----
OSNEXUS Gateway | fda4238f-e0c6-4e7b-a3bf-396cfc48837e | POSIX     | False          | False
root@230bcf:/# globus-connect-server collection list
ID                                   | Display Name          | Owner                | Collection Type | Storage Gateway ID                   | Created    | Last Access  
------------------------------------ | --------------------- | -------------------- | --------------- | ------------------------------------ | ---------- | -------------
52a80cc3-7da2-472e-9624-19e65728134d | QuantaStor Collection | jordahl@globusid.org | mapped          | fda4238f-e0c6-4e7b-a3bf-396cfc48837e | 2026-02-03 | Not supported
root@230bcf:/# exit
exit
root@sj-643f-block-31:~#

And we have the components up and running. Let’s test our connection.

Test the container

Back in your web browser, if you’re not still there, navigate to https://app.globus.org, login and make sure the File Manager is selected on the left.

Click in the Search field and type QuantaStor and our collection will automatically come up.

Clicking on the collection brings up the following “data_access consent” prompt. This is a Globus-side permission step. Click Continue.

Click Allow to give our account the permissions it needs.

And… then we run into this. This one is a problem on our side. Remember when I mentioned identity mapping earlier? The error is saying that my Globus ID isn’t mapped through to a local Linux user in the DTN.

Let’s fix that. We haven’t configured our DTN (the container) with any local users that Globus identities could map to. Remember that entrypoint.sh file? The reason we pass it into the container every time is so we can easily customize the container. If you’re doing something more involved with identities, you’d probably want to build a custom Docker image rather than modify the container on every start. But for this exercise it’s a single quick command, so we can add it to the entrypoint.sh file and be done.

Returning to what I said before about keeping things simple and not going too far into identity management, we’re going to capitalize on one point from the Globus Identity Mapping Guide:

By default, if the storage gateway is configured to allow identities from a single domain, one of the following mappings are done from the user’s identity in the allowed domain to the storage gateway user namespace:

  • For connectors such as [ POSIX, ] when Globus Connect Server maps an identity to an account, it strips off the data after the @ character. So the username user@example.org is mapped to the account user.

The value of this is that because my Globus ID is jordahl@globusid.org, by default the local GCS processes will attempt to get files using a Linux user called “jordahl”. So let’s give it what it wants.
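That default mapping is easy to picture in shell. This is just a toy illustration of the rule, not GCS’s actual code:

```shell
# Default single-domain identity mapping: strip the '@' and everything
# after it to get the local account name.
globus_id="jordahl@globusid.org"
local_user="${globus_id%%@*}"
echo "$local_user"   # prints "jordahl"
```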

The first thing you’re going to do is modify the file, so here are the details. Very close to the end of the file you’ll find this:

echo "GCS container successfully deployed"
while [ $shutting_down -eq 0 ]

You’ll notice that the “GCS container successfully deployed” message is the last message we see from the logs when booting the container. We’ll add our own commands between those two lines. Make it look like the following:

echo "GCS container successfully deployed"

### --- Custom Additions --- ###
echo "Adding user jordahl"
useradd jordahl
### --- End Custom Additions --- ###

while [ $shutting_down -eq 0 ]

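One optional tweak (my suggestion, not part of the original container): since entrypoint.sh runs on every container start, you can guard the useradd so that restarts don’t log a “user already exists” error. A small helper sketch:

```shell
# Guarded user creation: only add the account if it's missing, so
# repeated container starts don't error with "user already exists".
ensure_user() {
    if id -u "$1" >/dev/null 2>&1; then
        echo "user $1 already exists"
    else
        useradd "$1"
    fi
}

# In entrypoint.sh, the bare useradd would become:
# ensure_user jordahl
```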
After making the changes we’re going to need to stop and start the container.

### Modify the entrypoint.sh file                          ###
root@sj-643f-block-31:~# vi /etc/mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh 

### Check the state of the container                       ###
root@sj-643f-block-31:~# docker ps
CONTAINER ID   IMAGE                                                COMMAND            CREATED          STATUS          PORTS     NAMES
32b25a223129   jasonalt/globus-connect-server:ubuntu-24.04-5.4.84   "/entrypoint.sh"   17 minutes ago   Up 17 minutes             globus_gcs

### Stop the container. Because the container performs     ###
### cleanup with the Globus backend as it spins down, we   ###
### give it extra time to complete.                        ###
root@sj-643f-block-31:~# docker stop --timeout=-30 globus_gcs
globus_gcs

### Now we start a new container with the new entrypoint.  ###
### The trailing pipe trims the container ID printed by    ###
### 'docker run -d' and hands it to 'docker logs -f' so    ###
### we can watch the startup output.                       ###
root@sj-643f-block-31:~# docker run --rm -d \
        --name globus_gcs \
        --hostname globus-dtn \
        --domainname 230bcf.e229.gaccess.io \
        --env-file /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/env \
        --network globus-net \
        --ip 10.0.18.30 \
        -v /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data:/mnt/host-data \
        -v /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh:/entrypoint.sh \
        jasonalt/globus-connect-server:ubuntu-24.04-5.4.84 \
        | cut -b 1-12 | xargs -I {} docker logs -f {}
Configuring endpoint

Starting services

Enabling module headers.
To activate the new configuration, you need to run:
  service apache2 restart
Enabling module proxy.
To activate the new configuration, you need to run:
  service apache2 restart
Considering dependency proxy for proxy_http:
Module proxy already enabled
Enabling module proxy_http.
To activate the new configuration, you need to run:
  service apache2 restart
Enabling module rewrite.
To activate the new configuration, you need to run:
  service apache2 restart
Considering dependency mime for ssl:
Module mime already enabled
Considering dependency socache_shmcb for ssl:
Enabling module socache_shmcb.
Enabling module ssl.
See /usr/share/doc/apache2/README.Debian.gz on how to configure SSL and create self-signed certificates.
To activate the new configuration, you need to run:
  service apache2 restart
Enabling site tls-mod-globus.
To activate the new configuration, you need to run:
  service apache2 reload
Launching GCS Manager
Launching GCS Assistant
Launching Apache httpd
Launching GridFTP Server
GCS container successfully deployed
Adding user jordahl
^C
root@sj-643f-block-31:~# 

Now if we go back and refresh our File Manager page, we see… NOTHING?! Oops… We didn’t put any files in our share.

Let’s fix that.

root@sj-643f-block-31:~# mkdir /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data/globus-data/
root@sj-643f-block-31:~# cp -r globus-connect-server-deploy/ /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data/globus-data/

Then go back to the File Manager, refresh and click into the directory. All our files are there.

At this point you have a working QuantaStor Globus Collection. You can use Globus transfer (https://docs.globus.org/guides/tutorials/manage-files/transfer-files/) to securely and reliably move files to and from that collection, including very large files that protocols such as HTTPS, SCP, and FTP struggle with.

So our container works! Now we need to integrate it with QuantaStor. Before that, let’s clean up.

root@sj-643f-block-31:~# docker stop --timeout=30 globus_gcs
globus_gcs

root@sj-643f-block-31:~# docker container ls -a
CONTAINER ID   IMAGE          COMMAND       CREATED             STATUS                      PORTS     NAMES
5b4098033ac6   ubuntu:24.04   "/bin/bash"   About an hour ago   Exited (0) 50 minutes ago             gcs_endpoint_temp
root@sj-643f-block-31:~# 

Create a resource group

Resource Groups in QuantaStor allow us to make associations with different types of objects. In this case, we want to make an association between a network share and a Container Service. Let’s go through the steps.

First, we’ll need to put a config file and a script in place for QuantaStor to use to bring the container up. Download the archive that enables the Globus Service Config; if you want to grab the files separately, they’re found here.

root@sj-643f-block-31:~# wget https://github.com/steve-jordahllabs/quantastor-blog-files/raw/refs/heads/main/qs_containerhandler_globus.tgz
--2026-02-04 23:46:17--  https://github.com/steve-jordahllabs/quantastor-blog-files/raw/refs/heads/main/qs_containerhandler_globus.tgz
Resolving github.com (github.com)... 140.82.113.3
Connecting to github.com (github.com)|140.82.113.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/steve-jordahllabs/quantastor-blog-files/refs/heads/main/qs_containerhandler_globus.tgz [following]
--2026-02-04 23:46:18--  https://raw.githubusercontent.com/steve-jordahllabs/quantastor-blog-files/refs/heads/main/qs_containerhandler_globus.tgz
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.108.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4064 (4.0K) [application/octet-stream]
Saving to: ‘qs_containerhandler_globus.tgz’

qs_containerhandler_globus.tgz                       100%[=====================================================================================================================>]   3.97K  --.-KB/s    in 0s      

2026-02-04 23:46:18 (14.8 MB/s) - ‘qs_containerhandler_globus.tgz’ saved [4064/4064]

root@sj-643f-block-31:~# tar xzf qs_containerhandler_globus.tgz 
root@sj-643f-block-31:~# ls -l qs_containerhandler_globus.*
-rw-rw-r-- 1 qadmin qadmin 3865 Feb  1 23:41 qs_containerhandler_globus.conf
-rwxrwxr-x 1 qadmin qadmin 9078 Feb  2 22:11 qs_containerhandler_globus.sh
-rw-r--r-- 1 root   root   4064 Feb  4 23:46 qs_containerhandler_globus.tgz
root@sj-643f-block-31:~# chown root:root qs*
root@sj-643f-block-31:~# chmod g-w qs*
root@sj-643f-block-31:~# ls -l qs_containerhandler_globus.*
-rw-r--r-- 1 root root 3865 Feb  1 23:41 qs_containerhandler_globus.conf
-rwxr-xr-x 1 root root 9078 Feb  2 22:11 qs_containerhandler_globus.sh
-rw-r--r-- 1 root root 4064 Feb  4 23:46 qs_containerhandler_globus.tgz
root@sj-643f-block-31:~# cp qs_containerhandler_globus.sh /opt/osnexus/quantastor/bin
root@sj-643f-block-31:~# cp qs_containerhandler_globus.conf /opt/osnexus/quantastor/conf/
root@sj-643f-block-31:~# service quantastor restart

Now that those are in place and the service is restarted, let’s go to the QuantaStor Web UI to create the resource group.

After logging in you’ll be on the Storage Management tab.

Change to the Multitenancy tab (all the way to the right).

On the Resource Group heading or anywhere in the blank Resource Group pane, right-click and select Create Resource Group.

Change the name to “globus_rg”, select Network Share Resource Group (w/ Service Containers) and click OK.

Now you’ll see globus_rg listed in the Resource Groups pane.

Now in the left pane left-click Services.

Above the main pane you’ll notice that the interface is on the Services tab. Click on the Service Configs tab.

Right-click in the pane and left-click Create Service Config.

Here is where we enter the data that we gathered along the process. Name the service “globus_svc”. Here are the details on the rest of the fields:

  • Public IP Address: The external, public IP address required by Globus Connect Server.
  • Public FQDN: The external DNS entry for the public IP address. It isn’t technically required for this activity, but I added it to the metadata to make future reference easier.
  • Endpoint Domain: This is the endpoint domain from when we created the Endpoint.
  • Docker Network Name: This is the Docker IPvlan network name that we created.
  • Network Subnet: The CIDR address of the network you’re using.
  • Network Gateway: The gateway on the network you’re using.
  • Parent Network Interface: The physical network interface that the container IPvlan network will use.
  • Metadata Directory Name: This is the name of the metadata directory we created in the storage pool in the .container directory.
  • Container IP Address: This is the internal IP address that your static NAT uses as a target.
  • Container Hostname: This is the hostname WITHIN the container.
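As an aside, these fields all get serialized into the single --options string that QuantaStor hands to the handler script as comma-separated key:value pairs (you can see the full string in the log excerpt below). A minimal bash sketch of parsing such a string, using the same scheme the handler script does (the values here are just examples):

```shell
# Parse a comma-separated "key:value" options string into variables.
OPTIONS="container_hostname:globus-dtn,container_ip:10.0.18.30,network_name:globus-net"

IFS=',' read -ra OPT_ARR <<< "$OPTIONS"
for opt in "${OPT_ARR[@]}"; do
    case "$opt" in
        container_hostname:*) CONTAINER_HOSTNAME="${opt#container_hostname:}" ;;
        container_ip:*)       CONTAINER_IP="${opt#container_ip:}" ;;
        network_name:*)       NETWORK_NAME="${opt#network_name:}" ;;
    esac
done

echo "$CONTAINER_HOSTNAME $CONTAINER_IP $NETWORK_NAME"   # prints "globus-dtn 10.0.18.30 globus-net"
```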

Once it’s all filled in, click OK.

Now you’ll see the Service Config listed. Note that you WON’T see it in the Services pane, because it isn’t a running service, just a configuration that will be used to launch the service when the time comes.

Click the Resource Groups section again, right-click on the globus_rg Resource Group and left-click Add/Remove Service Configs.

Check the box next to globus_svc and click OK. The globus_svc will be added to the Resource Group.

Right-click on the globus_rg Resource Group again and this time select Add/Remove Network Share.

Click on the globus-data Network Share, click the right arrow to move it to the right pane. Click OK.

DON’T CLICK YES ON THE CONFIRMATION YET!! Before doing that, let’s start monitoring the qs_service log so we can see what happens as soon as you add the Network Share to the Resource Group that contains a Service Config. In your terminal SSH session run tail -fn 0 /var/log/qs/qs_service.log and then click Yes.

From the log section below you can see what’s happening as soon as QuantaStor starts the container.

root@sj-643f-block-31:~# tail -fn 0 /var/log/qs/qs_service.log
{Thu Feb  5 00:28:07 2026, INFO, 47fff640:cmn_task_manager:1347} post-task activation: Update Resource Associations, 540240ea-4eef-4b26-111b-62509e975a84
{Thu Feb  5 00:28:07 2026, INFO, 47fff640:cmn_task_manager:1372} task complete: Update Resource Associations, 540240ea-4eef-4b26-111b-62509e975a84
{Thu Feb  5 00:28:07 2026, INFO, 47fff640:cmn_task_manager:1392} task removed: Update Resource Associations, 540240ea-4eef-4b26-111b-62509e975a84
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_service_universal:368} Saving config data for container service 'globus' with '10' specified options to configuration name 'globus_svc' to file '/var/run/quantastor/containers/globus_config_options.57f69666-6802-15fe-2485-aae7e311b434.conf'
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_service_base:363} Mounting share 'globus-data' to the share bind path '/mnt/containers/12d91c9b-2350-a5de-5389-d19827a393dc/globus/57f69666-6802-15fe-2485-aae7e311b434/shares/globus-data' with command: '/bin/mount --rbind --make-rslave /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/globus-data /mnt/containers/12d91c9b-2350-a5de-5389-d19827a393dc/globus/57f69666-6802-15fe-2485-aae7e311b434/shares/globus-data'
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_service_base:411} Shares have been successfully mounted for service with configuration name 'globus_svc'
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_manager:750} Pulling latest Docker Image to prepare for Container with Configuration 'globus_svc' for Resource Group 'globus_rg'.
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_service_base:104} Container Image for 'jasonalt/globus-connect-server:ubuntu-24.04-5.4.84' is already present, skipping.
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_manager:755} Starting container from config 'globus_svc', with service definition 'Globus Connect Server' tag 'globus' associated with Resource Group 'globus_rg'
{Thu Feb  5 00:28:08 2026, INFO, 4e1fb640:{container-manager}:container_service_universal:231} Starting 'globus' service at '/opt/osnexus/quantastor/bin/qs_containerhandler_globus.sh' with command: '/opt/osnexus/quantastor/bin/qs_containerhandler_globus.sh run-container  --container "globus-57f696-12d91c" --bindaddrs 10.0.18.31 --bindsharessrc "/mnt/containers/12d91c9b-2350-a5de-5389-d19827a393dc/globus/57f69666-6802-15fe-2485-aae7e311b434/shares/" --bindsharesdest "/mnt/host-data" --containerimage jasonalt/globus-connect-server:ubuntu-24.04-5.4.84 --confpath "/var/run/quantastor/containers/globus_config_options.57f69666-6802-15fe-2485-aae7e311b434.conf" --ccid 57f69666-6802-15fe-2485-aae7e311b434 --options "container_hostname:globus-dtn,container_ip:10.0.18.30,endpoint_domain:230bcf.e229.gaccess.io,metadata_dir:globus-meta,network_gateway:10.0.0.1,network_name:globus-net,network_subnet:10.0.0.0/16,parent_interface:ens192,public_fqdn:globus.osnexus.com,public_ip:216.207.176.230"'

{Thu Feb  5 12:28:09 AM UTC 2026, INFO, qs_containerhandler_globus} Docker network 'globus-net' already exists. Using existing network.

{Thu Feb  5 12:28:09 AM UTC 2026, INFO, qs_containerhandler_globus} Constructed metadata path: /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta

{Thu Feb  5 12:28:09 AM UTC 2026, INFO, qs_containerhandler_globus} Generating environment file at: /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/env

{Thu Feb  5 12:28:09 AM UTC 2026, INFO, qs_containerhandler_globus} Mounting entrypoint.sh from /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh

{Thu Feb  5 12:28:09 AM UTC 2026, INFO, qs_containerhandler_globus} Running container 'globus-57f696-12d91c' with command '/usr/bin/docker run --restart=always -d --net globus-net --ip 10.0.18.30 --hostname globus-dtn --domainname 230bcf.e229.gaccess.io --name globus-57f696-12d91c --env-file /mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/env --mount type=bind,source=/mnt/containers/12d91c9b-2350-a5de-5389-d19827a393dc/globus/57f69666-6802-15fe-2485-aae7e311b434/shares/,target=/mnt/host-data,bind-propagation=rslave --mount type=bind,source=/mnt/storage-pools/qs-6968dc9a-d716-8caf-768d-c270180cc023/.container/globus-meta/entrypoint.sh,target=/entrypoint.sh,readonly jasonalt/globus-connect-server:ubuntu-24.04-5.4.84'.
{Thu Feb  5 00:28:09 2026, INFO, 4e1fb640:{container-manager}:container_service_universal:268} Running container postrun sequence for container 'globus' service at '/opt/osnexus/quantastor/bin/qs_containerhandler_globus.sh' with command: '/opt/osnexus/quantastor/bin/qs_containerhandler_globus.sh postrun-container  --container "globus-57f696-12d91c" --bindaddrs 10.0.18.31 --bindsharessrc "/mnt/containers/12d91c9b-2350-a5de-5389-d19827a393dc/globus/57f69666-6802-15fe-2485-aae7e311b434/shares/" --bindsharesdest "/mnt/host-data" --containerimage jasonalt/globus-connect-server:ubuntu-24.04-5.4.84 --confpath "/var/run/quantastor/containers/globus_config_options.57f69666-6802-15fe-2485-aae7e311b434.conf" --ccid 57f69666-6802-15fe-2485-aae7e311b434 --options "container_hostname:globus-dtn,container_ip:10.0.18.30,endpoint_domain:230bcf.e229.gaccess.io,metadata_dir:globus-meta,network_gateway:10.0.0.1,network_name:globus-net,network_subnet:10.0.0.0/16,parent_interface:ens192,public_fqdn:globus.osnexus.com,public_ip:216.207.176.230"'
{Thu Feb  5 00:28:09 2026, INFO, 4e1fb640:{container-manager}:container_service_universal:271} Successfully started 'globus' service instance 'globus-57f696-12d91c' with configuration name 'globus_svc'
{Thu Feb  5 00:28:09 2026, INFO, 4e1fb640:{container-manager}:container_manager:768} Successfully started Service with Configuration 'globus_svc' for Resource Group 'globus_rg'
^C

Validate Globus connectivity

Now let’s go back to the Globus web UI and see our data.

You’ll notice it looks a little different. Without getting too deep into it: QuantaStor mounts the data at a different point than we did. The path it uses is one directory higher, so it shows the globus-data directory we created. We created that directory precisely so we wouldn’t have to change the mountpoint inside the container. If you click into it, you’ll see the same data we saw previously.

Summing it Up

Using the Globus Service Container in QuantaStor simplifies setting up and connecting your data to the Globus network. It enables secure and efficient file transfer and sharing between your local storage and the Globus network. The integration allows researchers, institutions, and data teams to easily expose their QuantaStor storage as a Globus endpoint, facilitating high-speed data movement across distributed environments.

I’d love to hear from you – the good, the bad and the ugly. Your feedback is always welcome.

Useful Resources

Files Used in This Blog:

QuantaStor Links:

Globus Links:

Docker Links:

Useful Globus Connect Server Commands:

  1. Create Endpoint Command:
     • globus-connect-server endpoint setup "{endpoint name}" \
       --organization "{org name}" \
       --owner {Globus recognized email id} \
       --contact-email {your email address}
     • Important output information: Endpoint ID, Endpoint domain name, deployment-key.json
  2. Globus Login Command:
     • globus-connect-server login localhost
       This command will give you a URL to visit to get an Authorization Code to complete sign-in.
  3. Create Storage Gateway Command:
     • globus-connect-server storage-gateway create {connector} "{gateway name}" --domain {identity domain}
     • Important output information: Storage Gateway ID
  4. Create Collection Command:
     • globus-connect-server collection create {storage gateway id} {data path} "{collection name}"
     • Important output information: Collection ID
  5. Delete Endpoint Commands:
     • gcs node cleanup; gcs endpoint cleanup -d deployment-key.json
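If you find yourself repeating the login, gateway, and collection steps on new DTNs, they chain into a short script. This is a dry-run sketch (my own convenience wrapper, not a Globus-provided tool): by default it only echoes the commands, since login is interactive and the gateway ID placeholder must be filled in from real output.

```shell
#!/usr/bin/env bash
# Dry-run wrapper around the GCS setup sequence. Set DRY_RUN=0 to execute.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "+ $*"          # just show what would run
    else
        "$@"
    fi
}

run globus-connect-server login localhost
run globus-connect-server storage-gateway create posix "OSNEXUS Gateway" --domain globusid.org
# Replace {storage gateway id} with the ID printed by the previous command.
run globus-connect-server collection create "{storage gateway id}" /mnt/host-data/globus-data/ "QuantaStor Collection"
```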

Source code:

File: qs_containerhandler_globus.conf

[global]
# 'universal' type container service definition allows for dynamic definition of a containerized service
type=universal
# Friendly name for this containerized service
name=Globus Connect Server
# Container framework
container_type=docker
# Container image name for this service
container_image=jasonalt/globus-connect-server:ubuntu-24.04-5.4.84
# Service tag by which this service is identified
tag=globus
# where to map the bind mounts of the shares to for this container service
map_shares_dest="/mnt/host-data"
# script to handle the container lifecycle events for this service such as 'docker run' and post run actions
container_handler_script="/opt/osnexus/quantastor/bin/qs_containerhandler_globus.sh"
# Path to the file where the configuration options are saved for a given service definition
config_options_output_file="/var/run/quantastor/containers/globus_config_options.{CCID}.conf"
# Path to the file where pool-based metadata is saved for a given service definition
# config_metadata_directory="/mnt/storage-pools/qs-{POOL_ID}/.container/{metadata_dir}"

config_option:public_ip:index=0
config_option:public_ip:type=string_field
config_option:public_ip:mandatory=true
config_option:public_ip:label=Public IP Address:
config_option:public_ip:description=The public IP address mapped to this Globus endpoint (if behind NAT).

config_option:public_fqdn:index=1
config_option:public_fqdn:type=string_field
config_option:public_fqdn:mandatory=true
config_option:public_fqdn:label=Public FQDN:
config_option:public_fqdn:description=The public Fully Qualified Domain Name for this endpoint.

config_option:endpoint_domain:index=2
config_option:endpoint_domain:type=string_field
config_option:endpoint_domain:mandatory=true
config_option:endpoint_domain:label=Endpoint Domain:
config_option:endpoint_domain:description=The domain name for the endpoint (e.g. 63b47a.73a2.gaccess.io).

config_option:network_name:index=3
config_option:network_name:type=string_field
config_option:network_name:mandatory=true
config_option:network_name:label=Docker Network Name:
config_option:network_name:description=Name of the Docker network (e.g. globus-net).

config_option:network_subnet:index=4
config_option:network_subnet:type=string_field
config_option:network_subnet:mandatory=true
config_option:network_subnet:label=Network Subnet:
config_option:network_subnet:description=The subnet for the Docker network (e.g. 192.168.1.0/24).

config_option:network_gateway:index=5
config_option:network_gateway:type=string_field
config_option:network_gateway:mandatory=true
config_option:network_gateway:label=Network Gateway:
config_option:network_gateway:description=The gateway for the Docker network (e.g. 192.168.1.1).

config_option:parent_interface:index=6
config_option:parent_interface:type=string_field
config_option:parent_interface:mandatory=true
config_option:parent_interface:label=Parent Network Interface:
config_option:parent_interface:description=The host network interface to bridge to (e.g. eth0 or bond0).

config_option:metadata_dir:index=7
config_option:metadata_dir:type=string_field
config_option:metadata_dir:mandatory=true
config_option:metadata_dir:label=Metadata Directory Name:
config_option:metadata_dir:description=Directory name to be created inside the share for Globus metadata/config.

config_option:container_ip:index=8
config_option:container_ip:type=string_field
config_option:container_ip:mandatory=true
config_option:container_ip:label=Container IP Address:
config_option:container_ip:description=Static IP address for the container on the Docker network.

config_option:container_hostname:index=9
config_option:container_hostname:type=string_field
config_option:container_hostname:mandatory=true
config_option:container_hostname:label=Container Hostname:
config_option:container_hostname:description=Hostname for the container.
───────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
       File: qs_containerhandler_globus.sh
───────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1    #!/usr/bin/env bash
   2    # qs_containerhandler_globus.sh - Globus Connect Server container handler
   3    
   4    # The following two lines are used for debugging purposes
   5    # exec 1>/var/log/qs/debug_globus_container 2>&1
   6    # set -x
   7    
   8    ##############################################################################
   9    # ------------------- script-specific constants ---------------------------- #
  10    ##############################################################################
  11    CONTAINERTAG="globus"
  12    CONTAINERDESC="Globus Connect Server Handler"
  13    VERSION="v1.1"
  14    COPYRIGHT="Copyright (c) 2026 OSNexus Corporation"
  15    
  16    APPNAME="qs_containerhandler_${CONTAINERTAG}"
  17    APPDESC="$APPNAME $VERSION - $CONTAINERDESC"
  18    
  19    ##############################################################################
  20    # ---------------- import container handler shared functions --------------- #
  21    ##############################################################################
  22    
  23    LIB_DIR="$(cd -- "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  24    # shellcheck source=qs_containerlib.sh
  25    source "${LIB_DIR}/qs_containerlib.sh"
  26    
  27    ##############################################################################
  28    # -------------------------- container specific logic ---------------------- #
  29    ##############################################################################
  30    
  31    usage()
  32    {
  33      printf "\nExamples:\n\n"
  34      printf "    $APPNAME.sh run-container -c globus-service -s /mnt/storage-pools/qs-ID/globus-share -d /data -i jasonalt/globus-connect-server:latest -o \"public_ip:1.2.3.4,public_fqdn:globus.example.com
        ,endpoint_domain:foo.gaccess.io,network_name:globus-net,network_subnet:192.168.99.0/24,network_gateway:192.168.99.1,parent_interface:eth0,metadata_dir:globus-meta,container_ip:192.168.99.10,container_ho
        stname:globus-node\"\n"
  35      printf "\n"
  36    }
  37    
  38    get_context_from_bindsrc() {
  39        # BINDSRC format: /mnt/containers/{rg_id}/globus/{svc_id}/shares/
  40        RESGRPID=$(echo "$BINDSRC" | cut -d'/' -f4)
  41        SVCID=$(echo "$BINDSRC" | cut -d'/' -f6)
  42    
  43        # Get Share ID from Resource Group
  44        SHAREID=$(qs resource-group-get --resource-group="$RESGRPID" --xml | xmllint --xpath "//resourceAssocList[objectType='53']/objectId/text()" -)
  45    
  46        # Get Mount Path from Share
  47        # Output format: "qs-{vol_id}/{share_name}"
  48        # We need raw output, so jq -r
  49        MOUNTPATH_RAW=$(qs share-get --share="$SHAREID" --json | jq -r '.mountPath')
  50    
  51        # Extract VOLID (Pool ID) and SHARENAME
  52        # MOUNTPATH_RAW example: qs-12345/my-share
  53        VOLID_STR=$(echo "$MOUNTPATH_RAW" | cut -d'/' -f1) # qs-12345
  54        SHARENAME=$(echo "$MOUNTPATH_RAW" | cut -d'/' -f2) # my-share
  55    
  56        # Strip 'qs-' from VOLID_STR to get raw ID
  57        VOLID=${VOLID_STR#qs-}
  58    
  59        # Export for use in main scope
  60        export RESGRPID SVCID SHAREID VOLID SHARENAME
  61    }
  62    
# Required: run-container is executed to start the container
run-container()
{
    # Initialize variables
    PUBLIC_IP=""
    PUBLIC_FQDN=""
    ENDPOINT_DOMAIN=""
    NETWORK_NAME=""
    NETWORK_SUBNET=""
    NETWORK_GATEWAY=""
    PARENT_INTERFACE=""
    METADATA_DIR=""
    CONTAINER_IP=""
    CONTAINER_HOSTNAME=""

    # Parse the comma-separated options passed from QuantaStor
    IFS=',' read -ra OPT_ARR <<< "$OPTIONS"
    for opt in "${OPT_ARR[@]}"; do
        case "$opt" in
            public_ip:*) PUBLIC_IP="${opt#public_ip:}" ;;
            public_fqdn:*) PUBLIC_FQDN="${opt#public_fqdn:}" ;;
            endpoint_domain:*) ENDPOINT_DOMAIN="${opt#endpoint_domain:}" ;;
            network_name:*) NETWORK_NAME="${opt#network_name:}" ;;
            network_subnet:*) NETWORK_SUBNET="${opt#network_subnet:}" ;;
            network_gateway:*) NETWORK_GATEWAY="${opt#network_gateway:}" ;;
            parent_interface:*) PARENT_INTERFACE="${opt#parent_interface:}" ;;
            metadata_dir:*) METADATA_DIR="${opt#metadata_dir:}" ;;
            container_ip:*) CONTAINER_IP="${opt#container_ip:}" ;;
            container_hostname:*) CONTAINER_HOSTNAME="${opt#container_hostname:}" ;;
        esac
    done

    # Populate context variables (VOLID, SHAREID, etc.) using the qs CLI
    get_context_from_bindsrc

    # Network configuration
    if [[ -n "$NETWORK_NAME" ]]; then
        # Create the network only if it does not already exist
        if ! docker network inspect "$NETWORK_NAME" >/dev/null 2>&1; then
            logservice "INFO" "Creating Docker network '$NETWORK_NAME' (ipvlan) on interface '$PARENT_INTERFACE'..."

            # Create an ipvlan network with the specified parameters.
            # Note: ipvlan gives the container its own IP on the physical network.
            if ! docker network create -d ipvlan \
                --subnet="$NETWORK_SUBNET" \
                --gateway="$NETWORK_GATEWAY" \
                -o parent="$PARENT_INTERFACE" \
                "$NETWORK_NAME"; then
                logservice "ERROR" "Failed to create Docker network '$NETWORK_NAME'."
                exit 1
            fi
        else
            logservice "INFO" "Docker network '$NETWORK_NAME' already exists. Using existing network."
        fi
    fi

    # Metadata directory setup & env file generation
    if [[ -n "$METADATA_DIR" && -n "$BINDSRC" ]]; then
        # Use VOLID (pool ID) extracted from the share context
        METADATA_PATH="/mnt/storage-pools/qs-${VOLID}/.container/${METADATA_DIR}"
        logservice "INFO" "Constructed metadata path: $METADATA_PATH"

        if [[ ! -d "$METADATA_PATH" ]]; then
            logservice "ERROR" "Metadata directory not found at: $METADATA_PATH"
            exit 1
        fi

        DEPLOYMENT_KEY_FILE="$METADATA_PATH/deployment-key.json"
        if [[ ! -f "$DEPLOYMENT_KEY_FILE" ]]; then
            logservice "ERROR" "deployment-key.json not found at: $DEPLOYMENT_KEY_FILE"
            exit 1
        fi

        # Extract the client ID and secret using jq
        if command -v jq &>/dev/null; then
            CLIENT_ID=$(jq -r .client_id "$DEPLOYMENT_KEY_FILE")
            CLIENT_SECRET=$(jq -r .secret "$DEPLOYMENT_KEY_FILE")
        else
            logservice "ERROR" "jq is required to parse deployment-key.json but was not found."
            exit 1
        fi

        # Read the full content of the key file
        DEPLOYMENT_KEY_CONTENT=$(cat "$DEPLOYMENT_KEY_FILE")

        # Create the env file
        ENV_FILE="$METADATA_PATH/env"
        logservice "INFO" "Generating environment file at: $ENV_FILE"

        {
            echo "NODE_SETUP_ARGS=--ip-address $PUBLIC_IP"
            echo "GLOBUS_CLIENT_ID=$CLIENT_ID"
            echo "GLOBUS_CLIENT_SECRET=$CLIENT_SECRET"
            echo "DEPLOYMENT_KEY=$DEPLOYMENT_KEY_CONTENT"
        } > "$ENV_FILE"

    else
        logservice "ERROR" "Metadata directory or bind source not specified. Cannot proceed."
        exit 1
    fi

    # Build the docker run command
    DOCKER_CMD=(/usr/bin/docker run --restart=always -d)

    # Network settings
    if [[ -n "$NETWORK_NAME" ]]; then
        DOCKER_CMD+=(--net "$NETWORK_NAME")
    fi

    if [[ -n "$CONTAINER_IP" ]]; then
        DOCKER_CMD+=(--ip "$CONTAINER_IP")
    fi

    if [[ -n "$CONTAINER_HOSTNAME" ]]; then
        DOCKER_CMD+=(--hostname "$CONTAINER_HOSTNAME")
    fi

    if [[ -n "$ENDPOINT_DOMAIN" ]]; then
        DOCKER_CMD+=(--domainname "$ENDPOINT_DOMAIN")
    fi

    # Container name (added once only; a second --name flag would make docker error out)
    if [[ -n "$CONTAINERNAME" ]]; then
        DOCKER_CMD+=(--name "$CONTAINERNAME")
    fi

    # Add the env file
    if [[ -f "$ENV_FILE" ]]; then
        DOCKER_CMD+=(--env-file "$ENV_FILE")
    fi

    # Add the share bind mount
    if [[ -n "$BINDSRC" && -n "$BINDDST" ]]; then
        DOCKER_CMD+=(--mount "type=bind,source=$BINDSRC,target=$BINDDST,bind-propagation=rslave")
    fi

    # Add the entrypoint script bind mount (read-only).
    # Verify the file exists in METADATA_PATH before trying to mount it.
    if [[ -n "$METADATA_PATH" && -f "$METADATA_PATH/entrypoint.sh" ]]; then
        logservice "INFO" "Mounting entrypoint.sh from $METADATA_PATH/entrypoint.sh"
        DOCKER_CMD+=(--mount "type=bind,source=$METADATA_PATH/entrypoint.sh,target=/entrypoint.sh,readonly")
    fi

    # Add the container image (must come last)
    DOCKER_CMD+=("$CONTAINERIMAGE")

    # Execution block
    if [[ "$DRY_RUN" == "1" ]]; then
        echo "TEST MODE: Action [Run Container]"
        echo "COMMAND: ${DOCKER_CMD[*]}"
    else
        if [[ "$VERBOSE" == "1" || "$DEBUG" == "1" ]]; then
            logservice "DEBUG" "${DOCKER_CMD[*]}"
        fi

        logservice "INFO" "Running container '${CONTAINERNAME}' with command '${DOCKER_CMD[*]}'."
        "${DOCKER_CMD[@]}"
    fi
}

##############################################################################
# ----------------------------- main entry --------------------------------- #
##############################################################################

ARGS=()
DRY_RUN=0
for arg in "$@"; do
    if [[ "$arg" == "--test" ]]; then
        DRY_RUN=1
    else
        ARGS+=("$arg")
    fi
done
set -- "${ARGS[@]}"

parse_cli OPERATION "$@"
dispatch "$OPERATION" run-container usage version
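The option-parsing pattern in `run-container` is worth seeing in isolation: QuantaStor hands the script a single comma-separated `OPTIONS` string, which is split with `IFS` and matched with a `case` statement, stripping each `key:` prefix via `${opt#key:}` parameter expansion. Here is a minimal standalone sketch of that pattern using hypothetical values (note that this scheme assumes option values never contain commas):

```shell
#!/usr/bin/env bash
# Hypothetical OPTIONS string, in the same key:value,key:value format
# QuantaStor passes to the service-container script.
OPTIONS="public_ip:203.0.113.10,network_name:globus-net,container_ip:192.168.99.10"

PUBLIC_IP=""
NETWORK_NAME=""
CONTAINER_IP=""

# Split on commas into an array, then strip the "key:" prefix from each entry.
IFS=',' read -ra OPT_ARR <<< "$OPTIONS"
for opt in "${OPT_ARR[@]}"; do
    case "$opt" in
        public_ip:*)    PUBLIC_IP="${opt#public_ip:}" ;;
        network_name:*) NETWORK_NAME="${opt#network_name:}" ;;
        container_ip:*) CONTAINER_IP="${opt#container_ip:}" ;;
    esac
done

echo "$PUBLIC_IP $NETWORK_NAME $CONTAINER_IP"
# → 203.0.113.10 globus-net 192.168.99.10
```

Unknown keys simply fall through the `case` statement, so extra options are ignored rather than causing an error.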
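Similarly, `get_context_from_bindsrc` depends on fixed field positions in the bind-source path. Because the path starts with `/`, `cut -d'/' -f1` yields an empty field, which is why the resource-group ID is field 4 and the service ID is field 6. A standalone sketch with hypothetical IDs (and without the `qs` CLI calls, which need a live appliance):

```shell
#!/usr/bin/env bash
# Hypothetical bind-source path in the documented format:
#   /mnt/containers/{rg_id}/globus/{svc_id}/shares/
BINDSRC="/mnt/containers/rg-1234/globus/svc-5678/shares/"

# Leading '/' makes field 1 empty, so {rg_id} is field 4 and {svc_id} is field 6.
RESGRPID=$(echo "$BINDSRC" | cut -d'/' -f4)
SVCID=$(echo "$BINDSRC" | cut -d'/' -f6)

# The share's mount path comes back as "qs-{vol_id}/{share_name}".
MOUNTPATH_RAW="qs-12345/my-share"
VOLID_STR=$(echo "$MOUNTPATH_RAW" | cut -d'/' -f1)  # qs-12345
SHARENAME=$(echo "$MOUNTPATH_RAW" | cut -d'/' -f2)  # my-share

# Strip the "qs-" prefix with parameter expansion to get the raw pool ID.
VOLID=${VOLID_STR#qs-}

echo "$RESGRPID $SVCID $VOLID $SHARENAME"
# → rg-1234 svc-5678 12345 my-share
```

If the on-disk layout ever changes, these field indices are the first thing to revisit.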
