User home directory storage#
All users on all the hubs get a home directory with persistent storage.
This is made available through cloud-specific filestores, and sometimes through a manually managed Network File System (NFS) server (usually on GCP clusters that are very price sensitive).

NFS Server provisioned setup#
Terraform is set up to provision the in-cluster NFS server using the following cloud-specific implementations:
- GCP: Google Filestore
- Azure: Azure Files
- AWS: Elastic File System (EFS)
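The provisioned store is then referenced from the hub's configuration. A minimal sketch, assuming an nfs.pv block shaped like the baseShareName key used later on this page (the serverIP key and its value are assumptions, not actual config):
nfs:
  pv:
    serverIP: 10.100.100.100   # IP of the provisioned filestore (placeholder)
    baseShareName: /homes/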
NFS Server setup - only for manually managed servers#
Warning
No longer the default option!
Use this as a last resort only on GCP clusters that are very price sensitive. Use the cloud-specific filestore as default instead.
Check out howto:manual-nfs-setup
Some of the 2i2c clusters have an NFS server, usually named nfs-server-01. This is currently hand-configured, so it might change in the future. This NFS server has a persistent disk that's independent from the rest of the VM (it can be grown / snapshotted independently). This disk is mounted inside the NFS server at /export/home-01 (for the home directories of users) and is made available via NFS to be mounted by everything in the cluster, via /etc/exports:
/export/home-01 10.0.0.0/8(all_squash,anonuid=1000,anongid=1000,no_subtree_check,rw,sync)
Note
To SSH into the NFS server run:
gcloud compute ssh nfs-server-01 --zone=us-central1-b
NFS Client setup#
For each hub, the following need to be set up:
Hub directory#
A directory is created under the path defined by the nfs.pv.baseShareName cluster config. Usually, this is:
- /homes - for hubs that have in-cluster NFS storage, created using the infrastructure described in the terraform section
- /export/home-01/homes - for hubs that have the NFS server deployed manually
This is the base directory (nfs.pv.baseShareName) under which each hub has a directory.
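For example, on a manually managed server with two hypothetical hubs named staging and prod, the layout would be:
/export/home-01/homes/
├── staging/
└── prod/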
This is done through a job that's created for each deployment via helm hooks, which will mount nfs.pv.baseShareName and make sure the directory for the hub is present on the NFS server with appropriate permissions.
Note
The NFS share creator job will be created pre-deploy, run, and cleaned up before deployment proceeds. Ideally, this would only happen once per hub setup - but we don’t have a clear way to do that yet.
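A minimal sketch of what such a helm-hook job can look like, with hypothetical names and image (the real job is templated by the chart, and may differ):
apiVersion: batch/v1
kind: Job
metadata:
  name: nfs-share-creator        # hypothetical name
  annotations:
    # helm hooks: run before install/upgrade, delete the job once it succeeds
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-hub-dir
          image: busybox:1.36    # hypothetical image
          command:
            - /bin/sh
            - -c
            - |
              # Create the hub's directory under baseShareName, owned by the
              # uid/gid that user pods run as (1000 is jovyan in most images)
              mkdir -p /mnt/base/<hub-name>
              chown 1000:1000 /mnt/base/<hub-name>
          volumeMounts:
            - name: nfs-base
              mountPath: /mnt/base
      volumes:
        - name: nfs-base
          nfs:
            server: nfs-server-01   # or the cloud filestore IP
            path: /homes            # nfs.pv.baseShareName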
Hub user mount#
For each hub, a PersistentVolumeClaim (PVC) and a PersistentVolume (PV) are created. This is the Kubernetes Volume that refers to the actual storage on the NFS server. The volume points to the hub directory created for the hub and user at <hub-directory-path>/<hub-name>/<username> (this name is dynamically determined as a combination of nfs.pv.baseShareName and the current release name).
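A hedged sketch of these two objects, with hypothetical names (the real ones are templated per release):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <hub-name>-home-nfs     # hypothetical
spec:
  capacity:
    storage: 1Mi                # NFS ignores this, but the field is required
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server-01       # or the cloud filestore IP
    path: /homes/<hub-name>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: home-nfs                # hypothetical
  namespace: <hub-name>
spec:
  volumeName: <hub-name>-home-nfs
  storageClassName: ""          # bind to the pre-created PV, skip dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi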
Z2jh then mounts the PVC on each user pod as a volume named home.
Parts of the home volume are mounted in different places for the users:
- Z2jh will mount into /home/jovyan (the mount path) the contents of the path <hub-directory-path>/<hub-name>/<username> on the NFS storage server. Note that <username> is specified as a subPath - the subdirectory in the volume to mount at that given location (see the configuration sketch after this list).
- shared directories
  - Mounted for all users, showing the contents of <hub-directory-path>/<hub-name>/_shared. This mount is readOnly, so users can't write to it.
  - Mounted just for admins, showing the contents of <hub-directory-path>/<hub-name>/_shared. This volumeMount is NOT readOnly, so admins can write to it.
  Note
  This feature comes from the custom KubeSpawner that our community hubs use, which allows providing extra configuration for admin users only.
- the allusers directory - optional
  Can be mounted just for admins, showing the contents of <hub-directory-path>/<hub-name>/. This volumeMount is NOT readOnly, so admins can write to it. Its purpose is to give hub admins access to all users' home directories, to read and modify them.
  jupyterhub:
    custom:
      singleuserAdmin:
        extraVolumeMounts:
          - name: home
            mountPath: /home/jovyan/allusers
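To tie the pieces together, a hedged sketch of the z2jh singleuser storage configuration these mounts correspond to - the pvcName value and the shared mount path are assumptions, not necessarily what the charts actually set:
jupyterhub:
  singleuser:
    storage:
      type: static
      static:
        pvcName: home-nfs       # hypothetical PVC name, as in the sketch above
        subPath: "{username}"   # per-user subdirectory on the share
      extraVolumeMounts:
        - name: home            # the static home volume z2jh defines
          mountPath: /home/jovyan/shared   # assumed path for the shared directory
          subPath: _shared
          readOnly: true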