@jhamman just pointed me here, and I did a bit of research on how you can enforce quotas on NFS file systems.
Traditionally, you would back your NFS server with a filesystem that supports quotas (often XFS) and use that to enforce them. XFS is particularly popular since its *project quotas* can be applied per directory (which is what we want here), rather than only per user or per group.
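For reference, setting a per-directory XFS project quota looks roughly like this on the server (the device name, mount point, directory, and project id below are all placeholders, and this needs root on a real XFS filesystem):

```shell
# Mount the XFS volume with project quota accounting enabled
mount -o prjquota /dev/xvdf /export

# Assign the directory /export/user1 to (arbitrary) project id 42
xfs_quota -x -c 'project -s -p /export/user1 42' /export

# Cap that project (i.e. that directory) at a hard limit of 10GiB
xfs_quota -x -c 'limit -p bhard=10g 42' /export
```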
Most managed NFS offerings (EFS, Google Filestore, etc.) don't let us set these options, however, so we would have to run our own NFS server. That isn't hard, but it's also not something you really want to do. If we do go that route, I'd prefer we run it inside the Kubernetes cluster itself.
The NFS Server Provisioner seems to have all the things we need to get this to work.
- Install it (with this helm chart)
- Back it with an EBS volume / Google Cloud persistent disk formatted as XFS (specifying `fsType: xfs` in the StorageClass backing that PersistentVolumeClaim)
- Then turn on XFS quotas (the project itself supports this, but the option still needs to be exposed in the helm chart)
- Use dynamic provisioning for the user pods, with StorageClass set to NFS
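Wired together, the steps above would look something like the sketch below (all names and sizes are assumptions, and the `nfs` storage class name is whatever the chart actually creates - the exact chart values need checking):

```yaml
# Backing disk for the NFS server itself, formatted as XFS.
# On GKE this uses the GCE PD provisioner; on AWS you'd use
# kubernetes.io/aws-ebs instead.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: xfs-disk
provisioner: kubernetes.io/gce-pd
parameters:
  fsType: xfs
---
# PVC that the nfs-server-provisioner pod mounts as its export
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-backing-store
spec:
  storageClassName: xfs-disk
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
---
# Per-user claim, dynamically provisioned off the NFS server;
# the `nfs` class would be created by the provisioner's chart
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-user1
spec:
  storageClassName: nfs
  accessModes: [ReadWriteMany]
  resources:
    requests:
      storage: 10Gi   # this is what would become the XFS quota
```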
This would provision a PVC for each user and enforce whatever disk size quota we ask for. I think the quota can be changed later, but that needs to be verified. The NFS server itself would run on a core node - if that node goes down you'll have downtime. This needs to be kept in mind when upgrading nodes / moving node pools.
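For whoever tests this: I *believe* (unverified) that changing a quota after the fact is just a matter of re-running `xfs_quota` on the server against the same project id - again, the mount point and project id here are placeholders:

```shell
# Inspect current per-project (per-directory) usage and limits
xfs_quota -x -c 'report -p' /export

# Raise the hard limit for project id 42 to 20GiB
xfs_quota -x -c 'limit -p bhard=20g 42' /export
```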
It would be awesome to find someone with time to try it out and report things back.