JupyterHub access on Cheyenne

Step 1: Navigate to https://jupyterhub.ucar.edu/.

Step 2: Launch your server by selecting one of:
- Cheyenne Supercomputer (i.e., the batch nodes)
- Casper DAV (analysis machine)
You may have to click “Launch Server” after selecting your machine.

Step 3: Authenticate with either your YubiKey or your Duo mobile app.

Step 4: Specify:
- Queue (reservation or Slurm QoS)
- Project account number
- Wall time

Step 5: Click “Spawn.”

What to do if JupyterHub is down or unstable (The Murphy’s Law section):

If you are having issues with jupyterhub.ucar.edu, we’ve provided utility scripts for launching JupyterLab on both Cheyenne and Casper via SSH tunneling:

Step 1: SSH into Cheyenne or Casper:

ssh username@cheyenne.ucar.edu 
ssh username@casper.ucar.edu 

Step 2: Clone the repository with the utility scripts:

git clone https://github.com/NCAR/ncar-python-tutorial.git

Step 3: Submit Job and Launch Jupyter Server

Note: This step requires having conda on your PATH and jupyter installed in your base conda environment. To launch the Jupyter server from a different environment, run conda activate ENV_NAME (where ENV_NAME is your environment name) before running the following commands:

cd ncar-python-tutorial
./setup/jlab/jlab-ch   # on Cheyenne
./setup/jlab/jlab-dav  # on Casper

For custom configuration, use the --help option to see the available options:

$ ./setup/jlab/jlab-ch --help
Usage: launch Jupyter server
Possible options are:
 -a,--account: account
 -w,--walltime: walltime [default: 06:00:00]
 -q,--queue: queue [default: share]
 -m,--memory: memory request [default: 8GB]
 -d,--directory: notebook directory
 -p,--port: [default: 8888]
 --matlab: run matlab kernel
 --matlab-version: matlab version [default: R2018a]
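For instance, a custom launch on Cheyenne requesting more memory, a shorter walltime, and a non-default port might look like this (the account PROJ0001 is a placeholder, not a real project code):

```shell
# Launch JupyterLab on Cheyenne with custom resources; the flags match
# the --help listing above. Replace PROJ0001 with your project number.
./setup/jlab/jlab-ch -a PROJ0001 -q share -w 02:00:00 -m 16GB -p 8999
```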

Step 4: If step 3 is successful, you will see instructions along these lines:

Execute on local machine:
ssh -N -L 8999:localhost:8999 username@casperxx.ucar.edu
Open a browser on your local machine and type in the address bar:

- Open a new terminal and run the ssh command listed under the “Execute on local machine” section.
- Open your favorite browser (Chrome, Firefox, etc.) and go to the address listed under the “Open a browser on your local machine …” section.
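In other words, the local-machine side is a plain SSH port forward. A sketch, using the illustrative port 8999 and host casperxx from the sample output above (your actual port, host, and URL will come from the jlab script):

```shell
# Forward local port 8999 to port 8999 on the compute node.
# -N: open the tunnel only, run no remote command. Leave this running.
ssh -N -L 8999:localhost:8999 username@casperxx.ucar.edu

# Then browse to the address the jlab script printed, e.g. a
# http://localhost:8999/... URL (including any token it shows).
```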

@matt-long - thanks for these great instructions.

You don’t mention much about the software environment. Is there a “Pangeo” kernel already available? Should we have a discussion about what packages to include? Or are users on their own to configure their environments?

I would like to make sure the environments are as uniform as possible between cloud and cheyenne.

We’re still working on the kernel. Will update with details when available.

Our hope is to develop a kernel that enables people to get working right away and covers just about everything. It is also possible for users to create their own conda environments.

I think we should brainstorm a list of packages collaboratively. We can start from pangeo-stacks:

We might want to create a new environment and docker image there, export the environment, and rebuild it on cheyenne.

@jhamman - what do you think about that plan?

This is exactly my plan. Please comment on this PR with specific additions: https://github.com/pangeo-data/pangeo-stacks/pull/84

Here are a few important details to keep in mind.

Cheyenne vs Casper

The Cheyenne supercomputer has a lot of compute power, but it can sometimes be unstable, and the batch nodes do not have Internet access. This can be problematic when packages like cartopy attempt to download assets from the Internet.

Casper, otherwise known as the DAV system, does have Internet access.

Cheyenne uses the PBS queuing system; Casper uses Slurm.
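In practice, that means interactive work is requested differently on the two systems. A rough sketch (the account PROJ0001 and the resource values are placeholders, not the hackathon allocation):

```shell
# Cheyenne (PBS): request an interactive session in the share queue.
qsub -I -A PROJ0001 -q share -l select=1:ncpus=1 -l walltime=01:00:00

# Casper (Slurm): request an interactive session in the dav partition.
salloc -A PROJ0001 -p dav -n 1 -t 01:00:00
```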

Project number

We have an allocation for the Hackathon on Cheyenne. The project number is


We have reserved 64 Cheyenne nodes 8:00a-6:00p each day. To use this reservation, specify
for the queue. You can also run in “share” or “regular”.

We have a single Casper node reserved with different reservation names each day:
Wed: CMIPAP_hackathon_Day1
Thu: CMIPAP_hackathon_Day2
Fri: CMIPAP_hackathon_Day3
You can also run in “dav”.

Picking the system

For the most part, you should be able to run on either Cheyenne or Casper. Start with Cheyenne.


@jukent, can you update the original post by including information about using the CMIP6 2019.10 Kernel?


I just finished updating the environment with the right software packages.

Update: Changes to reservation number

We have reserved 64 Cheyenne nodes 8:00a-6:00p each day. To use this reservation, specify
Wed: R8815540
Thu-Fri: S8808243
for the queue. You can also run in “share” or “regular”.
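With the updated reservation names, launching via the utility script might look like this (PROJECT_CODE is a placeholder for the hackathon project number; -a and -q are the options from the script’s --help output):

```shell
# Wednesday: use the R8815540 reservation as the queue.
./setup/jlab/jlab-ch -a PROJECT_CODE -q R8815540

# Thursday-Friday: use the S8808243 reservation.
./setup/jlab/jlab-ch -a PROJECT_CODE -q S8808243
```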