I’ve recently been taking a deep dive into xarray and `map_blocks`, and it has been both exciting and painful. Exciting because it offers so much daskness; painful because errors often happen deep into the computation, usually when I wake to find my job failed while I was dreaming of results… Anyway, this is what I have been trying to do.
Objective: To run a range-dependent acoustic propagation model using temperature and salinity from a hydrodynamic model
Why: To see how the model forcing affects the estimation of propagation
- Extract a slice through the water column for a given time, lat and long, along a bearing of a given length (it’s a curvilinear grid, so I use xESMF)
- Calculate sound speed (fill in the missing bits, etc.)
- Set the frequency and source depth
- Add an absorbing bottom to the acoustic model that stops energy reflecting off the bottom (frequency dependent)
- Run the model (pyRAM)
Wait a long time… then plot the output!
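For the sound-speed step, one common choice is Mackenzie’s (1981) nine-term equation; a minimal sketch of how it might be applied (the equation is standard, but the `ds.temp`/`ds.salt`/`ds.depth` variable names and the `apply_ufunc` wiring are my assumptions, not necessarily how the author did it):

```python
import numpy as np

def mackenzie_sound_speed(T, S, D):
    """Sound speed (m/s) from Mackenzie's (1981) nine-term equation.
    T: temperature (degC), S: salinity (PSU), D: depth (m)."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

# Applied lazily over a dask-backed Dataset (variable names are placeholders):
# ds["soundspeed"] = xr.apply_ufunc(
#     mackenzie_sound_speed, ds.temp, ds.salt, ds.depth,
#     dask="parallelized", output_dtypes=[float])
```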
I’ve written accessors for the model’s netCDF files:
- `shoc` is the model accessor
- `glider` is my ocean glider accessor
- `ram` is my acoustic model accessor
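These accessors follow xarray’s `register_dataset_accessor` pattern; here is a minimal sketch of the shape of such a thing (the class body and `setfrequency` behaviour are hypothetical placeholders, not the actual implementation):

```python
import xarray as xr

@xr.register_dataset_accessor("ram")
class RamAccessor:
    """Sketch of an acoustic-model accessor (methods are placeholders)."""

    def __init__(self, ds):
        # xarray passes in the Dataset the accessor is attached to
        self._ds = ds

    def setfrequency(self, freqs, source_depths):
        # Store run parameters on the Dataset and return it, so calls can
        # be chained: ds.ram.setfrequency(...).ram.addbottomxr(...)
        return self._ds.assign_attrs(freqs=freqs, source_depths=source_depths)
```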
```python
# Get the times and locations of the glider profiles
p = dsglider.sel(TI=model_run.time, method='nearest')

# Extract the slices from the model (positions and headings from the glider)
slices = model_run.shoc.soundslicexr(
    p.time, p.LON.values, p.LAT.values,
    np.dstack((p.BHOG, p.FHOG)),
    30000, 50, maxdepth=125, depthstep=1, runname=runname)

slices = slices.ram.setfrequency(np.array([6000., 1000.]), np.array([5., 100.]))
slices = slices.ram.addbottomxr(125, thickness=100, attn=2)
slices = slices.ram.runxr()
tlos = slices.compute()
```
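Under the hood, steps like `runxr` are the kind of thing `xr.map_blocks` wraps: a worker function runs the model on each block and the result is described up front by a template. A minimal sketch, assuming made-up variable names (this is not my actual pyRAM wrapper):

```python
import numpy as np
import xarray as xr

def run_block(block):
    # Stand-in for the per-block acoustic model run: here we just emit
    # zeros with the same shape as the input sound-speed field.
    tl = np.zeros(block.soundspeed.shape)
    return block.assign(tlos=(block.soundspeed.dims, tl))

# With a dask-backed Dataset, the lazy call would look something like:
# template = slices.assign(tlos=xr.zeros_like(slices.soundspeed))
# tlos = xr.map_blocks(run_block, slices, template=template).compute()
```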
It took a lot of pain to get this all working with `map_blocks` and the accessor structure, which left me wondering if I had gone down a rabbit hole.
Things that I like:
- The user interface is fairly clean
- Delayed operations make for a quick build of the pipeline
- Dask does a really good job of farming out the work.
- Learned heaps about xarray, dask and Python
Things that I didn’t like:
- In my first attempt I used delayed functions and stacked the output myself: loads of code, plus heaps of warnings about large objects in the task graph
- Choosing a block size: a small block size makes it easy to deliver one slice at a time to the acoustic model, but as soon as the number of slices got beyond ~200 the number of tasks got above 100K and dask choked (I rewrote it to run multiple slices per block and concatenate)
- It’s very hard to allow the user to set arbitrary chunking; it takes a fair bit of code to work out which dimensions are still there, etc.
- Keeping track of objects that are going to be used at a later date
- `unify_chunks()` rarely fixes the problem
I guess the question is: should I have just used a Slurm job queue and done it with standard Python, or was this really worth doing? Is a job that takes an hour plus on a 100-node cluster really a good use case for Pangeo?
I’d be keen to hear others’ views on `map_blocks` + accessors…