Hi, I’ve already posted this question on the Dask Discourse, but I think some people here may have the solution, as you’re familiar with using Dask with geospatial data.
I’m having an issue distributing a deep learning process over a big raster image (I’m using xarray as a Dask interface).
I have an image of about 1 GB in size; I read it with xarray and chunk it into 10 chunks of roughly 100 MB each.
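For context, here is roughly how the loading step looks (a minimal sketch; I use rioxarray here, and the file name and chunk sizes are placeholders):

```python
import rioxarray

# Open the ~1 GB raster lazily as a dask-backed DataArray instead of loading it
# all into memory (file name and chunk sizes are placeholders; in practice the
# chunks end up around 100 MB each).
image = rioxarray.open_rasterio(
    "big_image.tif",
    chunks={"x": 5120, "y": 5120},
)
print(image.chunks)
```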
Then, for each chunk, I do some deep learning magic using ONNX Runtime. To do so I use xarray’s `apply_ufunc` (the dask gufunc wrapper, similar to `map_blocks`), which applies my inference function `predict` to each chunk. At this point I get a nice computation graph with 10 tasks, one per chunk.
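In simplified form, the inference step looks roughly like this (a sketch: the model path, input handling, and dtypes are placeholders for what my real model does):

```python
import numpy as np
import onnxruntime as ort
import xarray as xr

# Placeholder ONNX model; the real one runs convolutions and other fancy stuff.
session = ort.InferenceSession("model.onnx")

def predict(block: np.ndarray) -> np.ndarray:
    # Run inference on a single chunk; assumes the model returns an array with
    # the same shape as its input (pre/post-processing omitted for brevity).
    inputs = {session.get_inputs()[0].name: block.astype(np.float32)}
    return session.run(None, inputs)[0]

result = xr.apply_ufunc(
    predict,
    image,
    dask="parallelized",        # one predict task per chunk, like map_blocks
    output_dtypes=[np.float32],
)
```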
However, my `predict` function uses a lot of RAM: for a 100 MB chunk it peaks at around 300 MB (because of the convolutions and other fancy stuff going on inside). This leads to high memory usage and a crash when running with little RAM: at the start of the computation each worker gets assigned too many tasks because Dask thinks it can handle them, but eventually it blows up.
So my question is: how can I tell the Dask resource manager NOT to give too many `predict` tasks to my workers?
Currently I’m on a `LocalCluster` and I want to make sure my workload can run with approximately 3 GB of RAM.
I found in the Worker Resources section of the docs that I can specify resources for a specific submit (i.e. `client.submit(process, d, resources={'MEMORY': 200e6})`) or with a `dask.annotate` context. However, it seems I can’t specify constraints on `apply_ufunc` tasks.
EDIT: It does seem to work using the `dask.annotate` context (see the post below), but the scheduler still doesn’t handle it well.
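For reference, this is roughly what I tried (a sketch: the resource amounts are illustrative, and I’m assuming the abstract MEMORY resource declared on the workers is what the annotation should match; `predict` and `image` are the same as above):

```python
import dask
from dask.distributed import Client, LocalCluster

# Declare an abstract MEMORY resource on each worker (values are illustrative;
# extra keyword arguments to LocalCluster are forwarded to the workers).
cluster = LocalCluster(
    n_workers=2,
    threads_per_worker=1,
    memory_limit="1.5GB",
    resources={"MEMORY": 1.5e9},
)
client = Client(cluster)

# Each predict task claims 300e6 of MEMORY, so at most five of them
# should run on a worker at any one time.
with dask.annotate(resources={"MEMORY": 300e6}):
    result = xr.apply_ufunc(
        predict,
        image,
        dask="parallelized",
        output_dtypes=[np.float32],
    )

# Note: annotations may be dropped by graph optimization (task fusion),
# which could be why the scheduler does not seem to honour them here.
result.compute()
```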
Notebook version of my example: notebook version