As someone who recently discovered kerchunk and constantly has to reference the Kerchunk cookbook, I'm wondering whether it's a good idea (or even possible) to have kerchunk as a simple toggle kwarg in xr.open_dataset. Right now there are a lot of steps to remember: (1) generate the reference files, (2) wrap fsspec around those reference files, (3) pass that to xr.open_dataset (if those steps are even accurate).
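For context, the three steps above can be sketched roughly as follows for a local NetCDF4/HDF5 file. This is just a sketch of the cookbook workflow; the imports are deferred inside the function so the snippet reads standalone even where kerchunk is not installed.

```python
# Sketch of the three-step kerchunk workflow for a local HDF5/NetCDF4 file.
import json


def open_with_kerchunk(path: str, refs_path: str = "refs.json"):
    """Open `path` via kerchunk references, writing them to `refs_path`."""
    import fsspec
    import xarray as xr
    from kerchunk.hdf import SingleHdf5ToZarr

    # Step 1: generate the reference file.
    with fsspec.open(path) as f:
        refs = SingleHdf5ToZarr(f, path).translate()
    with open(refs_path, "w") as out:
        json.dump(refs, out)

    # Step 2: wrap fsspec's "reference" filesystem around the references.
    fs = fsspec.filesystem("reference", fo=refs_path)

    # Step 3: hand the resulting mapper to xarray's zarr engine.
    return xr.open_dataset(
        fs.get_mapper(""), engine="zarr", backend_kwargs={"consolidated": False}
    )
```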
I feel like it could be done, but I’ve only used kerchunk a teeny bit for examples so I don’t have too much context.
I imagine it could be used like xr.open_dataset("unoptimized_file.nc", kerchunk=True), and it would generate the reference files in the current directory if they don't exist–or use the existing generated reference files. And, depending on the engine used, it would use the appropriate kerchunk backend, like xr.open_dataset("unoptimized_file.grib", engine="cfgrib", kerchunk=True)
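A minimal sketch of what such a kerchunk=True toggle might do internally: look for a sidecar reference file next to the source, generate it once if missing, then reuse it on later opens. Here `generate_references` is a hypothetical stand-in for the format-appropriate kerchunk translator, and the sidecar naming is an assumption.

```python
# Hypothetical internals of a kerchunk=True toggle: cache references in a
# sidecar file and reuse them, much like cfgrib writes its .idx file.
import json
import os


def open_dataset_kerchunk(path: str, generate_references, **open_kwargs):
    import fsspec
    import xarray as xr

    refs_path = path + ".kerchunk.json"
    if not os.path.exists(refs_path):
        # First open: run kerchunk and cache the references locally.
        with open(refs_path, "w") as out:
            json.dump(generate_references(path), out)

    # Subsequent opens reuse the cached references.
    fs = fsspec.filesystem("reference", fo=refs_path)
    return xr.open_dataset(
        fs.get_mapper(""),
        engine="zarr",
        backend_kwargs={"consolidated": False},
        **open_kwargs,
    )
```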
As an analogy, I’m thinking of how datashader can be used with hvplot by setting df.hvplot(datashade=True) and I am hoping that kerchunk can be that simple, but again I haven’t used kerchunk extensively.
It sounds like what you're suggesting, Tom, is simpler than what you originally suggested, Andrew - just eliminating the fsspec step, but not actually running kerchunk to generate the references automatically like Andrew suggests. Are you thinking that automatically running kerchunk from an xarray backend would be "too auto-magical"?
> it would generate the reference files in the current directory if it doesn’t exist–or use the existing generated reference files.

I hadn't thought about that part entirely. I was just thinking about the case where you already have references.
Doing that automatically does feel pretty magical… Lots of potential complications around things like reading files from remote filesystems, but maybe still worth doing.
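For the already-have-references case, the explicit fsspec step could collapse into storage options passed through the zarr engine. A sketch, assuming a local reference file "refs.json" (the import is deferred inside the function so the snippet reads standalone):

```python
# Opening existing kerchunk references without an explicit fsspec wrapper:
# the "reference://" protocol plus storage_options does the wrapping.
def open_references(refs_path: str = "refs.json"):
    import xarray as xr

    return xr.open_dataset(
        "reference://",
        engine="zarr",
        backend_kwargs={
            "consolidated": False,
            "storage_options": {"fo": refs_path},
        },
    )
```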
I have noticed that cfgrib outputs a .idx file in the local directory automatically.
I believe we should identify the most common use-case and support that. For other cases, the user can drop down to the lower level.
Again analogous to hvplot covering most use-cases → holoviews → hooks bokeh/matplotlib → render as bokeh/matplotlib figures.
Making the references locally seems like a form of caching - and the references themselves don't take up much space. It seems like it should be doable for common cases, particularly if the correct set of arguments used to make the references is stored somewhere, say a catalog.
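One way to treat references as a cache keyed by both the source path and the arguments used to generate them, so stale references aren't reused when the generation options change. Pure stdlib; the naming scheme here is an assumption, not an established convention.

```python
# Deterministic reference-cache naming keyed by source + kerchunk arguments.
import hashlib
import json


def reference_cache_name(source: str, **kerchunk_kwargs) -> str:
    """Reference-file name that changes whenever the generation args change."""
    key = json.dumps(
        {"source": source, "kwargs": kerchunk_kwargs}, sort_keys=True
    )
    digest = hashlib.sha256(key.encode()).hexdigest()[:12]
    return f"{source}.{digest}.kerchunk.json"
```

The same source opened with the same arguments always maps to the same cached file; changing an argument such as an inline threshold produces a different name, forcing regeneration.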
If you made a backend that understood concat_dim via open_mfdataset (which would already require changes to xarray's backend entrypoint base class, I think), then you would also find that open_mfdataset(engine='kerchunk') could only deal with a subset of the cases open_mfdataset can normally handle: those with regular chunking. It would be another motivation for Zarr to support irregular chunking.
EDIT: It seems there are other scenarios which xarray.open_mfdataset’s combining algorithms can deal with which kerchunk currently cannot.
We’ll be presenting our approach to making kerchunk usage simpler at next week’s Pangeo Showcase
The approach requires the use of a backend database to store the references, so it might not meet every use case. But it certainly improves the user experience and solves some consistency challenges!