As someone who recently discovered kerchunk and has to constantly reference the Kerchunk cookbook, I am wondering whether it’s a good idea (or even possible) to have kerchunk as a simple toggle kwarg in xr.open_dataset? Right now I feel like there are a lot of steps to remember: 1. generate reference files, 2. wrap an fsspec reference filesystem around those reference files, 3. pass that to xr.open_dataset (if these steps are even accurate).
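For context, here is a minimal sketch of those steps as I understand them from the cookbook; the bucket, file names, and storage options are just placeholders:

```python
# Rough sketch of the current multi-step kerchunk workflow (placeholder paths).
import json

import fsspec
import xarray as xr
from kerchunk.hdf import SingleHdf5ToZarr

url = "s3://some-bucket/unoptimized_file.nc"  # hypothetical remote file

# 1. Generate the reference dict and save it as a JSON "reference file"
with fsspec.open(url, "rb", anon=True) as f:
    refs = SingleHdf5ToZarr(f, url).translate()
with open("unoptimized_file.json", "w") as out:
    json.dump(refs, out)

# 2. Wrap the references in an fsspec "reference" filesystem
fs = fsspec.filesystem(
    "reference",
    fo="unoptimized_file.json",
    remote_protocol="s3",
    remote_options={"anon": True},
)

# 3. Pass the resulting mapper to xr.open_dataset via the zarr engine
ds = xr.open_dataset(
    fs.get_mapper(""), engine="zarr", backend_kwargs={"consolidated": False}
)
```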
I feel like it could be done, but I’ve only used kerchunk a teeny bit for examples so I don’t have too much context.
I imagine it could be used like xr.open_dataset("unoptimized_file.nc", kerchunk=True), and it would generate the reference files in the current directory if they don’t exist, or use the existing generated reference files. And, depending on the engine used, it would use the appropriate kerchunk backend, e.g. xr.open_dataset("unoptimized_file.grib", engine="cfgrib", kerchunk=True).
As an analogy, I’m thinking of how datashader can be used with hvplot by setting df.hvplot(datashade=True) and I am hoping that kerchunk can be that simple, but again I haven’t used kerchunk extensively.
It sounds like what you’re suggesting, Tom, is simpler than what you originally suggested, Andrew: just eliminating the fsspec step, but not actually running kerchunk to generate the references automatically as Andrew suggests. Are you thinking that automatically running kerchunk from an xarray backend would be “too auto-magical”?
> it would generate the reference files in the current directory if they don’t exist, or use the existing generated reference files.

I had skipped over this part entirely. I was just thinking about the case where you already have references.
Doing that automatically does feel pretty magical… Lots of potential complications around things like reading files from remote filesystems, but maybe still worth doing.
I have noticed that cfgrib outputs a .idx file in the local directory automatically.
> Doing that automatically does feel pretty magical… Lots of potential complications around things like reading files from remote filesystems, but maybe still worth doing.
I believe we should identify the most common use case and support that. Then, for other cases, the user can drop down to the lower level.
Again, analogous to how hvplot covers most use cases → holoviews → hooks into bokeh/matplotlib → render as bokeh/matplotlib figures.
Making the references locally seems like a form of caching, and it doesn’t store much data. It seems like it should be doable for common cases, especially if the correct set of arguments used to make the references is stored somewhere, say in a catalog.
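Something like this rough sketch of the caching idea (the helper name and cache location are made up, just to illustrate):

```python
# Reuse an existing reference JSON if one is sitting next to the data,
# otherwise generate it with kerchunk and store it for next time.
import json
import os

import fsspec
from kerchunk.hdf import SingleHdf5ToZarr


def cached_references(url, cache_path=None):
    # hypothetical helper: cache file defaults to "<filename>.json" locally
    cache_path = cache_path or os.path.basename(url) + ".json"
    if os.path.exists(cache_path):  # use the existing generated reference file
        with open(cache_path) as f:
            return json.load(f)
    with fsspec.open(url, "rb") as f:  # otherwise generate and store it
        refs = SingleHdf5ToZarr(f, url).translate()
    with open(cache_path, "w") as out:
        json.dump(refs, out)
    return refs
```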
If you made a backend that understood concat_dim via open_mfdataset (which I think would already require changes to xarray’s backend entrypoint base class), then you would also find that open_mfdataset(engine='kerchunk') could only deal with a subset of the cases open_mfdataset can normally handle: those with regular chunking. It would be another motivation for Zarr to support irregular chunking.
EDIT: It seems there are other scenarios that xarray.open_mfdataset’s combining algorithms can deal with but which kerchunk currently cannot.
We’ll be presenting our approach to making kerchunk usage simpler at next week’s Pangeo Showcase.
The approach requires the use of a backend database to store the references, so it might not meet every use case. But it certainly improves the user experience and solves some consistency challenges!
The aim is to make generating references use xarray syntax instead:
ds = xr.open_mfdataset(
    '/my/files*.nc',
    engine='kerchunk',  # kerchunk registers an xarray IO backend that returns zarr.Array objects
    combine='nested',   # 'by_coords' would require actually reading coordinate data
    parallel=True,      # would use dask.delayed to generate reference dicts for each file in parallel
)
ds  # now wraps a bunch of zarr.Array / kerchunk.Array objects directly, not numpy/dask arrays
ds.kerchunk.to_json('newstore.zarr')  # kerchunk defines an xarray accessor that extracts the zarr arrays and serializes them
You would then still need to open the data the normal way from your new references, but the actual generation of the references becomes much more intuitive.
Awesome, this discussion is what I’ve been looking for for a long time without knowing what to ask for!
Creating individual JSONs and combining them is fine for me for now while I learn more about kerchunk, but I’m following along and keen to discuss prospects. Thanks!
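For anyone else landing here, this is roughly the workflow I mean (a sketch; file names, dimension names, and the output path are placeholders):

```python
# Combine per-file reference JSONs into a single reference set with kerchunk.
import json

from kerchunk.combine import MultiZarrToZarr

single_refs = ["file_2020.json", "file_2021.json"]  # individual reference JSONs

mzz = MultiZarrToZarr(
    single_refs,
    concat_dims=["time"],           # dimension to concatenate along
    identical_dims=["lat", "lon"],  # coordinates shared by every file
)
combined = mzz.translate()  # a single reference dict covering all files

with open("combined.json", "w") as out:
    json.dump(combined, out)
```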
Excellent, thanks for the guidance; I didn’t realize that VirtualiZarr was the next stage.
I asked a question on gdal-dev (that was entirely off-base in terms of how to proceed), but the reply by GDAL developer Even was very helpful, and I think I can probably contribute that on the GDAL side:
Just to add, this seems to me like the panacea for netCDF generally: providers would simply point us to their maintained virtual Zarr, and that could be used directly or as a way to sync “locally” the subset required, rather than “us” generating the JSON. We’ll certainly be recasting our disk and object storage to include this mechanism now.
Also, is there any kerchunk effort toward HDF4? That was the real legacy format that I had no way to access remotely, while everything else seems well covered now by various protocols. Kerchunk makes netCDF faster but isn’t actually needed for access per se, whereas with HDF4 there’s no remote access at all, so I don’t understand why it’s not on the kerchunk list. Maybe I have a terminology or other confusion here.
Thanks for sharing this! I think the exact format of the references is still in flux here: kerchunk’s JSON/Parquet format exists, but we’re having active discussions about how exactly we could take this to its logical conclusion and represent the manifest in Zarr itself upstream, i.e. making Zarr a “SuperFormat”. This is the issue to follow: Manifest storage transformer · Issue #287 · zarr-developers/zarr-specs · GitHub.
Also, @ahuang11, on your original question, see this comment from today:
Basically, I think that once we have storage manifest transformers upstream in zarr-python, we could turn virtualizarr.ManifestArrays directly into zarr.Arrays. Then we could set it up such that you could use an engine='virtualizarr' kwarg to xr.open_dataset to basically achieve what you’re asking for above.
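To illustrate the end state being imagined (purely hypothetical; this engine does not exist yet and the accessor call is only a guess at what it might look like):

```python
# Hypothetical future usage, as described above; not a real xarray engine today.
import xarray as xr

vds = xr.open_dataset("unoptimized_file.nc", engine="virtualizarr")  # hypothetical kwarg
vds.virtualize.to_kerchunk("refs.json", format="json")  # write references out for later reuse
```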
Thanks! I just find it funny that I couldn’t find any mention of it, as if maybe it was not possible at all… The NASA stores for HDF4 are immense, but perhaps it’s just an important data source in my circles (and probably not as important as it once was, for L1 sea ice and ocean colour).