Just to add, this seems to me like a panacea for netCDF generally: providers could simply point us to their maintained virtual Zarr, and that could be used directly or as a way to sync the required subset "locally", rather than "us" generating the JSON. We'll certainly be recasting our disk and object storage to include this mechanism now.
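For anyone following along, here's roughly what "generating the json" looks like on our end today; the bucket/file paths are placeholders, and the exact storage options will depend on the provider:

```python
import fsspec
import ujson
import xarray as xr
from kerchunk.hdf import SingleHdf5ToZarr

# Hypothetical remote netCDF-4/HDF5 file on object storage
url = "s3://example-bucket/data/file.nc"

# Scan the file once and build the kerchunk reference set (byte ranges per chunk)
with fsspec.open(url, "rb", anon=True) as f:
    refs = SingleHdf5ToZarr(f, url).translate()

# Persist the references as JSON so they can be shared/reused
with open("file.json", "w") as out:
    out.write(ujson.dumps(refs))

# Open the references as a virtual Zarr store; only the requested chunks
# are fetched from the remote file, nothing is downloaded wholesale
ds = xr.open_dataset(
    "reference://",
    engine="zarr",
    backend_kwargs={
        "consolidated": False,
        "storage_options": {
            "fo": "file.json",
            "remote_protocol": "s3",
            "remote_options": {"anon": True},
        },
    },
)
```

If providers published and maintained that JSON (or its Parquet equivalent) themselves, the whole first step would disappear for us.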
Also, is there any kerchunk effort on HDF4? That's the real legacy format I have no way to access remotely, while everything else now seems well covered by various protocols. Kerchunk makes netCDF faster, but it isn't strictly needed for access per se, whereas with HDF4 there's no remote access at all, so I don't understand why it's not on the kerchunk list (??). Maybe I have a terminology or other confusion here.