I have a question about xarray.open_zarr. My understanding is that for time-series data saved in Zarr format, xarray reads the .zmetadata and the contents of the time variable at open time and loads both into memory.
My question: I have very high-density data (one-second resolution) spanning 5-6 years. For some datasets, this results in a time variable of 3 GB+. Is there a way to distribute this initial read across Dask workers? Or is there a known way to grab only a smaller subset of the data at open_zarr time?
Thanks in advance. Any help/comments are much appreciated.