Puzzling S3 xarray.open_zarr latency

Thanks for this interesting reproducible example @emiliom!

xarray.open_zarr takes about 1 second on one of them and ~4 seconds on the other (using fsspec & s3fs 0.5.1; 30-50 seconds with s3fs 0.4.2).

Behind the scenes, it is actually fsspec.get_mapper() that accounts for most of the speed change between s3fs 0.4.2 and 0.5.1. Newer versions use async I/O, which can lead to a ~10x speedup when opening Zarr stores. This is described in the post Understanding Async - #4 by martindurant.
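For context, here is a minimal sketch of that fast path, assuming a hypothetical store path under the snowmodel bucket; the anon and consolidated flags are assumptions about the store, not details from the original post:

```python
import fsspec
import xarray as xr

# Hypothetical store path; substitute the actual bucket/prefix from the example.
store_url = "s3://snowmodel/example.zarr"

# With s3fs >= 0.5, the underlying filesystem issues its S3 listing/metadata
# requests asynchronously, which is where the large speedup comes from.
mapper = fsspec.get_mapper(store_url, anon=True)  # anon=True only if the bucket is public

# Consolidated metadata (if the store has it) avoids one request per variable.
ds = xr.open_zarr(mapper, consolidated=True)
print(ds)
```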

Can anyone illuminate this? How can I tweak the chunking in #2 to keep its latency much closer to #1?

It would be great to get @rsignell's or @martindurant's insight on this. Without digging into details, a key difference I notice is the "object to chunk ratio" in each case (1 vs 2). Perhaps get_mapper() is doing more than just reading the metadata?
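As a rough way to compare the two cases, you can count the S3 objects behind each store with fsspec; the store paths below are placeholders, not the actual paths from the example:

```python
import fsspec

fs = fsspec.filesystem("s3", anon=True)  # anon=True assumes a public bucket

# Hypothetical store paths; substitute the two stores being compared.
for url in ["snowmodel/store1.zarr", "snowmodel/store2.zarr"]:
    n_objects = len(fs.find(url))  # every chunk plus each metadata file is a separate S3 object
    print(url, n_objects)
```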

I'll also note that a single timing of a read from S3 can sometimes be misleading. I often observe 2x variability for the same code, even when running in the same datacenter (confirmed with aws s3api get-bucket-location --bucket snowmodel from aws-uswest2.pangeo.io). I ran your code a few times and observed the following timings for reading geo_ds (same versions); a repeat-timing sketch follows the numbers below:

CPU times: user 769 ms, sys: 20.8 ms, total: 790 ms
Wall time: 2.28 s 

CPU times: user 929 ms, sys: 69.6 ms, total: 999 ms
Wall time: 1.35 s

CPU times: user 942 ms, sys: 78.4 ms, total: 1.02 s
Wall time: 1.53 s

CPU times: user 722 ms, sys: 36.5 ms, total: 759 ms
Wall time: 1.16 s
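If it helps, here is a minimal repeat-timing sketch, again with a placeholder store path, that makes the run-to-run spread easier to see than a single %time cell:

```python
import time
import fsspec
import xarray as xr

# Hypothetical store path; reuse whichever dataset you are comparing.
store_url = "s3://snowmodel/example.zarr"

# Repeat the open several times; single measurements against S3 routinely
# vary by ~2x, so look at the spread rather than any one number.
for i in range(5):
    mapper = fsspec.get_mapper(store_url, anon=True)
    t0 = time.perf_counter()
    ds = xr.open_zarr(mapper, consolidated=True)
    print(f"run {i}: {time.perf_counter() - t0:.2f} s")
```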