Geographic Index

The spatial data used are stored in different formats: NetCDF, Zarr, GeoTIFF. The data are indexed in relational databases to make it easy to search for the spatial and temporal location of a given dataset. This approach requires implementing a database, data ingestion, SQL queries, and so on.
In the new architectures we imagine around the Pangeo software stack (Dask, Xarray), this type of design is not the most suitable. So we set out to design a new library implementing a geographic index with an embedded database. This prototype is built around the GeoHash algorithm. The calculated indexes are stored in an embedded key/value database.


The full description of this algorithm is given on its Wikipedia page.
In short, the idea is to transform the coordinates of a position, defined by a longitude and a latitude, into a single number: the bits representing the two coordinates are interleaved into one 64-bit integer.
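As an illustration (a minimal sketch, not the library's actual implementation), the interleaving can be written in a few lines of Python: quantize each coordinate to 32 bits, then spread the bits so that longitude and latitude alternate inside a 64-bit integer.

```python
def _spread(v: int) -> int:
    """Spread the 32 bits of v so they occupy the even bit positions."""
    v &= 0xFFFFFFFF
    v = (v | (v << 16)) & 0x0000FFFF0000FFFF
    v = (v | (v << 8)) & 0x00FF00FF00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F0F0F0F0F
    v = (v | (v << 2)) & 0x3333333333333333
    v = (v | (v << 1)) & 0x5555555555555555
    return v

def encode_int64(lat: float, lon: float) -> int:
    """Interleave quantized latitude/longitude bits into one 64-bit integer."""
    # Quantize each coordinate to 32 bits over its valid range
    y = min(int((lat + 90.0) / 180.0 * (1 << 32)), (1 << 32) - 1)
    x = min(int((lon + 180.0) / 360.0 * (1 << 32)), (1 << 32) - 1)
    # Longitude bits are shifted to the odd positions (counted from the
    # LSB), so the most significant bit of the code is a longitude bit
    return (_spread(x) << 1) | _spread(y)

# The south-west corner of the valid range maps to 0
assert encode_int64(-90.0, -180.0) == 0
# (0, 0) starts the upper half of both ranges: the two leading bits are set
assert encode_int64(0.0, 0.0) == 0xC000000000000000
```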


To truncate the value to a given precision, we remove the unnecessary bits. Each value obtained divides the earth into bounding boxes, larger or smaller depending on the chosen accuracy. This value can then be encoded in base 32 to represent the code as a string of characters. The next figure shows the boxes obtained for a precision of 2 characters.
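The base-32 string form can be sketched with the classic algorithm (a minimal illustration, not the library's code): bits are produced by bisecting the longitude and latitude ranges alternately, and every 5 bits become one character, so truncating the string simply widens the box.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def encode(lat: float, lon: float, precision: int) -> str:
    """Encode a position as a GeoHash string of `precision` characters."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code, ch, bits, even = [], 0, 0, True
    while len(code) < precision:
        # Bisect longitude and latitude alternately, starting with longitude
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:  # every 5 bits yield one base-32 character
            code.append(BASE32[ch])
            ch, bits = 0, 0
    return "".join(code)

# The classic example from the Wikipedia page
assert encode(42.605, -5.603, 5) == "ezs42"
# Truncating the code only enlarges the bounding box
assert encode(42.605, -5.603, 3) == "ezs"
```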

The library implements different functions to find the GeoHash codes neighboring a given code and to generate a GeoHash grid for a given precision.
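As a hedged sketch of one way to compute neighbors (the library's own functions may proceed differently): decode the code's bounding box, then re-encode the eight points offset by one cell in each direction. The helpers below re-implement standard GeoHash encoding and decoding to keep the example self-contained; polar edge cases are ignored.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def encode(lat: float, lon: float, precision: int) -> str:
    """Standard GeoHash string encoding (bisection of the two ranges)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code, ch, bits, even = [], 0, 0, True
    while len(code) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:
            code.append(BASE32[ch])
            ch, bits = 0, 0
    return "".join(code)

def decode_box(code: str):
    """Return ([lat_min, lat_max], [lon_min, lon_max]) for a GeoHash code."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    even = True
    for ch in code:
        idx = BASE32.index(ch)
        for bit in (16, 8, 4, 2, 1):
            rng = lon_rng if even else lat_rng
            mid = (rng[0] + rng[1]) / 2
            if idx & bit:
                rng[0] = mid
            else:
                rng[1] = mid
            even = not even
    return lat_rng, lon_rng

def neighbors(code: str):
    """The eight GeoHash codes surrounding `code` (polar cases ignored)."""
    (lat0, lat1), (lon0, lon1) = decode_box(code)
    clat, clon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
    dlat, dlon = lat1 - lat0, lon1 - lon0
    out = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == dy == 0:
                continue
            # Wrap longitude across the antimeridian
            lon = (clon + dx * dlon + 180.0) % 360.0 - 180.0
            out.append(encode(clat + dy * dlat, lon, len(code)))
    return out

# At precision 1, the cell "s" (lat 0..45, lon 0..45) has "t" to the
# east and "u" to the north, among its eight distinct neighbors
assert "t" in neighbors("s") and "u" in neighbors("s")
assert len(set(neighbors("s"))) == 8
```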

The figure below shows an indexed SWOT track.


For the moment, the technical solution chosen to store the generated index is UnQLite, but we can imagine using RocksDB in the future. The database remains a small file because only elementary information is stored for each GeoHash code generated at the desired accuracy. In our example, we have indexed satellite passes stored in NetCDF or Zarr files. In the case of NetCDF files, we stored, for each GeoHash code, the file concerned, associated with the start and end indexes covering the area corresponding to the GeoHash code. For Zarr files, we stored only the indexes associated with the GeoHash code. The chosen architecture allows storing any value that can be pickled.
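The storage scheme can be sketched with the standard library's `dbm` module as a stand-in for UnQLite (the real backend differs, but the idea is the same): each GeoHash code is a key, and the pickled payload, here an assumed list of (filename, (first, last)) tuples, is the value.

```python
import dbm
import os
import pickle
import tempfile

path = os.path.join(tempfile.mkdtemp(), "geohash_index")

# Write: one entry per GeoHash code; any picklable value can be stored
with dbm.open(path, "c") as db:
    db[b"ezs4"] = pickle.dumps([("pass_001.nc", (0, 1024))])

# Read: look up a code and unpickle its payload
with dbm.open(path, "r") as db:
    entries = pickle.loads(db[b"ezs4"])

assert entries == [("pass_001.nc", (0, 1024))]
```

The filename and index range above are made up for illustration; only the key/value-with-pickle pattern reflects the design described in the post.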

Example of use

import collections
from typing import Dict, List, Tuple

import numpy as np

# `geohash` and `read_positions_satellite` come from the prototype library

def get_geohash_from_file(paths: List[str], precision: int
    ) -> Dict[bytes, List[Tuple[str, Tuple[int, int]]]]:
    """Create an associative dictionary between GeoHash codes and the
    file name and first/last indexes of the data selected for that code."""
    hashs = collections.defaultdict(list)
    for path in paths:
        points = read_positions_satellite(path)
        # Calculate the GeoHash code for each position read
        idents = geohash.string.encode(points, precision)
        # Calculate the indices where consecutive GeoHash codes differ
        indexes = np.where(idents[:-1] != idents[1:])[0] + 1
        # First index of each box; the end of one box is the start of the
        # next, and the last box ends at the end of the array
        first = np.insert(indexes, 0, 0)
        data = zip(first, np.append(indexes, len(idents)))
        # Finally, map each GeoHash code to its file and first/last index
        for code, coordinates in zip(idents[first], data):
            hashs[code].append((path, coordinates))
    return hashs

index = geohash.index.init_geohash(..., mode="w")
data = get_geohash_from_file(path_to_nc_file, index.precision)

In terms of search performance, a query takes a few milliseconds for one bounding box. This performance is directly related to the performance of the chosen storage engine.

    geohash.Box(geohash.Point(-135, 0),
                geohash.Point(-90, 45))
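Under the hood, such a bounding-box query amounts to enumerating the GeoHash codes whose cells tile the box and looking each one up in the key/value store. A hedged sketch (re-implementing standard GeoHash string encoding so the example is self-contained, and assuming a box aligned to the grid):

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def encode(lat: float, lon: float, precision: int) -> str:
    """Standard GeoHash string encoding (bisection of the two ranges)."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code, ch, bits, even = [], 0, 0, True
    while len(code) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:
            code.append(BASE32[ch])
            ch, bits = 0, 0
    return "".join(code)

def covering_codes(lat_min, lat_max, lon_min, lon_max, precision):
    """Set of GeoHash codes whose cells tile a grid-aligned bounding box."""
    bits = 5 * precision
    dlon = 360.0 / (1 << ((bits + 1) // 2))  # cell width in degrees
    dlat = 180.0 / (1 << (bits // 2))        # cell height in degrees
    codes = set()
    lat = lat_min + dlat / 2
    while lat < lat_max:
        lon = lon_min + dlon / 2
        while lon < lon_max:
            codes.add(encode(lat, lon, precision))
            lon += dlon
        lat += dlat
    return codes

# The box (lon -135..-90, lat 0..45) is exactly one cell at precision 1
assert covering_codes(0, 45, -135, -90, 1) == {"9"}
# At precision 2, cells are 11.25 x 5.625 degrees: 4 x 8 = 32 codes
assert len(covering_codes(0, 45, -135, -90, 2)) == 32
```

Each code returned is then a single key lookup, which is why the query cost tracks the storage engine's read performance.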

What do you think of this approach? Good? Bad?


This is a fascinating idea! Thanks so much for sharing. It will take me a while to digest, but I think the concept is great.

This is interesting. I had a look at this a while back after @pbranson introduced me to space-filling curves, and I think adding a GeoHash also aroused great interest at the time. Personally, I think building the caching and key generation in as an xarray accessor would be amazing, allowing fast indexing of unsorted data sets such as Argo floats.

Thanks for posting this! This post was the push I needed to join the community and join the discussion. I have used geohashes for spatial indexes in many applications and generally been very pleased with them. I have generally used them to build indexes for vector data but I could see some use cases for other datatypes as well. I have a few comments and questions based on my experience.


  1. It is likely my ignorance of SWOT data, but can you share a bit more about what your data look like? Do you have a bunch of small files where each one covers a small section of the earth? Or is this a few large files covering large parts of the globe?

  2. What types of operations and workloads are you trying to optimize? For example, are you looking to retrieve a single value for some particular point on earth? Are you looking for all files that correspond to a particular region on earth to perform some sort of large-scale operation? Some other application?


  1. One of the trickiest parts of using geohashes for indexing is deciding what length of geohash you want to use. In my experience, this largely depends on the scale of your data and the queries/operations you want to perform. There generally isn’t a “one size fits all” approach that works everywhere. This often needs to be tuned for each workload and data set. Luckily there aren’t that many options and a little “back of the envelope” calculation will often get you pretty close to a good choice.

  2. Related to my first comment but one minor issue with geohashes is that their size is not uniform. The size of the geohash varies with latitude. This can sometimes be important for very specific applications like bounded nearest neighbor searches. I recently wrote a short blog post about geohash size variation you might find useful or interesting.
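A back-of-envelope helper makes both points concrete (a sketch; the kilometre constant is approximate): the angular cell size is fixed by the precision, while the east-west extent in kilometres shrinks with the cosine of the latitude.

```python
import math

def cell_size_deg(precision: int):
    """Angular size (dlon, dlat) in degrees of one GeoHash cell."""
    bits = 5 * precision
    return 360.0 / (1 << ((bits + 1) // 2)), 180.0 / (1 << (bits // 2))

def cell_size_km(precision: int, latitude: float = 0.0):
    """Approximate (width, height) in km; the width shrinks with latitude."""
    dlon, dlat = cell_size_deg(precision)
    km_per_deg = 111.32  # approximate length of one degree of latitude
    return (dlon * km_per_deg * math.cos(math.radians(latitude)),
            dlat * km_per_deg)

assert cell_size_deg(1) == (45.0, 45.0)
assert cell_size_deg(2) == (11.25, 5.625)
# At 60 degrees latitude a cell is only half as wide as at the equator
assert abs(cell_size_km(2, 60.0)[0] - cell_size_km(2, 0.0)[0] / 2) < 1e-6
```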

In general, I am very much in favor of geohash-based indexing and would encourage exploration in this direction.


The SWOT data disseminated to users will be NetCDF files covering half of the mission orbit (see the figure of the track above). Higher-resolution products will be available, covering only a small geographical area.

These data will probably not be used as-is on the CNES cluster to run studies; a format compatible with Dask, such as Zarr, should be used instead. The technical solution has not yet been decided, but there is a 95% chance that Zarr will be used.

The index will be used to calculate, for example, crossover points for this mission or with other classical altimetry missions. The index will allow us to select exactly what data is common between two tracks in a given spatial/time window. Typically +/- 10 days.
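A toy sketch of that selection (the real index payload differs; the structure below, mapping codes to (filename, time) entries, is an assumption made for illustration): intersect the codes of the two tracks, then keep only the pairs whose times fall within the window.

```python
import datetime as dt

def crossover_codes(index_a, index_b, window=dt.timedelta(days=10)):
    """GeoHash codes sampled by both tracks within +/- `window`."""
    common = {}
    # Only codes present in both indexes can contain a crossover
    for code in index_a.keys() & index_b.keys():
        pairs = [(fa, fb)
                 for fa, ta in index_a[code]
                 for fb, tb in index_b[code]
                 if abs(ta - tb) <= window]
        if pairs:
            common[code] = pairs
    return common

# Hypothetical filenames and dates, for illustration only
track_a = {b"ezs4": [("swot_pass.nc", dt.datetime(2020, 1, 1))]}
track_b = {b"ezs4": [("jason_pass.nc", dt.datetime(2020, 1, 5))],
           b"u000": [("jason_pass.nc", dt.datetime(2020, 1, 5))]}
assert crossover_codes(track_a, track_b) == {
    b"ezs4": [("swot_pass.nc", "jason_pass.nc")]}
```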

But we can use it to co-locate SWOT data with ARGO or other spatial data like S3.

The variable sizes of the GeoHash boxes are not a problem for us at the moment, as we want to use them to divide our search problem. Once the data has been selected, we have to consolidate it in any case before processing it.

Another solution I want to examine is the one implemented at Uber: H3.


It sounds to me like using geohashes to create an index would be a reasonable approach to finding crossover points from other datasets.

We also looked at h3 but eventually settled on using geohashes. H3 has some nice properties that are useful for some applications. For example, I believe the hexagons are uniform in size and don’t get smaller near the poles. The primary reason we chose geohashes was the ability to truncate the hash to have an index at a lower resolution without recomputing the hash. For example, if you have the hash “ezs42”, every point inside that hash also falls inside of “ezs4”.

H3 cells are only “approximately contained within a parent cell”. This means that if you want to change the resolution of your index, you need to recompute the index for every item. The property is less important if you anticipate only using your index at a single resolution.
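The containment property is easy to check with a minimal GeoHash decoder (a self-contained sketch, not any library's code): the bounding box of "ezs42" lies entirely inside the box of its prefix "ezs4".

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def decode_box(code: str):
    """Return ([lat_min, lat_max], [lon_min, lon_max]) for a GeoHash code."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    even = True
    for ch in code:
        idx = BASE32.index(ch)
        # Each character carries 5 bits, alternating longitude/latitude
        for bit in (16, 8, 4, 2, 1):
            rng = lon_rng if even else lat_rng
            mid = (rng[0] + rng[1]) / 2
            if idx & bit:
                rng[0] = mid
            else:
                rng[1] = mid
            even = not even
    return lat_rng, lon_rng

child_lat, child_lon = decode_box("ezs42")
parent_lat, parent_lon = decode_box("ezs4")
# Every point of the child box is inside the parent box
assert parent_lat[0] <= child_lat[0] and child_lat[1] <= parent_lat[1]
assert parent_lon[0] <= child_lon[0] and child_lon[1] <= parent_lon[1]
```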


Thanks for this @fbriol !

I recently used h3 on the nadir altimetric data in order to bin data geographically and extract tidal responses from the topex/poseidon altimetric database (with the help of pytide of course :wink: ). This was part of a TIPE project for three students. The implementation was very smooth.

What library are you actually relying upon for geohash?
I missed this when going through your post.
My 2 cents about this is that the size of the community behind each of these tools should weigh in your ultimate choice.

Also do you plan any interactions between the geohashing and pangeo-pyinterp?


The goal is to have a geographical index on a set of files easily used with Dask or another HPC calculation solution.

Creating the index is part of the problem. The other problem is storing this index.

The purpose of this prototype study is to explore these two issues. The GeoHash codes are easy to generate, and it's swift. Storage in a key/value database is the right solution. Concurrent updating is difficult for now, but I think this is not a problem. So far, I have tried two storage engines: UnQLite and LevelDB. LevelDB has too many limitations on access to be usable. Facebook has developed RocksDB to fix these and other problems. For now, UnQLite is the right solution; RocksDB, maybe, for the future.

Regarding indexes, H3 is maybe an alternative that can easily be implemented alongside GeoHash codes in a library. I'll try to do some tests on this library next week.

Is there a relationship between the pyinterp library and a GeoHash library? I’m not sure.

Also, I don't really know the communities around these technologies.

After some thought, I wonder if I shouldn't integrate the geohash into the pyinterp library. In the end, indexing the data is fine, but then you have to process it, in most cases to select the closest points. In pyinterp, I have the RTree structure, which allows me to do this. What's more, that makes only one library to maintain. What do you think about it?

I think it does make sense @fbriol.

My ideas are not so clear on pyinterp however: does it support chunked xarray DataArrays at the moment?

Yes, you can build interpolators from xarray, but the arrays cannot be chunked because they must be contiguous in memory.

I am under the impression there may then be a double motivation for adding geohash to pyinterp for workloads with large data arrays.
Couldn't you indeed group data geographically with geohash and leverage the groups to distribute an interpolation workload?

Yes. Also, in most of our use cases, we will search for the closest points to a given position to perform an interpolation.
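A minimal sketch of that two-stage search, under the assumption that points are pre-grouped by GeoHash code (the `groups` structure and candidate codes below are illustrative stand-ins; in practice the candidates would be the query point's cell plus its neighbors, and the fine search would use pyinterp's RTree rather than brute force):

```python
import math

def nearest_in_cells(groups, candidate_codes, lon, lat):
    """Brute-force nearest sample among pre-selected GeoHash cells.

    `groups` maps a GeoHash code to a list of (lon, lat, value) samples."""
    best, best_d2 = None, math.inf
    for code in candidate_codes:
        for x, y, value in groups.get(code, ()):
            # Squared planar distance: good enough to rank nearby points
            d2 = (x - lon) ** 2 + (y - lat) ** 2
            if d2 < best_d2:
                best, best_d2 = (x, y, value), d2
    return best

groups = {b"s": [(10.0, 10.0, 1.0), (30.0, 30.0, 2.0)],
          b"t": [(50.0, 10.0, 3.0)]}
assert nearest_in_cells(groups, [b"s", b"t"], 48.0, 12.0) == (50.0, 10.0, 3.0)
```

The point of the geohash stage is that only a handful of cells, not the whole dataset, reach the nearest-neighbor search.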