@scottyhq and @rabernat, the altimetry example works now. Thanks very much. Pangeo is really fantastic and useful.
Now I am trying to run the LLC4320 example in Pangeo, which is a much larger dataset, so Dask is needed.
The code before the Launch Dask Cluster section runs fine without errors. However, when I run:
from dask_kubernetes import KubeCluster
from dask.distributed import Client
cluster = KubeCluster()
cluster.adapt(minimum=1, maximum=20)
client = Client(cluster)
cluster
I get the following error message:
---------------------------------------------------------------------------
ValueError: Worker pod specification not provided. See KubeCluster docstring for ways to specify workers
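From the docstring it sounds like KubeCluster() with no arguments only works when a worker pod template is already provided through the Dask configuration (which I assumed the Pangeo image sets up). Maybe checking the config would show whether a template is actually there; just a quick sketch, assuming the classic dask-kubernetes config keys:
import dask

# Classic dask-kubernetes reads its worker template from the 'kubernetes'
# section of the Dask config; empty values here would explain the ValueError.
print(dask.config.get('kubernetes.worker-template', None))
print(dask.config.get('kubernetes.worker-template-path', None))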
After googling this error, I ran:
from dask_kubernetes import KubeCluster, make_pod_spec
pod_spec = make_pod_spec(image='daskdev/dask:latest',
                         memory_limit='4G', memory_request='4G',
                         cpu_limit=1, cpu_request=1)
cluster = KubeCluster(pod_spec)
cluster.scale(10) # specify number of workers explicitly
cluster.adapt(minimum=1, maximum=100) # or dynamically scale based on current workload
But this does not seem to be the solution, and another error is raised:
---
ClientConnectorError: Cannot connect to host 10.12.0.1:443 ssl:default [Connect call failed ('10.12.0.1', 443)]
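10.12.0.1:443 looks like the in-cluster Kubernetes API service address, so it seems the notebook pod cannot talk to the Kubernetes API at all. Maybe a quick connectivity check would help narrow it down; this is only a diagnostic sketch using the standard in-cluster DNS name for the API server:
import requests

# The API server is normally reachable from inside the cluster at this
# well-known service name; a ConnectionError here would confirm the pod
# has no network route to the Kubernetes API.
try:
    requests.get('https://kubernetes.default.svc', verify=False, timeout=5)
    print('Kubernetes API is reachable')
except requests.exceptions.ConnectionError as err:
    print('Cannot reach the Kubernetes API:', err)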
I suspect this problem is not on the Pangeo side but on the Dask side. Forgive me, I am also a beginner with Dask and KubeCluster. Perhaps the veterans here know how to fix this quickly.