
dask.distributed cluster on HLRN Berlin

I can't start a dask.distributed cluster on HLRN Berlin.

Start cluster

from dask.distributed import Client
from dask_jobqueue import SLURMCluster


def start_cluster(queue='large96:shared', walltime="00:15:00",
                  cores=1, processes=1, mem="10GB"):
    cluster = SLURMCluster(
        queue=queue,
        walltime=walltime,
        cores=cores,
        processes=processes,
        memory=mem,
        interface="ib0",            # bind workers to the InfiniBand interface
        local_directory="/tmp/",
        job_extra=['-A shkpwagn'],  # account to charge the SLURM job to
    )
    client = Client(cluster)        # connect a client to the cluster's scheduler
    cluster.scale(1)                # request one worker job
    return cluster, client
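
For context, this is how I invoke it (a minimal call using the defaults above; `job_script()` is just to inspect the generated SLURM batch script):

cluster, client = start_cluster()
print(cluster.job_script())  # show the sbatch script that will be submitted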

This gives the following error message:

Error

distributed.nanny - INFO -         Start Nanny at: 'tcp://10.246.8.3:41045'
distributed.worker - INFO -       Start worker at:     tcp://10.246.8.3:45101
distributed.worker - INFO -          Listening to:     tcp://10.246.8.3:45101
distributed.worker - INFO -          dashboard at:           10.246.8.3:34645
distributed.worker - INFO - Waiting to connect to:   tcp://10.246.101.3:34147
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                         20
distributed.worker - INFO -                Memory:                  100.00 GB
distributed.worker - INFO -       Local Directory:       /tmp/worker-j4jz3vv6
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:   tcp://10.246.101.3:34147
distributed.worker - INFO - Waiting to connect to:   tcp://10.246.101.3:34147
distributed.worker - INFO - Waiting to connect to:   tcp://10.246.101.3:34147
distributed.worker - INFO - Waiting to connect to:   tcp://10.246.101.3:34147
distributed.worker - INFO - Waiting to connect to:   tcp://10.246.101.3:34147
distributed.nanny - INFO - Closing Nanny at 'tcp://10.246.8.3:41045'
distributed.worker - INFO - Stopping worker at tcp://10.246.8.3:45101
distributed.worker - INFO - Closed worker has not yet started: None
distributed.dask_worker - INFO - End worker
Traceback (most recent call last):
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/distributed/node.py", line 173, in wait_for
    await asyncio.wait_for(future, timeout=timeout)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/asyncio/tasks.py", line 449, in wait_for
    raise futures.TimeoutError()
concurrent.futures._base.TimeoutError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/distributed/cli/dask_worker.py", line 440, in <module>
    go()
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/distributed/cli/dask_worker.py", line 436, in go
    main()
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/distributed/cli/dask_worker.py", line 422, in main
    loop.run_sync(run)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/tornado/ioloop.py", line 532, in run_sync
    return future_cell[0].result()
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/distributed/cli/dask_worker.py", line 416, in run
    await asyncio.gather(*nannies)
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/asyncio/tasks.py", line 630, in _wrap_awaitable
    return (yield from awaitable.__await__())
  File "/home/shkpwagn/miniconda3/envs/py3_std/lib/python3.7/site-packages/distributed/node.py", line 178, in wait_for
    type(self).__name__, timeout
concurrent.futures._base.TimeoutError: Nanny failed to start in 60 seconds
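
Reading the log, the worker binds to ib0 on 10.246.8.3 but repeatedly waits for the scheduler at tcp://10.246.101.3:34147 and gives up when the nanny hits its 60-second startup timeout, so it looks as if the compute node never reaches the scheduler's address. A quick reachability check one could run from a compute node (a minimal sketch; the address and port are copied from the log above and change on every run):

import socket

# Scheduler address/port taken from the log above (changes per run).
SCHEDULER = ("10.246.101.3", 34147)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect(SCHEDULER)
    print("scheduler port is reachable")
except OSError as exc:
    print(f"cannot reach scheduler: {exc}")
finally:
    sock.close()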

I have never tried to start a cluster in Berlin before, but the recipe above works in Goettingen.
Has anyone encountered this issue, or managed to start a cluster in Berlin?