Workflow fails when Dask worker memory budget is exceeded #1246
My machine learning workflow fails during the preprocessing stage. Having started the covalent server with
It seems like the issue is that the Dask worker memory budget is being exceeded. I have two main questions:
Answered by venkatBala, Sep 15, 2022
Answer selected by FyzHsn
The Dask dashboard will help get more details on the cluster workers. You will, however, need to install `bokeh`. The following steps would be helpful:

1. `pip install bokeh`
2. Navigate to `localhost:8787` in a browser (in the Workers tab you can inspect how much memory each Dask worker is running with).
3. Query the cluster via the `covalent cluster` CLI, e.g. `covalent cluster --info`.

[UI screenshots attached]
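The worker-memory inspection described above can also be done programmatically with `dask.distributed`. Here is a minimal sketch using a standalone local cluster rather than the one Covalent manages; the worker count and the `1GiB` limit are illustrative assumptions:

```python
from dask.distributed import Client, LocalCluster

# Stand-in local cluster; memory_limit caps each worker's memory budget.
# (Covalent runs its own Dask cluster -- this one is only for illustration.)
cluster = LocalCluster(
    n_workers=2,
    processes=False,          # in-process workers keep the example lightweight
    memory_limit="1GiB",
    dashboard_address=None,   # dashboard disabled here; enable it to browse :8787
)
client = Client(cluster)

# scheduler_info() reports each worker's configured memory limit in bytes,
# the same figure shown in the dashboard's Workers tab.
for addr, info in client.scheduler_info()["workers"].items():
    print(addr, info["memory_limit"])

client.close()
cluster.close()
```

If a task's memory use approaches this limit, Dask starts spilling and can eventually kill the worker, which surfaces as the workflow failure described here; raising the per-worker memory budget (or reducing per-task memory use) is the usual remedy.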