MPI-Operator run example failed #598
Comments
Does the example without GPUs work fine? https://github.com/kubeflow/mpi-operator/blob/master/examples/v2beta1/pi/pi.yaml
Alternatively, did you install the nvidia drivers on the nodes?
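For the driver question, a quick check is a one-off pod that requests a GPU and runs nvidia-smi; if it schedules and prints the driver/GPU table, the drivers and the NVIDIA device plugin are working on that node. A minimal sketch, assuming the device plugin is installed; the pod name and CUDA base image tag are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: cuda-check
    # any CUDA base image that ships nvidia-smi works; this tag is an example
    image: nvidia/cuda:11.8.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1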
https://github.com/kubeflow/mpi-operator/blob/master/examples/v2beta1/pi/pi.yaml: this example runs successfully.
I have installed the drivers and tested them; the GPU works normally. The first point I mentioned is that the job cannot use GPUs across multiple nodes: with replicas: 2 the pods cannot start, while with replicas: 1 the pod starts and runs normally.
Uhm... interesting. It sounds like a networking problem that you need to work out with your provider, though. I don't think it's related to mpi-operator.
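If it does turn out to be networking at the NCCL layer, one common experiment is to pin NCCL to the pod network interface through the mpirun environment flags in the Launcher command of the MPIJob quoted below. A sketch of just that command; the interface name eth0 is an assumption and depends on your CNI:
            command:
            - mpirun
            - --allow-run-as-root
            - -np
            - "2"
            - -x
            - NCCL_DEBUG=INFO
            # assumption: the pods expose the cluster network as eth0; adjust for your CNI
            - -x
            - NCCL_SOCKET_IFNAME=eth0
            - python
            - scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py
            - --model=resnet101
            - --batch_size=64
            - --variable_update=horovod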
@q443048756 It is probably the CUDA build in the default image.
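If the image's CUDA/TensorFlow build is the problem, one workaround is to rebuild the benchmarks image against a CUDA version that supports the nodes' GPU architecture and point both replicas at it. A sketch of the relevant part of the spec; my-registry/tensorflow-benchmarks:cuda11 is a hypothetical placeholder, not a published tag:
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - name: tensorflow-benchmarks
            # hypothetical rebuilt image whose CUDA/TF build supports the node GPUs
            image: my-registry/tensorflow-benchmarks:cuda11
    Worker:
      replicas: 2
      template:
        spec:
          containers:
          - name: tensorflow-benchmarks
            image: my-registry/tensorflow-benchmarks:cuda11
            resources:
              limits:
                nvidia.com/gpu: 1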
That sounds reasonable. Thank you for the help.
It is the same failure; maybe it is time to update the example.
I set up mpi-operator v0.4.0
and tried to deploy the example:
mpi-operator-0.4.0/examples/v2beta1/tensorflow-benchmarks/tensorflow-benchmarks.yaml
My Kubernetes cluster has three nodes, each with an RTX 3060 graphics card,
but it does not seem to run correctly:
1. With the default configuration, no pods start, so the job appears to fail.
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: tensorflow-benchmarks
spec:
  slotsPerWorker: 1
  runPolicy:
    cleanPodPolicy: Running
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - image: mpioperator/tensorflow-benchmarks:latest
            name: tensorflow-benchmarks
            command:
            - mpirun
            - --allow-run-as-root
            - -np
            - "2"
            - -bind-to
            - none
            - -map-by
            - slot
            - -x
            - NCCL_DEBUG=INFO
            - -x
            - LD_LIBRARY_PATH
            - -x
            - PATH
            - -mca
            - pml
            - ob1
            - -mca
            - btl
            - ^openib
            - python
            - scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py
            - --model=resnet101
            - --batch_size=64
            - --variable_update=horovod
    Worker:
      replicas: 2
      template:
        spec:
          containers:
          - image: mpioperator/tensorflow-benchmarks:latest
            name: tensorflow-benchmarks
            resources:
              limits:
                nvidia.com/gpu: 1
2. With replicas: 1, the pods start normally, so I suspect the job cannot use GPUs across nodes.
apiVersion: kubeflow.org/v2beta1
kind: MPIJob
metadata:
  name: tensorflow-benchmarks
spec:
  slotsPerWorker: 1
  runPolicy:
    cleanPodPolicy: Running
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - image: mpioperator/tensorflow-benchmarks:latest
            name: tensorflow-benchmarks
            command:
            - mpirun
            - --allow-run-as-root
            - -np
            - "1"
            - -bind-to
            - none
            - -map-by
            - slot
            - -x
            - NCCL_DEBUG=INFO
            - -x
            - LD_LIBRARY_PATH
            - -x
            - PATH
            - -mca
            - pml
            - ob1
            - -mca
            - btl
            - ^openib
            - python
            - scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py
            - --model=resnet101
            - --batch_size=64
            - --variable_update=horovod
    Worker:
      replicas: 1
      template:
        spec:
          containers:
          - image: mpioperator/tensorflow-benchmarks:latest
            name: tensorflow-benchmarks
            resources:
              limits:
                nvidia.com/gpu: 1
3. After the pods start, the launcher reports the following error:
2023-10-25 09:53:08.464568: E tensorflow/c/c_api.cc:2184] Internal: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid
Traceback (most recent call last):
  File "scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py", line 73, in <module>
    app.run(main)  # Raises error on invalid flags, unlike tf.app.run()
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "scripts/tf_cnn_benchmarks/tf_cnn_benchmarks.py", line 61, in main
    params = benchmark_cnn.setup(params)
  File "/tensorflow/benchmarks/scripts/tf_cnn_benchmarks/benchmark_cnn.py", line 3538, in setup
    with tf.Session(config=create_config_proto(params)) as sess:
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 1586, in __init__
    super(Session, self).__init__(target, graph, config=config)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py", line 701, in __init__
    self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[12892,1],0]
Exit code: 1
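For reference, "device kernel image is invalid" usually means the CUDA kernels compiled into the TensorFlow build do not cover the GPU's compute capability (the RTX 3060 is sm_86), so the failure can typically be reproduced on a single node without mpirun. A sketch of a one-off pod that exercises the same image directly; the pod name and the inline check are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: tf-gpu-check               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: tf-check
    image: mpioperator/tensorflow-benchmarks:latest
    command:
    - python
    - -c
    # prints True only if the bundled TensorFlow can initialize the GPU
    - "import tensorflow as tf; print(tf.test.is_gpu_available())"
    resources:
      limits:
        nvidia.com/gpu: 1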