To minimize the number of questions the CRUFT template requires to be answered, we should:
To figure out whether to use `docker-compose` or `docker compose`, run a subprocess check (inside a try-except-finally) for `docker compose` and, only if that first check fails, a `which docker-compose` check. The checks must run in exactly that order, because the standalone `docker-compose` binary is being phased out by Docker in favor of the `compose` CLI plugin.
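A minimal sketch of how that probe might look in Python, assuming `docker compose version` is used to test for the CLI plugin (since `which` only resolves standalone executables, the plugin has to be invoked directly); the function name `detect_compose_command` is illustrative:

```python
import subprocess


def detect_compose_command() -> list[str]:
    """Return the Compose invocation to prefer: the `docker compose` plugin, else `docker-compose`."""
    try:
        # Probe the CLI plugin by invoking it directly; `which` can only
        # resolve standalone executables, not the `compose` subcommand.
        subprocess.run(["docker", "compose", "version"], check=True, capture_output=True)
        return ["docker", "compose"]
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Only if the plugin probe fails, look for the legacy standalone binary.
        result = subprocess.run(["which", "docker-compose"], capture_output=True)
        if result.returncode == 0:
            return ["docker-compose"]
        raise RuntimeError("Neither `docker compose` nor `docker-compose` was found")
    finally:
        # Per the try-except-finally structure mentioned above: any logging or
        # cleanup of the probe would go here.
        pass
```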
The question about the CPU or GPU version of TensorFlow and PyTorch makes sense ONLY if a GPU is present in the system. It is reasonable to assume a CPU is always present, unless someone is running an abacus or a Flintstone-PC, which Dioptra most likely doesn't support 😊. Checking for an NVIDIA GPU without TensorFlow or PyTorch installed works via subprocess, as described here. HB and DC verified that the method works against a GTX 1080 on Windows, an RTX 2080 Ti on Ubuntu 24.04 Linux, and a mobile Quadro T2000 with Max-Q.
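The linked method is not reproduced here, but a common subprocess-based check of this kind calls `nvidia-smi`, which ships with the NVIDIA driver rather than with any ML framework; a minimal sketch, with the function name `has_nvidia_gpu` being illustrative:

```python
import subprocess


def has_nvidia_gpu() -> bool:
    """Return True if an NVIDIA GPU is visible to the driver."""
    try:
        # `nvidia-smi` ships with the NVIDIA driver, so it is available even
        # when neither TensorFlow nor PyTorch is installed.
        result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    except FileNotFoundError:
        # No NVIDIA driver (and hence no nvidia-smi) on this machine.
        return False
    return result.returncode == 0 and "GPU" in result.stdout
```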
The AMD GPU check should work for both Radeon and Instinct (MI) cards (please verify this if you have access to an AMD Radeon and, especially, an Instinct):
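A sketch of what such a check might look like on Linux, assuming `rocm-smi` (present on ROCm-enabled Radeon and Instinct/MI setups) with an `lspci` fallback; both command names and the fallback heuristic are assumptions and have not been verified against Dioptra's intended check:

```python
import shutil
import subprocess


def has_amd_gpu() -> bool:
    """Return True if an AMD GPU (Radeon or Instinct/MI) is visible to the system."""
    # ROCm-capable systems (Instinct/MI and recent Radeon) ship `rocm-smi`.
    if shutil.which("rocm-smi") is not None:
        result = subprocess.run(["rocm-smi"], capture_output=True, text=True)
        if result.returncode == 0:
            return True
    # Fallback for Linux hosts without ROCm: look for AMD/ATI display devices in lspci.
    try:
        result = subprocess.run(["lspci"], capture_output=True, text=True)
    except FileNotFoundError:
        return False
    return any(
        any(cls in line for cls in ("VGA", "Display", "3D"))
        and any(vendor in line for vendor in ("AMD", "ATI", "Radeon"))
        for line in result.stdout.splitlines()
    )
```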