Compute Requirements
Welcome to the ibm-skills-ai-colab-sessions wiki!
Purpose: To document the hardware, architectural, and technical requirements for running Jupyter Notebooks locally and remotely, with a view to defining, standing up, and executing Machine Learning models, and to identifying the constraints/limitations involved.
The preferred remote compute platform is Google Research's Colaboratory (Colab): https://colab.research.google.com/.
To run the notebooks in this repository with zero configuration, go to Quick Start.
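For a quick sanity check of what a Colab runtime actually provides, a cell like the following can be run. This is a minimal sketch: it assumes a standard Colab Python runtime (where `psutil` is pre-installed), and `nvidia-smi` is only present when a GPU runtime has been selected.

```python
# Report the CPU, RAM, disk, and GPU available to the current Colab runtime.
import os
import shutil
import subprocess

import psutil  # pre-installed on Colab

print("CPU cores :", os.cpu_count())
print("RAM (GB)  :", round(psutil.virtual_memory().total / 1e9, 1))
print("Disk (GB) :", round(shutil.disk_usage("/").free / 1e9, 1))

try:
    # nvidia-smi exists only on GPU runtimes (Runtime > Change runtime type)
    gpu = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout.strip()
except FileNotFoundError:
    gpu = ""
print("GPU       :", gpu or "none allocated (CPU runtime)")
```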
Local hardware requirements for Machine Learning Frameworks
- scikit-learn
- Hugging Face's Accelerate & Transformers
- TensorFlow & Keras.io
- PyTorch
Minimum Requirements:

Framework/Library | CPU | RAM | Storage | GPU | Python Version |
---|---|---|---|---|---|
scikit-learn | 1 GHz dual-core | 4 GB | 5 GB | Not required | 3.8+ |
Hugging Face (Accelerate & Transformers) | 2 GHz quad-core | 8 GB | 10 GB | Not required | 3.7+ |
TensorFlow/Keras | 1.5 GHz dual-core | 4 GB | 8 GB | Optional | 3.8-3.11 |
PyTorch | 1.5 GHz dual-core | 4 GB | 5 GB | Not required | 3.8+ |

Optimum Requirements:

Framework/Library | CPU | RAM | Storage | GPU | Python Version |
---|---|---|---|---|---|
scikit-learn | 2.5 GHz quad-core+ | 16 GB+ | 20 GB+ SSD | Not required | 3.8+ |
Hugging Face (Accelerate & Transformers) | 3.5 GHz octa-core+ | 32 GB+ | 50 GB+ SSD | NVIDIA, 8 GB+ VRAM | 3.7+ |
TensorFlow/Keras | 3.5 GHz quad-core+ | 16 GB+ | 50 GB+ SSD | NVIDIA, 4 GB+ VRAM | 3.8-3.11 |
PyTorch | 3.5 GHz quad-core+ | 16 GB+ | 20 GB+ SSD | NVIDIA, 4 GB+ VRAM | 3.8+ |
Notes:
- Requirements may vary based on the size and complexity of models and datasets.
- GPU requirements are generally for NVIDIA GPUs with CUDA support.
- macOS has limited GPU support for PyTorch and TensorFlow.
- For large models or datasets, more RAM and GPU memory will significantly improve performance.
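As a rough local check against the minimum figures in the tables above, a short script such as the sketch below reports the machine's Python version, CPU cores, RAM, and free disk space. It assumes the third-party `psutil` package is installed (e.g. `pip install psutil`); the thresholds in the comments are the table values, not hard limits.

```python
# Compare this machine against the minimum figures in the tables above.
import os
import shutil
import sys

import psutil  # third-party; install with `pip install psutil` if missing

ram_gb = psutil.virtual_memory().total / 1e9
disk_gb = shutil.disk_usage(os.path.expanduser("~")).free / 1e9

print(f"Python    : {sys.version.split()[0]}  (3.8+ needed by most frameworks here)")
print(f"CPU cores : {os.cpu_count()}  (dual-core minimum, quad-core+ preferred)")
print(f"RAM       : {ram_gb:.1f} GB  (4-8 GB minimum, 16-32 GB+ preferred)")
print(f"Free disk : {disk_gb:.1f} GB  (5-10 GB minimum, 20-50 GB+ SSD preferred)")
```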
Here's a concise list of the minimum and optimum hardware requirements to run scikit-learn using Jupyter Notebook on Mac, Linux, and Windows:
Minimum Requirements (all platforms):
- CPU: 1 GHz dual-core processor
- RAM: 4 GB
- Storage: 5 GB free space
- Python 3.8 or higher
Optimum Requirements (all platforms)<sup>1</sup>:
- CPU: 2.5 GHz quad-core processor or better<sup>2</sup>
- RAM: 16 GB or more<sup>2</sup>
- GPU: Not required<sup>3</sup>
- Storage: 20 GB or more SSD
- Python 3.8 or higher
Notes:
- <sup>1</sup> These requirements are general guidelines and may vary depending on the size and complexity of your datasets and models.
- <sup>2</sup> For large datasets or complex models, more RAM and a faster CPU will significantly improve performance.
- <sup>3</sup> GPU acceleration is not natively supported by scikit-learn, so a dedicated GPU is not necessary unless you're using other libraries that can utilize it.
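A minimal CPU-only smoke test, assuming scikit-learn is already installed (`pip install scikit-learn`), confirms the environment can train a small model within the minimum footprint above:

```python
# Train a tiny CPU-only model to confirm the scikit-learn environment works.
import sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

print("scikit-learn version:", sklearn.__version__)

# Small synthetic dataset: comfortably within the 4 GB RAM minimum above.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=200).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```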
A concise list of the minimum and optimum hardware requirements to run Hugging Face Accelerate and Transformers:
Minimum Requirements:
- CPU: 2 GHz quad-core processor
- RAM: 8 GB
- Storage: 10 GB free space
- Python 3.7 or higher
Optimum Requirements:
- CPU: 3.5 GHz octa-core processor or better
- RAM: 32 GB or more<sup>5</sup>
- GPU: NVIDIA GPU with at least 8 GB VRAM (e.g., RTX 2070 or better)<sup>5, 6, 7</sup>
- Storage: 50 GB or more SSD
- Python 3.7 or higher
Notes:
- <sup>4</sup> These requirements can vary significantly depending on the size and complexity of the models you're working with.
- <sup>5</sup> For large language models (e.g., GPT-3, T5-large), more RAM and GPU memory are crucial.
- <sup>6</sup> While it's possible to run Transformers on CPU, a CUDA-capable NVIDIA GPU is highly recommended for reasonable performance, especially for training.
- <sup>7</sup> The Accelerate library is designed to help run models on various hardware setups, including multi-GPU and TPU configurations.
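As a sketch of notes 6 and 7 in practice, the snippet below (assuming `pip install accelerate torch`) shows Accelerate detecting whichever device is present and, when an NVIDIA GPU is visible, how much VRAM it offers against the 8 GB+ guideline above:

```python
# Let Accelerate pick the best available device (CUDA GPU, Apple MPS, or CPU).
import torch
from accelerate import Accelerator

accelerator = Accelerator()
print("Accelerate will place models on:", accelerator.device)

# If an NVIDIA GPU is visible, report its VRAM.
if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    print(f"GPU VRAM: {vram_gb:.1f} GB")
```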
A list of the minimum and optimum hardware requirements to run TensorFlow, TensorFlow Keras (version 2.12), and Keras.io v3:
Minimum Requirements:
- CPU: 1.5 GHz dual-core processor
- RAM: 4 GB
- Storage: 8 GB free space
- Python 3.8-3.11 (for TensorFlow 2.12)
- GPU: Optional, but recommended for better performance
Optimum Requirements:
- CPU: 3.5 GHz quad-core processor or better
- RAM: 16 GB or more<sup>13</sup>
- GPU: NVIDIA GPU with at least 4 GB VRAM (e.g., GTX 1060 or better)<sup>9, 10, 13</sup>
- Storage: 50 GB or more SSD
- Python 3.8-3.11 (for TensorFlow 2.12)
Notes:
- <sup>8</sup> These requirements can vary based on the size and complexity of your models and datasets.
- <sup>9</sup> For deep learning tasks, a CUDA-capable NVIDIA GPU is highly recommended for significantly faster training and inference.
- <sup>10</sup> TensorFlow 2.12 supports CUDA 11.2 and cuDNN 8.1 or higher.
- <sup>12</sup> Keras.io v3 is a multi-backend Keras that can work with TensorFlow, JAX, or PyTorch as the backend. The requirements might slightly vary depending on which backend you choose.
- <sup>13</sup> For large models or datasets, more RAM and GPU memory will be beneficial.
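To confirm which devices TensorFlow can actually use, a short check like the one below is usually enough. It assumes TensorFlow 2.12 is installed (e.g. `pip install tensorflow==2.12.*`); an empty GPU list means TensorFlow will fall back to CPU.

```python
# Report the TensorFlow version, CUDA build status, and any visible GPUs.
import tensorflow as tf

print("TensorFlow version :", tf.__version__)
print("Built with CUDA    :", tf.test.is_built_with_cuda())
print("GPUs visible       :", tf.config.list_physical_devices("GPU") or "none (CPU only)")
```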
A list of the minimum and optimum hardware requirements to run PyTorch for macOS, Linux, and Windows:
Minimum Requirements (all platforms):
- CPU: 1.5 GHz dual-core processor
- RAM: 4 GB
- Storage: 5 GB free space
- Python 3.8 or higher
Optimum Requirements (all platforms):
- CPU: 3.5 GHz quad-core processor or better
- RAM: 16 GB or more<sup>16</sup>
- GPU: NVIDIA GPU with at least 4 GB VRAM (e.g., GTX 1060 or better)<sup>14, 16</sup>
- Storage: 20 GB or more SSD
- Python 3.8 or higher
macOS:
- GPU support is limited. Apple Silicon (M1/M2) Macs can use MPS (Metal Performance Shaders) backend for GPU acceleration.
- Intel Macs don't have native GPU support for PyTorch.
Linux:
- Best GPU support, especially for NVIDIA GPUs with CUDA.
- Some support for AMD GPUs through ROCm, but more limited than NVIDIA.
Windows:
- Good support for NVIDIA GPUs with CUDA.
- No official support for AMD GPUs.
Notes:
- <sup>14</sup> GPU is optional but highly recommended for deep learning tasks.
- <sup>15</sup> Requirements may vary based on the size and complexity of your models and datasets.
- <sup>16</sup> For large models or datasets, more RAM and GPU memory will significantly improve performance.
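The per-platform notes above boil down to a simple device-selection pattern in code. The sketch below assumes PyTorch 1.12 or later (when the MPS backend for Apple Silicon was introduced); on machines without CUDA or MPS it simply falls back to CPU.

```python
# Pick the best available PyTorch device: CUDA (NVIDIA), MPS (Apple Silicon), or CPU.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Using CUDA GPU:", torch.cuda.get_device_name(0))
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # Metal Performance Shaders on Apple Silicon
    print("Using Apple MPS backend")
else:
    device = torch.device("cpu")
    print("Using CPU")

# Tensors and models are moved to the chosen device the same way on every platform.
x = torch.randn(2, 3).to(device)
print("Sample tensor lives on:", x.device)
```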
Source:
- Anthropic Claude 3.5 Sonnet (2024, free tier), "Hardware Requirements for (X Library) on Mac, Linux, Windows", last accessed July 2024: https://claude.ai/chat/34691fca-04a4-499d-8871-fab59dbef0ab
- The stats and requirements above were AI-generated for efficiency: searching and pulling the information, then summarising and formatting it.
- If you find any issues, mistakes, or hallucinations, please open a [ticket].
- These stats will be manually checked on a best-effort basis.