Supercomputing for Artificial Intelligence: Foundations, Architectures, and Scaling Deep Learning Workloads
- The foundations of supercomputing and its role in AI workloads
- Practical GPU programming with CUDA and distributed systems
- Parallel programming with MPI on modern clusters
- Efficient training of neural networks, CNNs, and Transformers
- Performance optimization for deep learning at scale
- Distributed training with PyTorch DistributedDataParallel (DDP)
- Building and scaling LLMs using real biomedical and NLP datasets
- Jupyter, Google Colab, and Hugging Face workflows
- Deployment and inference strategies for modern LLMs
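The distributed-training topics above center on gradient synchronization across workers. As a minimal illustration of the idea behind PyTorch DistributedDataParallel, here is a pure-Python sketch (an assumption for clarity, not the PyTorch API): each worker computes a gradient on its own data shard, the gradients are averaged (the all-reduce step), and every replica applies the identical update, keeping weights in sync.

```python
# Pure-Python stand-in for the gradient averaging at the heart of DDP.
# All function and variable names here are illustrative, not PyTorch APIs.

def local_gradient(w, shard):
    # Gradient of mean squared error 0.5*(w*x - y)^2 over one worker's shard.
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    # Stand-in for the collective all-reduce: average across workers.
    return sum(values) / len(values)

def ddp_step(w, shards, lr=0.1):
    grads = [local_gradient(w, s) for s in shards]  # computed in parallel in real DDP
    g = allreduce_mean(grads)                       # synchronize gradients
    return w - lr * g                               # identical update on every replica

# Two workers, each holding a shard of data drawn from y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = ddp_step(w, shards)
print(round(w, 3))  # converges toward 2.0
```

Because every replica sees the same averaged gradient, the model stays bit-identical across workers; real DDP overlaps the all-reduce with the backward pass for efficiency.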
- Instructors looking for practical material for AI and HPC courses
- Students and professionals wanting to learn how to run AI at scale
- Engineers transitioning from standard AI workflows to distributed environments who need system-level judgment
- Researchers working on LLMs and interested in reproducible pipelines
- 800+ pages of real-world content tested in supercomputing classrooms
- Hands-on examples with PyTorch, CUDA, MPI, and SLURM
- Full GitHub access with ready-to-run scripts and datasets
- Workflows adapted for Google Colab and HPC clusters
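Since the feature list pairs SLURM with PyTorch, a typical multi-node launch looks roughly like the following job-script sketch. Resource counts, the job name, and `train.py` are assumptions; adjust to the cluster at hand.

```shell
#!/bin/bash
#SBATCH --job-name=ddp-train        # hypothetical job name
#SBATCH --nodes=2                   # two nodes, 4 GPUs each (assumed)
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-node=1         # one launcher task per node
#SBATCH --cpus-per-task=16
#SBATCH --time=02:00:00

# One torchrun per node; torchrun spawns one worker process per GPU.
# The first node in the allocation serves as the rendezvous endpoint.
MASTER=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n1)

srun torchrun \
    --nnodes="$SLURM_NNODES" \
    --nproc_per_node=4 \
    --rdzv_backend=c10d \
    --rdzv_endpoint="$MASTER:29500" \
    train.py
```

The `srun`/`torchrun` split keeps SLURM responsible for node placement while torchrun handles per-GPU process spawning and rank assignment.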