OpenShift AI Platform Guide: Platform Engineering, GPUs, and Air-Gapped Clusters with OpenShift
Format: Kindle
Availability: In stock
Weight: 0.84 kg
Condition: New
Seller: Amazon
Ships from: USA
OpenShift AI Platform Guide is a practical handbook for platform engineers who need to turn OpenShift into a real internal AI platform, not “just a Kubernetes cluster.” Starting from the CNCF platform engineering whitepaper, the book shows how to apply those ideas on OpenShift: treating the platform as a product, reducing cognitive load for app teams, and building opinionated “golden paths” instead of one-off snowflakes.

From there, you’ll walk through end-to-end, production-grade scenarios:

- Installing OpenShift 4.20 in fully air-gapped environments with a local Quay registry
- Configuring cluster-wide proxies, NFS storage, and disconnected OperatorHub catalogs
- Deploying and managing key operators like Node Feature Discovery and the NVIDIA GPU Operator
- Enabling InfiniBand and RDMA networking with SR-IOV and the NVIDIA Network Operator
- Integrating observability with DCGM, Prometheus, and Grafana for GPU-aware monitoring
- Using GitOps (OpenShift GitOps / Argo CD + GitLab) for declarative, auditable platform config
- Running LLM performance benchmarks as code with Hugging Face’s Inference-Benchmarker and visualizing results with a Gradio dashboard

The guide is written in a “do this, then this” style, with YAML examples, command snippets, and explanations of why each piece matters for a modern AI platform. If you are a platform engineer, SRE, or infrastructure-minded ML practitioner responsible for OpenShift-based GPU clusters, especially in regulated or disconnected environments, this book gives you a concrete, repeatable blueprint.
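To give a flavor of the “do this, then this” YAML style described above, here is a minimal sketch of an OLM Subscription that installs the certified NVIDIA GPU Operator on OpenShift. This example is not taken from the book; the channel value varies by operator release, so check the catalog for your cluster before applying it:

```yaml
# Subscribe the cluster to the certified NVIDIA GPU Operator via OLM.
# Assumes the nvidia-gpu-operator namespace and its OperatorGroup already exist.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: gpu-operator-certified
  namespace: nvidia-gpu-operator
spec:
  channel: stable            # illustrative; NVIDIA publishes versioned channels (e.g. v24.x)
  name: gpu-operator-certified
  source: certified-operators       # in disconnected clusters, point this at your mirrored catalog
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```

In an air-gapped cluster of the kind the book targets, the `source` field would reference a mirrored CatalogSource backed by the local Quay registry rather than the default `certified-operators` catalog.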