

Item: AMZ-B0D3VPPN3K
VISION COMPUTERS, INC. PNY RTX H100 NVL - 94GB HBM3-350-400W - PNY Bulk Packaging and Accessories
Style: H100 NVL



In stock
1.78 kg
No
New
Amazon
- The H100 NVL graphics card is designed to scale support for large language models, such as GPT3-175B, in mainstream PCIe-based server systems, providing up to 12X the throughput of HGX A100 systems when configured with 8 units.
- Equipped with advanced features, including 94 GB of high-speed HBM3 memory, NVLink connectivity for enhanced inter-GPU communication, and a memory bandwidth of 3,938 GB/s, the H100 NVL is built for high-performance AI inference tasks.
- The card covers a broad performance spectrum across compute types: 68 TFLOPS for FP64, 134 TFLOPS for both FP64 Tensor Core and FP32, scaling up to 7,916 TFLOPS/TOPS for FP8 and INT8 Tensor Core operations, all figures benefiting from sparsity optimizations.
- It enables standard mainstream servers to deliver high-performance capabilities for generative AI inference, simplifying deployment for partners and solution providers with fast time to market and ease of scalability.
- The H100 NVL's power efficiency is managed through a configurable maximum power limit of 350-400 W per card (2x 350-400 W for the paired configuration), supporting sustained computational workloads without excessive power draw; see the sketch after this list for reading the limit in software.
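The configurable power limit mentioned above is exposed through NVIDIA's standard management interfaces. The following is a minimal sketch, not part of the listing, of how one might read the configurable range and total memory on a deployed card, assuming the nvidia-ml-py ("pynvml") bindings and an NVIDIA driver are installed.

```python
# Minimal sketch, assuming the nvidia-ml-py ("pynvml") bindings and an
# NVIDIA driver are available; reads the board's configurable power-limit
# range and total memory for the first GPU in the system.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older bindings return bytes
    name = name.decode()

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # totals reported in bytes
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
cur_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts

print(f"{name}: {mem.total / 1024**3:.0f} GiB memory")
print(f"Power limit: {cur_mw / 1000:.0f} W "
      f"(configurable {min_mw / 1000:.0f}-{max_mw / 1000:.0f} W)")

pynvml.nvmlShutdown()
```

Changing the limit itself, for example between the 350 W and 400 W points cited above, is typically done with `nvidia-smi -pl <watts>` and requires administrator privileges.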
Product not available
This product is not allowed by the destination country's customs under category 4x4
Memory clock: 2,619 MHz | Memory type: HBM3 | Memory size: 94 GB | Memory bus width: 6,016 bits | Peak memory bandwidth: 3,938 GB/s
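As a quick consistency check, not part of the listing, the peak bandwidth figure follows from the listed memory clock and bus width, assuming the HBM3 interface transfers data on both clock edges (an effective data rate of twice the listed memory clock, which is an assumption rather than a stated spec):

```python
# Sanity check of the listed peak memory bandwidth. Assumes a double-data-rate
# interface, i.e. effective data rate = 2 x listed memory clock (an assumption,
# not stated in the listing).
memory_clock_mhz = 2_619   # listed memory clock
bus_width_bits = 6_016     # listed memory bus width

data_rate_gbps = memory_clock_mhz * 2 / 1_000        # ~5.24 Gb/s per pin
peak_bw_gb_s = bus_width_bits / 8 * data_rate_gbps   # bytes per transfer x rate

# Prints ~3,939 GB/s, matching the listed 3,938 GB/s up to rounding.
print(f"Peak memory bandwidth: {peak_bw_gb_s:,.0f} GB/s")
```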