SKU/Item: AMZ-B0G48JRRMF

Hands-On LLM Serving and Optimization: Hosting LLMs at Scale

Product details
Availability: In stock
Packaged weight: 1.54 kg
Returns: No
Condition: New
Sold by: Amazon
Ships from: USA

About this product
  • Large language models (LLMs) are rapidly becoming the backbone of AI-driven applications. Without proper optimization, however, LLMs can be expensive to run, slow to serve, and prone to performance bottlenecks. As the demand for real-time AI applications grows, along comes Hands-On Serving and Optimizing LLM Models, a comprehensive guide to the complexities of deploying and optimizing LLMs at scale.
  • In this hands-on book, authors Chi Wang and Peiheng Hu take a real-world approach backed by practical examples and code, and assemble essential strategies for designing robust infrastructures that are equal to the demands of modern AI applications. Whether you're building high-performance AI systems or looking to enhance your knowledge of LLM optimization, this indispensable book will serve as a pillar of your success.
  • Learn the key principles for designing a model-serving system tailored to popular business scenarios
  • Understand the common challenges of hosting LLMs at scale while minimizing costs
  • Pick up practical techniques for optimizing LLM serving performance
  • Build a model-serving system that meets specific business requirements
  • Improve LLM serving throughput and reduce latency
  • Host LLMs in a cost-effective manner, balancing performance and resource efficiency
AR$333.838
49% OFF
AR$171.203

IMPORT EASILY

By purchasing this product you can deduct VAT with your RUT number


Pay quickly and easily with Mercado Pago or MODO

Arrives in more than 28 business days
with shipping
Delivery guaranteed
This product travels from the USA to your hands in