System-On-Module (SOM) - Google Edge TPU ML Compute Accelerator: Integrate the Edge TPU into Legacy and New Systems Using a Standard M.2-2280-B-M-S3 (B/M Key) Slot
In stock
0.20 kg
Yes
New
Amazon
USA
- Performs high-speed ML inferencing: the on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts per TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner. See more performance benchmarks.
- Works with Debian Linux: integrates with any Debian-based Linux system with a compatible card module slot.
- Supports TensorFlow Lite: no need to build models from the ground up; TensorFlow Lite models can be compiled to run on the Edge TPU.
- Supports AutoML Vision Edge: easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
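As a rough illustration of the TensorFlow Lite workflow mentioned above: once a quantized `.tflite` model has been compiled for the Edge TPU (Coral's `edgetpu_compiler` tool produces a `*_edgetpu.tflite` file), inference on a Debian host can be driven through the `tflite_runtime` interpreter with the Edge TPU delegate. The helper function and its arguments below are a hypothetical sketch, not vendor code, and it assumes the `libedgetpu` runtime and `tflite_runtime` package are installed on the host:

```python
def run_edgetpu_inference(model_path, input_array):
    """Run one inference on a Coral Edge TPU.

    Hypothetical helper for illustration only. Requires the libedgetpu
    runtime installed on the host and a model already compiled with
    edgetpu_compiler (e.g. "model_edgetpu.tflite").
    """
    # tflite_runtime ships with the Coral software stack; import lazily so
    # this sketch can be loaded even on machines without the hardware.
    from tflite_runtime.interpreter import Interpreter, load_delegate

    # The Edge TPU delegate routes supported ops to the accelerator;
    # "libedgetpu.so.1" is the Linux shared-library name.
    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()

    # Copy the input in, run the model, and read the output back out.
    input_detail = interpreter.get_input_details()[0]
    interpreter.set_tensor(input_detail["index"], input_array)
    interpreter.invoke()
    output_detail = interpreter.get_output_details()[0]
    return interpreter.get_tensor(output_detail["index"])
```

A typical call would pass the compiled model path and a NumPy array shaped to the model's input tensor; operations the compiler could not map to the Edge TPU fall back to the CPU automatically.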
IMPORT EASILY
By purchasing this product, you can deduct VAT using your RUT number.
See more details
Connector: M.2-2280-B-M-S3 (B/M Key)
Coprocessor: Google Edge TPU
Dimensions: 22.00 x 80.00 x 2.35 mm
Supports TensorFlow Lite
Works with Debian Linux