Item: AMZ-B0FBWDLV3C

Yahboom

Yahboom AI Embodied Intelligence Robot Dog for Raspberry Pi, 15 Joint Programming Bionic Robot Dog (with RPi CM4)

Color:

With RPi CM4

With RPi CM5

Product details
Availability: In stock
Packaged weight: 1.28 kg
Returns
Condition: New
Product from: Amazon
Ships from: USA

About this product
  • Redefining smart companions: when robot dogs "learn to think". DOGZILLA-Lite is the world's first educational robot dog to integrate multimodal large language models with embodied intelligence. It has a built-in Raspberry Pi module and supports multiple AI vision functions such as face detection and object recognition. The joints use 2.3 kg·cm serial-bus servos, enabling omnidirectional movement, six-dimensional posture control, posture stabilization, and multiple motion gaits. It is not only a walking robot but also an AI partner that can understand images, voices, and environments and make autonomous decisions, from executing simple instructions such as "find the red block" to completing complex tasks such as "recognize the owner and dance", opening a new era of robot interaction. Combining the robotic arm with body movements to complete complex embodied-intelligence applications makes it an ideal platform for exploring AI and robotics. Safety instructions: the robotic arm is lightweight by design and is limited to grabbing the standard EVA blocks/balls; do not use it for heavier objects.
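An instruction like "find the red block" typically reduces to simple color segmentation in the camera image. The sketch below is an illustration only, not Yahboom's actual code: the function names, thresholds, and pixel format are all invented for this example, using only the Python standard library.

```python
import colorsys

def is_red(r: int, g: int, b: int) -> bool:
    """Crude hue test: red hue sits near 0 (or wraps near 1) in HSV,
    with reasonable saturation and brightness. Thresholds are illustrative."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.3

def find_red_block(image):
    """image: rows of (r, g, b) tuples. Returns the bounding box
    (x_min, y_min, x_max, y_max) of red pixels, or None if none found."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, (r, g, b) in enumerate(row):
            if is_red(r, g, b):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return min(xs), min(ys), max(xs), max(ys)
```

In a real pipeline the same idea is usually done with OpenCV (`cv2.inRange` on an HSV frame plus contour extraction), and the resulting box center would steer the dog's heading.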
$1,106.61
49% OFF
$567.49

Product unavailable

This product is not allowed by the destination country's customs under category 4x4

This product travels from the USA to your hands in


  • 【The first AI LLM + embodied intelligent robot dog】DOGZILLA-Lite is the world's first educational robot dog to integrate multimodal large language models with embodied intelligence. Its built-in Raspberry Pi module supports multiple AI vision functions such as face detection and object recognition. It is not only a walking robot but also an AI partner that can understand images, voices, and environments and make autonomous decisions.
  • 【Robot arm expansion & AI vision technology】Supports an add-on 3-DOF robotic arm, enabling autonomous object grasping and handling. A pre-programmed GUI system with built-in AI vision/voice programs enables many functions, such as 3D object recognition; color, face, and emotion recognition; and motion detection, providing endless possibilities for creative projects. Note: the robotic arm is limited to grasping the standard EVA cubes/balls.
  • 【Multiple control methods and real-time images】You can easily control DOGZILLA-Lite through the XGO app for Android and iOS devices and through PC software. DOGZILLA-Lite can also stream real-time images to the app, providing a first-person-view control experience. The Yahboomrobot app can control the robot dog's movement and supports video streaming, but cannot control the robotic arm.
  • 【Gait planning, freely adjustable】DOGZILLA-Lite integrates inverse-kinematics algorithms to precisely control each leg's ground-contact time, lift time, and lift height. You can easily adjust these parameters to achieve different gaits. Yahboom provides a detailed inverse-kinematics analysis and the source code of its inverse-kinematics functions.
  • 【Why choose DOGZILLA-Lite?】It is not just a toy but a ticket to the future. Students use it to understand AI principles, geeks use it to develop autonomous-driving algorithms, and families use it as an interactive technology partner. Yahboom provides AI visual interaction, OpenCV, and AI LLM open-source code and technical support.
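The gait parameters named above (ground-contact time, lift time, lift height) combine with a per-leg inverse-kinematics solve to place each foot. The following is a minimal sketch, not Yahboom's released source: the link lengths, function names, and half-sine swing profile are assumptions for illustration, using a standard planar 2-link IK derived from the law of cosines.

```python
import math

def leg_ik(x: float, y: float, l1: float = 0.06, l2: float = 0.06):
    """Planar 2-link inverse kinematics: foot target (x, y) in the hip
    frame -> (hip_angle, knee_angle) in radians. Link lengths are
    illustrative placeholders, not the real DOGZILLA-Lite dimensions."""
    d2 = x * x + y * y
    d = math.sqrt(d2)
    if d > l1 + l2 or d < abs(l1 - l2):
        raise ValueError("foot target out of reach")
    # Law of cosines gives the knee (interior) angle between the links.
    cos_knee = (l1 * l1 + l2 * l2 - d2) / (2 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle = direction to the target minus the offset caused by the knee bend.
    cos_alpha = (l1 * l1 + d2 - l2 * l2) / (2 * l1 * d)
    alpha = math.acos(max(-1.0, min(1.0, cos_alpha)))
    hip = math.atan2(y, x) - alpha
    return hip, knee

def swing_height(phase: float, stance_time: float, swing_time: float,
                 lift_height: float) -> float:
    """Foot lift (m) at a point in one gait cycle. phase is in
    [0, stance_time + swing_time): zero while the foot is on the ground,
    a half-sine arc peaking at lift_height during the swing."""
    if phase < stance_time:
        return 0.0
    s = (phase - stance_time) / swing_time  # 0..1 progress through the swing
    return lift_height * math.sin(math.pi * s)
```

Stretching the stance relative to the swing yields a slower, statically stable walk, while shortening it approaches a trot; the y-target fed into `leg_ik` each tick would be the standing height minus `swing_height(...)`.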