
Embodied Robot Manipulation Planning Training Platform ALO-LE4
Dual 5-axis arms. An embodied robot manipulation and execution training platform built on an end-to-end ACT (Action Chunking with Transformers) solution. Ready to use out of the box with no deployment required, it serves as a data collection platform and supports research on imitation learning and end-to-end intelligent control.
Applicable Audience/Scenarios
Suitable for courses and projects in embodied intelligence, imitation learning, end-to-end control, robotics, and computer vision
Highlights
- Two-in-one platform: data collection plus intelligent training and validation
- Rapid, integrated deployment with independent reset for each subsystem
- Progressive teaching workflow from environment setup to model training
Product Features
Unified data collection and training
Built on the ACT framework, the platform supports motion capture, model training, and validation. Adjustable lighting accommodates varied scenes, while the desktop setup ensures stable, repeatable experiments.
Fast deployment and recovery
Independent reset buttons on the master arm, slave arm, and operating system allow quick restarts without rebuilding the scene or lighting setup, greatly simplifying debugging.
Progressive teaching design
Covering software environment configuration, hardware setup, and training workflows, the curriculum serves both classroom teaching and research requirements, guiding learners step-by-step.
Lab Scenarios
Configuration
Sensor Configuration
Core sensing modules supply the data streams needed for imitation learning and vision-enabled manipulation; a minimal dual-camera capture sketch follows the list below.
- Dual HD cameras (top + side) for colour/position detection and dataset capture
- Adjustable environmental lighting to emulate diverse illumination
- Joint-angle sensing for the master/slave arms
- Expansion interfaces for additional vision or force modules
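As a hedged illustration of the dataset-capture step, the OpenCV sketch below grabs one paired frame from the top and side cameras. The device indices, resolution, and output filenames are assumptions, not the platform's fixed configuration.

```python
# Minimal sketch: grab one paired frame from the top and side cameras with OpenCV.
# Camera indices (0, 2) and the 640x480 resolution are assumptions; adjust them to
# match how the ALO-LE4 cameras enumerate on your machine.
import cv2

TOP_CAM, SIDE_CAM = 0, 2  # assumed device indices

top = cv2.VideoCapture(TOP_CAM)
side = cv2.VideoCapture(SIDE_CAM)
for cap in (top, side):
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok_top, frame_top = top.read()
ok_side, frame_side = side.read()
if ok_top and ok_side:
    # Save one paired sample; a real collector would loop and timestamp frames.
    cv2.imwrite("sample_top.png", frame_top)
    cv2.imwrite("sample_side.png", frame_side)

top.release()
side.release()
```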
Experiments
The experiment tracks progress from end-to-end deployment to vision and manipulator control, building a complete understanding of embodied manipulation.
End-to-end deployment & training
Follow the ACT-based workflow from environment setup through data collection, training, and deployment; a hedged episode-recording sketch follows the list.
- Environment setup: Configure Conda, FFmpeg, and Python dependencies (2 hours)
- Install Lerobot framework: Set up the Lerobot environment (2 hours)
- Configure servo motors: Tune manipulator servo parameters (1 hour)
- Camera configuration: Calibrate and connect imaging devices (1 hour)
- Master/slave calibration: Verify master-arm capture and slave-arm following (2 hours)
- Teleoperation data capture: Record video, joint angles, and system settings (2 hours)
- Model training: Train the ACT policy on the collected demonstrations; an NVIDIA RTX 4060-class GPU or better is recommended (2 hours)
- Model deployment: Deploy the trained policy and validate autonomous task execution (4 hours)
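To make the data-capture step concrete, here is a minimal sketch of what one teleoperation episode can look like for ACT-style training: per-step camera frames, follower joint angles as observations, and leader joint angles as actions. The grab_frames and read_joint_angles helpers are hypothetical stand-ins for the platform's own drivers, and the real Lerobot dataset format differs in detail.

```python
# Hedged sketch of one teleoperation episode for ACT-style imitation learning.
import numpy as np

def grab_frames():
    """Stand-in for the platform's camera driver; returns dummy top/side RGB frames."""
    return np.zeros((480, 640, 3), np.uint8), np.zeros((480, 640, 3), np.uint8)

def read_joint_angles(arm: str) -> np.ndarray:
    """Stand-in for the servo bus read; returns a dummy 5-joint angle vector."""
    return np.zeros(5, dtype=np.float32)

def record_episode(num_steps: int = 200) -> dict:
    episode = {"top_rgb": [], "side_rgb": [], "qpos": [], "action": []}
    for _ in range(num_steps):
        frame_top, frame_side = grab_frames()
        episode["top_rgb"].append(frame_top)
        episode["side_rgb"].append(frame_side)
        episode["qpos"].append(read_joint_angles("follower"))   # observation
        episode["action"].append(read_joint_angles("leader"))   # supervision signal
    return {k: np.asarray(v) for k, v in episode.items()}

# Persist one episode; filename and format are illustrative only.
np.savez_compressed("episode_000.npz", **record_episode())
```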
Extended module · AI vision
Train learners to integrate visual perception with robotic manipulation; a detection-to-pick sketch follows the list.
- YOLO deployment: Deploy YOLO detection models (2 hours)
- Dataset annotation: Label training data for vision tasks (2 hours)
- Model training & deployment: Train and deploy vision models (2 hours)
- Workpiece inspection: Implement workpiece recognition and positioning (2 hours)
- Visual pick-and-place: Map vision outputs to manipulator actions (4 hours)
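As a sketch of how the workpiece-inspection and visual pick-and-place steps connect, the snippet below runs a YOLO detection on a top-camera frame and converts the box centre into a workspace target. The weights file, the dummy frame, and the pixel-to-millimetre scale and origin are assumptions; on the real platform this mapping comes from camera calibration.

```python
# Hedged sketch: detect workpieces with YOLO and map pixel centres to pick targets.
from ultralytics import YOLO
import numpy as np

PX_TO_MM = 0.55            # assumed pixel-to-millimetre scale from calibration
ORIGIN_PX = (320, 240)     # assumed pixel location of the workspace origin

model = YOLO("yolov8n.pt")                       # or your workpiece-trained weights
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a top-camera frame
result = model(frame)[0]

for box in result.boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Map the detection centre from pixels to manipulator workspace coordinates.
    target_x = (cx - ORIGIN_PX[0]) * PX_TO_MM
    target_y = (cy - ORIGIN_PX[1]) * PX_TO_MM
    print(f"class={int(box.cls)} pick target: ({target_x:.1f} mm, {target_y:.1f} mm)")
```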
Extended module · Robotic manipulator control
Focus on kinematics and interpolation control for manipulators; a linear-interpolation sketch follows the list.
- Manipulator kinematics control: Develop forward/inverse kinematics control (4 hours)
- Linear interpolation control: Execute linear interpolation trajectories (2 hours)
- Circular interpolation control: Execute circular interpolation trajectories (2 hours)
- Stacking and handling tasks: Conduct stacking and handling exercises (4 hours)
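As a sketch of the linear-interpolation exercise, the snippet below generates evenly spaced Cartesian waypoints between a start and goal position and hands each one to inverse kinematics. The solve_ik stub and the millimetre coordinates are hypothetical placeholders for the platform's own 5-axis IK routine and workspace.

```python
# Minimal sketch of linear interpolation in Cartesian space.
import numpy as np

def solve_ik(xyz: np.ndarray) -> np.ndarray:
    """Stand-in IK: returns dummy joint angles for a Cartesian target."""
    return np.zeros(5, dtype=np.float32)

def linear_trajectory(start: np.ndarray, goal: np.ndarray, steps: int = 50):
    """Yield joint-space waypoints for a straight-line Cartesian move."""
    for s in np.linspace(0.0, 1.0, steps):
        waypoint = (1.0 - s) * start + s * goal   # straight line in xyz
        yield solve_ik(waypoint)

start = np.array([150.0, 0.0, 120.0])   # assumed workspace coordinates in mm
goal = np.array([220.0, 80.0, 40.0])
for q in linear_trajectory(start, goal):
    pass  # send q to the servo controller at a fixed rate
```

Circular interpolation follows the same pattern, with waypoints sampled along an arc instead of a straight segment.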
Knowledge Base
Get more technical documentation, tutorials, and FAQs about this product.

