Embodied Robot Manipulation Planning Training Platform ALO-LE4

Dual 5-axis arms. An embodied robot manipulation training platform built on an end-to-end ACT (Action Chunking with Transformers) solution. It is ready to use out of the box with no deployment required, serves as a data collection platform, and supports research on imitation learning and end-to-end intelligent control.

Applicable Audience/Scenarios

Suitable for courses and projects in embodied intelligence, imitation learning, end-to-end control, robotics, and computer vision

Highlights

  • Two-in-one platform: data collection plus intelligent training and validation
  • Rapid, integrated deployment with independent reset for each subsystem
  • Progressive teaching workflow from environment setup to model training

Product Features

Unified data collection and training

Built on the ACT framework, the platform supports motion capture, model training, and validation. Adjustable lighting accommodates varied scenes, while the desktop setup ensures stable, repeatable experiments.

Fast deployment and recovery

Independent reset buttons on the master arm, slave arm, and OS allow quick restarts without building extra scenes or lighting setups, greatly simplifying debugging.

Progressive teaching design

Covering software environment configuration, hardware setup, and training workflows, the curriculum serves both classroom teaching and research requirements, guiding learners step-by-step.

Lab Scenarios

Configuration

Sensor Configuration

Core sensing modules supply the data streams needed for imitation learning and vision-enabled manipulation.

  • Dual HD cameras (top + side) for colour/position detection and dataset capture (see the capture sketch after this list)
  • Adjustable environmental lighting to emulate diverse illumination
  • Joint-angle sensing for the master/slave arms
  • Expansion interfaces for additional vision or force modules
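
As an illustration of how the dual cameras might be read on the host PC, the sketch below grabs one frame from each of two USB cameras with OpenCV. The device indices and resolution are assumptions; the platform's actual camera IDs and drivers may differ.

```python
import cv2

# Hypothetical device indices for the top and side cameras; the actual
# indices depend on how the host enumerates the platform's USB cameras.
TOP_CAM_INDEX, SIDE_CAM_INDEX = 0, 2

def open_camera(index, width=1280, height=720):
    """Open a camera and request an assumed HD capture resolution."""
    cap = cv2.VideoCapture(index)
    if not cap.isOpened():
        raise RuntimeError(f"Camera {index} could not be opened")
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap

top_cam = open_camera(TOP_CAM_INDEX)
side_cam = open_camera(SIDE_CAM_INDEX)

ok_top, top_frame = top_cam.read()     # top view: colour/position detection
ok_side, side_frame = side_cam.read()  # side view: grasp-height cues
if ok_top and ok_side:
    cv2.imwrite("top_view.png", top_frame)
    cv2.imwrite("side_view.png", side_frame)

top_cam.release()
side_cam.release()
```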

Experiments

The experiment tracks progress from end-to-end deployment to vision and manipulator control, building up a complete understanding of embodied manipulation.

End-to-end deployment & training

Follow ACT-based workflows from environment setup through data collection, training, and deployment.

  • Environment setup: Configure Conda, FFmpeg, and Python dependencies (2 hours)
  • Install the Lerobot framework: Set up the Lerobot environment (2 hours)
  • Configure servo motors: Tune manipulator servo parameters (1 hour)
  • Camera configuration: Calibrate and connect imaging devices (1 hour)
  • Master/slave calibration: Verify master-arm capture and slave-arm following (2 hours)
  • Teleoperation data capture: Record video, joint angles, and system settings (2 hours); see the data-capture sketch after this list
  • Model training: NVIDIA 4060-class GPU or better recommended (2 hours)
  • Model deployment: Deploy and validate autonomous task execution (4 hours)
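
To make the teleoperation data-capture step concrete, here is a minimal sketch of how one episode could be stored as per-timestep camera frames plus master/slave joint angles. The field names, file layout, and the `read_cameras`/`read_joint_angles` callbacks are illustrative assumptions, not the Lerobot dataset format.

```python
import json
import time
from pathlib import Path

import cv2
import numpy as np

def record_episode(episode_dir, read_cameras, read_joint_angles,
                   fps=30, duration_s=10.0):
    """Record one teleoperation episode: images plus joint angles per step.

    `read_cameras` and `read_joint_angles` stand in for the platform's
    own camera and servo read-back interfaces.
    """
    episode_dir = Path(episode_dir)
    episode_dir.mkdir(parents=True, exist_ok=True)
    steps = []
    for step in range(int(fps * duration_s)):
        top_img, side_img = read_cameras()        # two HD camera frames
        master_q, slave_q = read_joint_angles()   # joint angles in degrees
        cv2.imwrite(str(episode_dir / f"top_{step:05d}.png"), top_img)
        cv2.imwrite(str(episode_dir / f"side_{step:05d}.png"), side_img)
        steps.append({
            "t": step / fps,
            "master_joints": np.asarray(master_q, dtype=float).tolist(),
            "slave_joints": np.asarray(slave_q, dtype=float).tolist(),
        })
        time.sleep(1.0 / fps)
    # Joint trajectories and timing go into a sidecar JSON file.
    (episode_dir / "episode.json").write_text(json.dumps(steps, indent=2))
```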

Extended module · AI vision

Train learners to integrate visual perception with robotic manipulation.

  • YOLO deployment: Deploy YOLO detection models (2 hours); see the detection sketch after this list
  • Dataset annotation: Label training data for vision tasks (2 hours)
  • Model training & deployment: Train and deploy vision models (2 hours)
  • Workpiece inspection: Implement workpiece recognition and positioning (2 hours)
  • Visual pick-and-place: Map vision outputs to manipulator actions (4 hours)
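
A minimal sketch of the YOLO deployment step, assuming the Ultralytics YOLO package and a generic pretrained weights file; the model actually shipped with the platform and the camera-to-robot mapping will differ.

```python
from ultralytics import YOLO  # assumed runtime; install with `pip install ultralytics`

# Load a detection model (weights file name is an assumption).
model = YOLO("yolov8n.pt")

# Run detection on a frame captured from the top camera.
results = model("top_view.png")

for box in results[0].boxes:
    label = results[0].names[int(box.cls[0])]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # pixel centre of the workpiece
    # Mapping pixel coordinates to a manipulator pick pose requires the
    # camera calibration from the earlier experiments; omitted here.
    print(f"{label}: centre=({cx:.1f}, {cy:.1f}) px, "
          f"confidence={float(box.conf[0]):.2f}")
```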

Extended module · Robotic manipulator control

Focus on kinematics and interpolation control for manipulators.

  • Manipulator kinematics control: Develop forward/inverse kinematics control (4 hours)
  • Linear interpolation control: Execute linear interpolation trajectories (2 hours); see the interpolation sketch after this list
  • Circular interpolation control: Execute circular interpolation trajectories (2 hours)
  • Stacking and handling tasks: Conduct stacking and handling exercises (4 hours)
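
As a sketch of the interpolation-control idea, the snippet below generates a linearly interpolated joint-space trajectory between two 5-axis poses. The joint values and the `send_to_arm` stub are hypothetical placeholders for the platform's servo command interface.

```python
import numpy as np

def linear_joint_trajectory(q_start, q_goal, n_steps=50):
    """Linearly interpolate between two joint configurations (degrees)."""
    q_start = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    # One entry per step; each entry is a full 5-joint command.
    return [q_start + (q_goal - q_start) * s
            for s in np.linspace(0.0, 1.0, n_steps)]

def send_to_arm(q):
    """Placeholder for the platform's servo command interface."""
    print("joint command:", np.round(q, 2))

# Hypothetical start and goal poses for a 5-axis arm, in degrees.
q_home = [0.0, -30.0, 60.0, 0.0, 90.0]
q_pick = [45.0, -10.0, 40.0, 15.0, 90.0]

for q in linear_joint_trajectory(q_home, q_pick, n_steps=20):
    send_to_arm(q)
```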

Knowledge Base

Get more technical documentation, tutorials, and FAQs about this product.
