<aside> <img src="/icons/burst_gray.svg" alt="/icons/burst_gray.svg" width="40px" />

Domains: Robotics, Computer Vision, ROS, ESP-IDF

</aside>

https://github.com/vedantmalkar/Ballerina-Cappucina

Overview

Ballerina-Cappucina is an omnidirectional robot designed to autonomously glide across the floor and collect scattered colorful balls. The robot leverages a combination of ESP32, ROS, and OpenCV to detect specific colored objects, navigate to them, and secure them using a custom-designed collection mechanism.

final_movement_ballerina.mp4

image.png

Key Concepts

Omnidirectional Movement: A drive system that lets the robot translate in any direction without changing its orientation.
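As a rough sketch of how an omnidirectional base turns a desired body velocity into individual wheel speeds, here is the standard three-wheel omni inverse kinematics. The wheel angles and center distance below are illustrative assumptions; the actual robot's wheel count and mounting geometry may differ.

```python
import math

# Hypothetical 3-wheel omni base: wheels mounted at 90, 210, and 330 degrees
# from the robot's +x axis, each at distance R from the center (assumed values).
WHEEL_ANGLES = [math.radians(a) for a in (90, 210, 330)]
R = 0.10  # wheel-to-center distance in meters (illustrative)

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: body velocity (vx, vy, omega) -> wheel rim speeds.

    Each omni wheel contributes along its tangential drive direction:
        v_i = -sin(theta_i)*vx + cos(theta_i)*vy + R*omega
    """
    return [-math.sin(t) * vx + math.cos(t) * vy + R * omega
            for t in WHEEL_ANGLES]

# Pure rotation spins all wheels at the same speed...
print(wheel_speeds(0.0, 0.0, 1.0))
# ...while for pure translation the three wheel speeds sum to zero.
print(abs(sum(wheel_speeds(0.3, 0.2, 0.0))) < 1e-9)
```

Because the three drive directions span the plane, any (vx, vy, omega) combination maps to a unique set of wheel speeds, which is what allows motion in any direction without turning.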

Computer Vision with OpenCV: The OpenCV library is used to detect objects by color and shape, which is how the robot identifies the target balls.
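The core of color-based ball detection is thresholding an HSV frame against a color range and taking the centroid of the matching pixels. A real pipeline would use `cv2.cvtColor`, `cv2.inRange`, and `cv2.findContours`; this NumPy-only sketch (with an assumed HSV range for an orange ball) shows the same masking idea on a plain array.

```python
import numpy as np

# Hypothetical HSV range for an orange ball (would need tuning for real lighting).
LOWER = np.array([5, 120, 120])
UPPER = np.array([25, 255, 255])

def ball_centroid(hsv_frame):
    """Threshold an HSV frame and return the (row, col) centroid of the
    matching pixels, or None if nothing matched.

    Equivalent in spirit to cv2.inRange followed by a moment/centroid step.
    """
    mask = np.all((hsv_frame >= LOWER) & (hsv_frame <= UPPER), axis=-1)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 40x40 frame with a "ball" patch covering rows/cols 10..19.
frame = np.zeros((40, 40, 3), dtype=np.uint8)
frame[10:20, 10:20] = (15, 200, 200)  # inside the assumed orange range
print(ball_centroid(frame))  # centroid of the patch: (14.5, 14.5)
```

In practice the centroid's horizontal offset from the image center becomes the steering error for navigation, and the mask area gives a rough distance cue.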

Autonomous Navigation: The robot autonomously scans its environment, detects objects, and navigates towards them without human intervention.
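The scan-detect-approach behavior can be sketched as a small decision function that maps the latest detection to a velocity command. All gains and thresholds below are illustrative assumptions, not the robot's tuned values.

```python
# Hypothetical scan-and-approach logic; thresholds and speeds are assumed.
IMAGE_WIDTH = 640
K_TURN = 0.004      # rad/s per pixel of horizontal error (assumed gain)
SCAN_SPEED = 0.5    # rad/s spin rate while searching (assumed)
CLOSE_AREA = 9000   # mask area in px^2 at which the ball is reachable (assumed)

def navigate(detection):
    """Map a detection (cx_pixels, area_px2) or None to (forward, turn).

    No detection  -> spin in place to scan.
    Ball far away -> drive forward while steering toward it.
    Ball close    -> stop, ready to trap.
    """
    if detection is None:
        return 0.0, SCAN_SPEED          # scanning
    cx, area = detection
    if area >= CLOSE_AREA:
        return 0.0, 0.0                 # close enough to trap
    error = cx - IMAGE_WIDTH / 2        # pixels right of center is positive
    return 0.2, -K_TURN * error         # approach, steering to center the ball

print(navigate(None))           # spin to scan
print(navigate((420, 2000)))    # ball right of center: drive and turn toward it
```

Running this once per camera frame gives a simple proportional controller: the further the ball sits from the image center, the harder the robot turns toward it.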

ROS (Robot Operating System): A flexible framework for building robot applications, providing tools, libraries, and conventions that simplify the task of creating complex robot behaviors.

ESP32 Integration: A Wi-Fi and Bluetooth capable microcontroller, programmed with the ESP-IDF framework, used to control the robot's motors and other hardware components.
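On the ESP32, motor power typically comes from the LEDC PWM peripheral, so a speed command has to be converted into a duty count. The arithmetic below is a minimal sketch of that step, assuming 10-bit PWM resolution; the real ESP-IDF firmware would do this in C and pass the resulting duty to the `ledc` driver.

```python
# Duty-cycle math the ESP32 motor firmware might run (10-bit PWM assumed).
LEDC_DUTY_MAX = 1023  # max duty count at 10-bit resolution

def speed_to_duty(speed):
    """Map a signed speed command in [-1, 1] to (duty_count, forward).

    Out-of-range commands are clamped; the sign selects the motor
    direction, handled separately from the PWM duty.
    """
    speed = max(-1.0, min(1.0, speed))
    forward = speed >= 0.0
    return round(abs(speed) * LEDC_DUTY_MAX), forward

print(speed_to_duty(1.0))    # full duty, forward
print(speed_to_duty(-0.5))   # half duty, reverse
```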

CAD Hardware Design: Onshape is used to design and model the robot's physical components, including detailed 3D models of the omnidirectional base and the ball-trapping mechanism.

Approach and Workflow

  1. Design the robot and trapping mechanism in CAD: Plan the overall structure, including the omnidirectional base and ball-trapping mechanism. Onshape is used to design the hardware components, ensuring proper fit, functionality, and ease of assembly; the CAD models help visualize the design and make adjustments before physical construction.
  2. Develop color detection code in OpenCV: Implement a color detection algorithm with OpenCV in Python, then fine-tune it to detect the specific ball colors reliably under varying lighting conditions.
  3. Perform virtual testing in Gazebo and RViz: Simulate the robot's movement and vision systems in Gazebo and RViz, verifying navigation, detection, and overall system performance before building hardware.
  4. Build the physical robot and test communication: Construct the physical robot, assembling the omnidirectional base, sensors, and ball-trapping mechanism, then test communication between the controllers (Jetson and ESP32) to ensure smooth operation.
  5. Conduct error analysis and refine the system: Analyze errors from physical testing, such as vision inaccuracies, movement delays, or mechanical faults in wheel power and movement, and iterate to improve performance and reliability.
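The Jetson-to-ESP32 communication test in step 4 can be exercised with a small loopback sketch. The UDP transport and the `"<3f"` packet layout (vx, vy, omega as little-endian floats) are assumptions for illustration; the robot's actual link and framing may differ.

```python
import socket
import struct

# Assumed packet format for velocity commands: vx, vy, omega as
# little-endian 32-bit floats.
PACKET_FMT = "<3f"

def encode_command(vx, vy, omega):
    return struct.pack(PACKET_FMT, vx, vy, omega)

def decode_command(data):
    return struct.unpack(PACKET_FMT, data)

# Loopback demo standing in for the Wi-Fi link between the two boards:
# the "Jetson" side sends, the "ESP32" side receives and unpacks.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))             # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encode_command(0.2, 0.0, 0.5), rx.getsockname())
data, _ = rx.recvfrom(64)
print(decode_command(data))           # velocities as received
tx.close()
rx.close()
```

A fixed binary layout like this keeps the ESP32-side parser trivial and makes dropped or truncated packets easy to detect by length.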