<aside> <img src="/icons/burst_gray.svg" alt="/icons/burst_gray.svg" width="40px" />

Domains: Embedded Systems, Human-Computer Interaction, Machine Learning, Wearable Electronics, ESP-IDF, PCB Design

</aside>

https://github.com/Shri-2112/Flexsonic

Overview

FlexSonic is a compact, wearable assistive device that translates hand gestures into real-time spoken audio, enabling smoother communication for individuals with speech impairments. Built with an ESP32 microcontroller, flex sensors, an MPU6050 IMU, and a DFPlayer Mini, it integrates embedded systems and machine learning for accurate, adaptable gesture recognition. Developed through breadboard prototyping, perfboard testing, and custom PCB design, FlexSonic aims to offer a low-cost, user-friendly solution that promotes independence, reliability, and inclusivity.

https://drive.google.com/file/d/1t1Zu3dBZ3pLLL1ICxsWJ5ZRn1jNXS9co/view?usp=drive_link

Key Concepts

Gesture Recognition: Hand movements are detected using five flex sensors (finger bending) and an MPU6050 IMU (wrist orientation). These signals form the input basis for classifying user gestures.

Embedded Systems: The ESP32 microcontroller acts as the processing hub, integrating sensor inputs, executing recognition logic, and triggering audio output through the DFPlayer Mini.

Machine Learning Integration: Rule-based recognition was extended with machine learning models trained on gesture datasets, improving robustness to user variation and environmental noise.

Modular PCB Design: Two compact PCBs were developed: one for sensing and computation (ESP32 + IMU + flex sensors), and another for audio playback and power management (DFPlayer Mini + battery). This modularity improves stability, reduces noise interference, and simplifies upgrades.

Real-Time Audio Output: Pre-recorded audio messages are stored on the DFPlayer Mini and played through a speaker, ensuring fast, clear, and consistent communication.

Approach and Workflow

1. Prototyping Phases: Began with a breadboard setup to validate core sensing and audio playback, moved to a perfboard prototype for more stable testing, and finalized with a custom PCB designed for compactness, wearability, and reliable everyday use.
2. Software Implementation: Developed in Embedded C to handle sensor initialization, data acquisition, and gesture mapping, using I²C for IMU communication and UART for DFPlayer Mini control, with a modular structure for easier debugging and future expansion.
3. Machine Learning Integration: Collected gesture datasets from the flex sensors and IMU, then trained lightweight models deployable on the ESP32, improving recognition accuracy and making the system adaptable to different users and conditions.
4. Testing and Results: Demonstrated low latency, accurate gesture recognition, and clear audio output, with ML integration significantly enhancing performance and yielding a practical, comfortable, and user-friendly wearable device.

Overall, the project delivers a wearable embedded system that integrates flex sensors, an IMU, and an ESP32 to translate hand gestures into real-time audible speech.