Multichannel EMG-based gesture recognition utilizing advanced machine learning techniques: A random forest classifier for high-precision signal classification

Dheeraj Tallapragada * and Vedant Sagare

Dublin High School, Dublin, CA, U.S.A.
 
Research Article
World Journal of Advanced Research and Reviews, 2024, 24(02), 323–332
Article DOI: 10.30574/wjarr.2024.24.2.3332
 
Publication history: 
Received 18 September 2024; revised on 31 October 2024; accepted on 02 November 2024
 
Abstract: 
This research examines how advanced machine learning algorithms can be used to classify multichannel electromyographic (EMG) signals with high accuracy for hand-gesture recognition. The goal is to create a robust and scalable system for gesture-based virtual control using EMG signals, with potential applications in assistive technologies, rehabilitation, and human-computer interaction. Data were gathered from thirty-six subjects using a MYO Thalmic bracelet containing eight EMG sensors, and a Random Forest classifier was trained to identify seven distinct types of hand gestures (rest, fist clench, wrist flexion/extension, and radial/ulnar deviations).
The machine learning pipeline included extensive preprocessing (EMG signal normalization and feature extraction: root mean square, waveform length, and zero-crossing rate) and several hyperparameter tuning procedures to improve model performance. The Random Forest model (100 decision trees) achieved an overall classification accuracy of 98.68%, with per-class accuracies varying by gesture (e.g., 95.2% for wrist flexion and 91.8% for ulnar deviation) when evaluated using cross-validation (average F1-score = 0.92, precision = 0.94, recall = 0.91).
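The pipeline described above can be sketched as follows. This is a minimal illustration using synthetic data in place of the MYO recordings; the windowing parameters and per-class amplitudes are hypothetical, chosen only so the toy problem is learnable, and the classifier settings mirror the 100-tree Random Forest named in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(window):
    """Per-channel features from an EMG window of shape (samples, channels)."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))            # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    # zero-crossing rate: fraction of consecutive samples that change sign
    zcr = np.mean(np.abs(np.diff(np.signbit(window).astype(int), axis=0)), axis=0)
    return np.concatenate([rms, wl, zcr])

# Synthetic stand-in for 8-channel EMG windows (hypothetical data).
rng = np.random.default_rng(0)
n_windows, n_samples, n_channels, n_gestures = 210, 200, 8, 7
y = rng.integers(0, n_gestures, n_windows)
# Scale each window's amplitude by its class so the toy problem is separable.
X_raw = rng.standard_normal((n_windows, n_samples, n_channels)) * (1 + y)[:, None, None]
X = np.array([extract_features(w) for w in X_raw])  # shape: (210, 24)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

Each window yields 24 features (3 per channel × 8 channels); in the real pipeline these would be computed over sliding windows of the normalized bracelet signals before tuning and cross-validation.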
Overall, the study provides strong evidence for the effectiveness of ensemble learning methods in analyzing complex, multidimensional EMG signals. The high classification accuracy, in particular, demonstrates that the system could support real-time recognition of hand gestures in a virtual environment. Ultimately, this initial work sets the stage for future exploration of a model that may be integrated with actuation systems to control prosthetic limbs, virtual actors/avatars, and robotic devices. By demonstrating a scalable and efficient method of gesture recognition using EMG signals, these early findings open pathways to design innovative, assistive solutions for digital systems that increase accessibility and interaction for users who are motor impaired or have a limited range of motion.
 
Keywords: 
Human-computer interaction (HCI); Electromyography (EMG); Motion; Root Mean Square (RMS); Machine Learning
 