By Muhammad Hazimi Bin Yusri

When Machine Learning Isn't the Answer: My EMG Control System Reality Check

I spent weeks building a Random Forest classifier for EMG gesture control, achieving 84% accuracy. Then I discovered that sometimes the 'simpler' solution is actually better. Here's why I ditched ML for threshold-based classification.

Tags: Machine Learning, EMG, Real-time Systems, Performance, Medical Tech

Okay, so here's something that completely changed how I think about choosing algorithms for real-time systems. I was working on this EMG-controlled rehabilitation system (basically reading muscle signals to control a robotic hand and wheelchair), and like any good engineering student, I immediately thought: "This needs machine learning!"

Spoiler alert: it really didn't.

The ML Hype Train

I teamed up with my classmate Danisha to build what we thought would be this amazing Random Forest classifier. The idea was simple - train it on EMG patterns and it would recognize different hand gestures. We were getting 84% accuracy in our initial tests and feeling pretty good about ourselves.

But then reality hit us with some uncomfortable truths.

The Performance Numbers Don't Lie

Here's what our "successful" ML system actually looked like in practice:

  • Response time: 3-10 seconds (ouch)
  • Accuracy: 84% (sounds good on paper)
  • Setup time: Hours of calibration per user
  • Generalization: Pretty terrible across different users
  • Real-time suitability: Absolutely not

For a rehabilitation system where someone is trying to control a robotic hand or wheelchair, 3-10 second delays are completely unusable. Imagine trying to grab a cup of water and having to wait 5 seconds between thinking "close hand" and the robot actually responding. It's not just frustrating - it's potentially dangerous.

The Threshold Revelation

After weeks of trying to optimize the Random Forest (different features, hyperparameter tuning, you name it), I had a bit of a "what if we just..." moment. What if we ignored all the ML complexity and just used good old-fashioned threshold detection?

Here's the approach I ended up implementing:

def process_emg_with_imu(emg_data, imu_data, baseline_amplitude,
                         threshold_multiplier=3.0, in_cooldown=False):
    # Dynamic amplitude thresholding: scale the calibrated resting baseline
    threshold = baseline_amplitude * threshold_multiplier
    signal_amplitude = max(abs(sample) for sample in emg_data)

    # Gesture detection with state management (the cooldown debounces repeats)
    if signal_amplitude > threshold and not in_cooldown:
        gesture = classify_gesture(emg_data)
        orientation = get_palm_orientation(imu_data)

        # Context-aware command generation
        command = interpret_gesture_with_context(gesture, orientation)
        return command

    return None  # below threshold, or still cooling down: no command
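The baseline amplitude itself came from a short resting recording at the start of a session. Here's a minimal sketch of what that calibration could look like; the helper name, the mean-absolute-amplitude estimate, and the multiplier value are my assumptions, not the project's exact method:

```python
import numpy as np

def calibrate_baseline(rest_emg):
    """Estimate resting EMG amplitude from a few seconds of relaxed muscle.

    Hypothetical helper: DC-offset removal followed by a mean of the
    rectified signal is one common way to get a resting amplitude.
    """
    rectified = np.abs(rest_emg - np.mean(rest_emg))  # remove offset, rectify
    return float(np.mean(rectified))

# Simulated resting signal: low-amplitude noise around a small DC offset
rest = 0.1 + np.random.default_rng(0).normal(0.0, 0.05, 2000)
baseline_amplitude = calibrate_baseline(rest)
threshold = baseline_amplitude * 3.0  # threshold_multiplier = 3.0 (assumed)
```

Because the threshold is derived per user and per session, it adapts to electrode placement and skin impedance without any retraining.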

The key insight was adding IMU (accelerometer) data to detect palm orientation. This meant the same EMG gesture could do different things based on how your hand was positioned:

  • Fist gesture + palm down = wheelchair forward
  • Fist gesture + palm up = wheelchair backward

Simple, but brilliant.
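That gesture-plus-orientation logic boils down to a small lookup table. The sketch below shows the idea; the gesture names, orientation labels, and the accelerometer sign convention are illustrative assumptions, not the project's actual values:

```python
# Hypothetical command table: the same gesture maps to different commands
# depending on palm orientation (labels are illustrative).
COMMAND_MAP = {
    ("fist", "palm_down"): "wheelchair_forward",
    ("fist", "palm_up"): "wheelchair_backward",
}

def get_palm_orientation(accel_z):
    # With the IMU on the back of the hand and the hand roughly level,
    # the sign of gravity on the z-axis distinguishes palm up from palm
    # down. The sign convention here is an assumption about mounting.
    return "palm_down" if accel_z < 0 else "palm_up"

def interpret_gesture_with_context(gesture, orientation):
    # Unknown combinations fall through to a safe no-op
    return COMMAND_MAP.get((gesture, orientation), "no_op")
```

So `interpret_gesture_with_context("fist", get_palm_orientation(-9.8))` yields `"wheelchair_forward"`, and flipping the palm flips the command without any extra gesture.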

The Results Were Night and Day

Threshold-based system performance:

  • Response time: Under 500ms consistently
  • Setup time: Under 30 minutes (down from hours!)
  • Accuracy: >95% with way fewer false positives
  • Generalization: Much better across users
  • Real-time capability: Actually usable

The difference was so dramatic that I started questioning everything I thought I knew about when to use ML.

What I Learned About Signal Processing

The threshold approach worked better because EMG signals have some really nice properties when you're looking for deliberate gestures:

  1. Clear signal amplitude differences - When someone intentionally flexes, it's obvious in the EMG data
  2. Consistent patterns - The same person making the same gesture gives pretty similar signals
  3. Good signal-to-noise ratio - With proper filtering (Butterworth bandpass 10-150Hz, notch filter for 50Hz power line interference)

The filtering pipeline was actually more important than the classification algorithm:

Raw EMG → Butterworth Bandpass (10-150Hz) → Notch Filter (50Hz) → 
Dynamic Thresholding → Gesture Classification + IMU Context

When ML Actually Makes Sense

Don't get me wrong - I'm not anti-ML. But this project taught me that ML should solve a problem you actually have, not just be the default approach because it sounds cooler.

ML makes sense when:

  • You have complex patterns that are hard to detect with simple rules
  • You have tons of training data
  • Latency isn't critical
  • You need to generalize across very different conditions

Threshold/rule-based makes sense when:

  • You need real-time performance
  • The signals have clear, consistent patterns
  • You can define good features manually
  • Setup time matters

The IMU Game Changer

The real breakthrough was realizing that adding IMU data gave us way more information than trying to extract more features from EMG alone. Palm orientation is such a natural way to add context to gestures.

It also solved a huge UX problem - instead of needing separate gestures for "forward" and "backward," users could use the same gesture with different hand orientations. Much more intuitive for rehabilitation patients.

Practical Lessons for Real-Time Systems

  1. Latency requirements should drive algorithm choice - If you need sub-second response times, that eliminates a lot of ML approaches right away.

  2. Simple can be better - Don't add complexity unless it's solving a real problem.

  3. Domain knowledge beats fancy algorithms - Understanding EMG signal characteristics was more valuable than any classifier.

  4. User experience trumps accuracy metrics - 84% accuracy with 10-second delays is worse than 95% accuracy with 0.5-second delays.

  5. Multi-modal sensing is powerful - Adding the IMU gave us way more information than trying to squeeze more out of EMG alone.

Current System Performance

The final threshold-based system:

  • Controls both InMoov robotic hand (serial) and wheelchair robot (WiFi)
  • Recognizes 5 distinct gestures with context
  • <1 second latency for hand control
  • <5 second latency for wheelchair commands
  • Works reliably over 30+ minute sessions
  • 30-minute setup time for new users

The Takeaway

Machine learning is a powerful tool, but it's not magic. Sometimes the "boring" engineering solution is exactly what you need. The next time you're tempted to throw ML at a problem, ask yourself:

  • What are my actual performance requirements?
  • Do I have a problem that simple rules can't solve?
  • Is the complexity worth it?

In this case, good signal processing + domain knowledge + simple thresholds beat a fancy ML classifier hands down. And that's a lesson I'll carry into every future project.


Working on EMG systems or real-time signal processing? I'd love to chat about the challenges - reach out through my contact page!
