LAB10: EMG Biomedical

Biomedical Signal Processing and Edge Control

PDF Textbook Reference

For detailed theoretical foundations, mathematical proofs, and algorithm derivations, see Chapter 10: Biomedical Signal Processing: EMG-Based Edge ML in the PDF textbook.

The PDF chapter includes:
  • Complete mathematical foundations of EMG signal generation
  • Detailed derivations of filtering and envelope extraction
  • In-depth coverage of signal processing theory (FFT, bandpass filters)
  • Comprehensive analysis of muscle activation patterns
  • Theoretical foundations for bio-signal classification algorithms

Open In Colab

Download Notebook

Learning Objectives

By the end of this lab you should be able to:

  • Explain how EMG (electromyography) signals are generated and measured
  • Implement basic EMG signal processing (filtering, rectification, envelope extraction)
  • Extract time-domain features (MAV, RMS, ZC, WL) and train simple classifiers
  • Map EMG-derived gestures to actuator control (e.g., servo grip)
  • Reason about deploying EMG classifiers on resource-constrained edge devices (latency, memory, and energy)

Theory Summary

Electromyography (EMG) measures the electrical signals produced when muscles contract, providing a natural interface between human intent and machine control. When your brain sends a neural signal to a muscle, motor units (groups of muscle fibers) generate action potentials that propagate as electrical waves. These signals—ranging from 0-10 mV in amplitude and 20-500 Hz in frequency—can be detected at the skin surface using electrodes placed along the muscle fiber direction.

The signal processing pipeline transforms noisy raw EMG into actionable information through four stages: (1) Bandpass filtering (20-500 Hz) removes DC offset and high-frequency noise, (2) Full-wave rectification converts the bipolar signal to unipolar by taking absolute values, (3) Envelope extraction via moving-average smoothing reveals the underlying muscle activation intensity, and (4) Feature extraction computes statistics like MAV (Mean Absolute Value), RMS (Root Mean Square), Zero Crossings, and Waveform Length over sliding windows (typically 100-200 ms).
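Stages (2) and (3) of the pipeline are short enough to sketch in plain, host-runnable C++ (the bandpass stage depends on the specific hardware or filter library and is omitted here; the function names are illustrative, not part of any lab API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Stage 2 - full-wave rectification: |x| converts the bipolar EMG to unipolar.
std::vector<double> rectify(const std::vector<double>& x) {
    std::vector<double> out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) out[i] = std::fabs(x[i]);
    return out;
}

// Stage 3 - envelope extraction: causal moving average over the last
// `window` samples; early outputs average over however many samples exist.
std::vector<double> envelope(const std::vector<double>& x, std::size_t window) {
    std::vector<double> out(x.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        sum += x[i];
        if (i >= window) sum -= x[i - window];   // drop the sample leaving the window
        out[i] = sum / std::min(i + 1, window);
    }
    return out;
}
```

The same subtract-oldest/add-newest update appears later in the Arduino sketches as an O(1) circular-buffer filter.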

Classification approaches range from simple threshold-based detection (fast, low memory, works for 2-3 gestures) to ML-based pattern recognition using Random Forests or tiny neural networks (5-8 input features → 8-16 hidden neurons → 4-6 gesture classes). A crucial challenge is per-user calibration: EMG amplitude varies 10x between individuals due to muscle mass, skin impedance, and electrode placement. Production systems must implement Maximum Voluntary Contraction (MVC) calibration, adjusting thresholds to each user’s signal range. For edge deployment, int8-quantized models of a few hundred bytes achieve 90%+ accuracy with <10 ms inference on Arduino-class devices, enabling responsive prosthetic control with <100 ms end-to-end latency from muscle activation to servo motion.
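The model-size figure can be sanity-checked by counting weights and biases. For the largest configuration quoted above (8 input features, 16 hidden neurons, 6 classes):

\[ P = (8 \times 16 + 16) + (16 \times 6 + 6) = 144 + 16 + 96 + 6 = 262 \text{ parameters} \]

At one byte per parameter after int8 quantization, that is roughly 262 bytes, plus a small overhead for quantization scales and zero-points, which is consistent with the ~300-byte estimate.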

Key Concepts at a Glance
  • EMG Signal Characteristics: 0-10 mV amplitude, 20-500 Hz frequency, most energy at 50-150 Hz; surface EMG is non-invasive but lower SNR than intramuscular
  • Electrode Placement: Differential pair 2-3 cm apart along muscle belly, reference on electrically neutral bone; clean skin with alcohol for low impedance
  • Signal Processing Pipeline: Raw → Bandpass filter (20-500 Hz) → Rectify → Moving average (50-100 samples) → Features
  • Time-Domain Features: MAV = mean intensity, RMS = power, ZC = frequency proxy, WL = complexity; single-pass O(N) computation for efficiency
  • Threshold Classification: Simple hysteresis-based state machines for 2-4 classes; requires per-user calibration via MVC (Maximum Voluntary Contraction)
  • ML Features: 5-feature vector [MAV, RMS, Variance, ZC, WL] trains tiny models: Dense(5→8→4) = (5×8+8) + (8×4+4) = 84 parameters, under 100 bytes int8-quantized
  • Gesture-to-Action Mapping: Classifier output (0=rest, 1=light, 2=grip) → servo positions (0°, 90°, 180°) with low-pass filtering to prevent jitter
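The last bullet can be sketched in host-runnable C++. The angles match the complete control sketch later in this lab; the one-pole exponential low-pass is one simple way to implement the jitter filtering mentioned above, and the smoothing factor is an illustrative choice:

```cpp
// Map a classifier output (0=rest, 1=light, 2=grip) to a target servo angle.
int gestureToAngle(int gesture) {
    static const int angles[] = {0, 90, 180};   // rest, light grip, strong grip
    if (gesture < 0 || gesture > 2) return 0;   // unknown class -> default to rest
    return angles[gesture];
}

// One-pole (exponential) low-pass: the commanded angle moves a fraction
// alpha (0 < alpha <= 1) toward the target each update, suppressing jitter
// from rapid class flips. Smaller alpha = smoother but slower response.
double smoothAngle(double current, double target, double alpha) {
    return current + alpha * (target - current);
}
```

Called at the servo update rate (20-50 Hz per the table below), repeated `smoothAngle` calls converge on the target without the abrupt jumps that cause audible servo chatter.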
Common Pitfalls
  1. Not calibrating per user: EMG amplitude varies 10x between people. A threshold that works for you will fail on others. Always implement MVC calibration asking users to perform maximum contraction and rest, then set thresholds at 20% and 80% of their personal range.

  2. Electrode placement inconsistency: Moving electrodes by 1 cm can halve signal amplitude. Mark placement with skin-safe pen for reproducible sessions. Use fresh electrodes—dried gel creates 10x impedance increase and unusable noise.

  3. Ignoring power supply for servos: Servos draw 500 mA-2 A under load. Powering them from the Arduino’s 5V pin causes brownout resets and corrupted data. Use an external 5-6V supply with a shared ground.

  4. 50/60 Hz power line interference: AC mains create dominant 50/60 Hz noise. Implement notch filter or ensure differential amplifier (MyoWare/AD8232) has high CMRR (>80 dB). Move setup away from power adapters and fluorescent lights.

  5. Window size too small or too large: Windows <100 ms lack statistical stability (features fluctuate wildly). Windows >300 ms add latency making control feel sluggish. For 500 Hz sampling, 50-100 sample windows (100-200 ms) balance responsiveness and noise rejection.
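Pitfall 4 suggests a notch filter; below is a minimal host-side sketch of a standard RBJ "Audio EQ Cookbook" biquad notch. It uses floating point for clarity (an AVR deployment would convert it to fixed point), and the 50 Hz / 500 Hz / Q = 2 values in the usage are illustrative:

```cpp
#include <cmath>

// Biquad notch filter (RBJ Audio EQ Cookbook form) to suppress mains hum.
// f0 = notch frequency (50 or 60 Hz), fs = sampling rate, Q = sharpness.
struct NotchFilter {
    double b0, b1, b2, a1, a2;              // coefficients, normalized by a0
    double x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // last two input/output samples

    NotchFilter(double f0, double fs, double Q) {
        const double PI = 3.14159265358979323846;
        double w0 = 2.0 * PI * f0 / fs;
        double alpha = std::sin(w0) / (2.0 * Q);
        double a0 = 1.0 + alpha;
        b0 = 1.0 / a0;
        b1 = -2.0 * std::cos(w0) / a0;
        b2 = 1.0 / a0;
        a1 = -2.0 * std::cos(w0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    // Direct Form I update: one input sample in, one filtered sample out.
    double process(double x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};
```

A sine at exactly f0 is driven toward zero after the transient settles, while DC and frequencies far from f0 pass with unity gain, so the EMG band outside the notch is preserved.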

Quick Reference

Key Formulas

Time-Domain Features (computed over window of N samples)

\[ \text{MAV} = \frac{1}{N}\sum_{i=1}^{N}|x_i| \quad \text{(Mean Absolute Value)} \]

\[ \text{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}x_i^2} \quad \text{(Root Mean Square)} \]

\[ \text{ZC} = \sum_{i=1}^{N-1}\begin{cases}1 & \text{if } x_i \times x_{i+1} < 0 \\ 0 & \text{otherwise}\end{cases} \quad \text{(Zero Crossings)} \]

\[ \text{WL} = \sum_{i=1}^{N-1}|x_{i+1} - x_i| \quad \text{(Waveform Length)} \]

Hysteresis Threshold Logic \[ \text{State Change} = \begin{cases} \text{Activate} & \text{if } x > \text{Threshold} + \text{Hysteresis} \\ \text{Deactivate} & \text{if } x < \text{Threshold} - \text{Hysteresis} \\ \text{Maintain} & \text{otherwise} \end{cases} \]

Important Parameter Values

Parameter | Typical Value | Range | Impact
--- | --- | --- | ---
Sampling Rate | 500-1000 Hz | 200-2000 Hz | Nyquist requires >2× signal bandwidth (e.g., >400 Hz for content up to 200 Hz)
Bandpass Filter | 20-500 Hz | 10-500 Hz | Removes DC drift and high-frequency noise
Window Size | 50-100 samples | 25-200 | 100-200 ms at 500 Hz; larger = smoother but slower
Window Stride | 10-25 samples | 1-50 | 50% overlap typical; stride = 1 for continuous output
Hysteresis | 20-50 ADC units | 10-100 | Prevents rapid state oscillation near a threshold
Rest Threshold | 20% of MVC | 10-30% | Below this = rest/no activation
Activation Threshold | 40-60% of MVC | 30-80% | Above this = intentional gesture
Servo Update Rate | 20-50 Hz | 10-100 Hz | Balances responsiveness vs. jitter
Model Size | 200-500 params | 100-2000 | Tiny MLP fits on any Arduino
Inference Latency | 1-10 ms | 0.5-20 ms | Target <100 ms end-to-end for natural control

Essential Code Patterns

const int EMG_PIN = A0;            // EMG sensor analog input
const int WINDOW_SIZE = 50;
int buffer[WINDOW_SIZE];           // circular buffer (globals are zero-initialized)
int bufferIndex = 0;
long runningSum = 0;

// O(1) moving-average filter: subtract the oldest sample, add the newest
int getFilteredEMG() {
    int raw = analogRead(EMG_PIN);
    runningSum -= buffer[bufferIndex];
    buffer[bufferIndex] = raw;
    runningSum += raw;
    bufferIndex = (bufferIndex + 1) % WINDOW_SIZE;
    return runningSum / WINDOW_SIZE;
}

void extractFeatures(float* window, int size, float& mav, float& rms) {
    float sum = 0, sumSquares = 0;
    for (int i = 0; i < size; i++) {
        float val = window[i];
        sum += abs(val);
        sumSquares += val * val;
    }
    mav = sum / size;
    rms = sqrt(sumSquares / size);
}
int classifyWithHysteresis(int value, int& currentState) {
    const int THRESHOLD = 400;
    const int HYSTERESIS = 30;

    if (currentState == 0 && value > THRESHOLD + HYSTERESIS) {
        currentState = 1;
    } else if (currentState == 1 && value < THRESHOLD - HYSTERESIS) {
        currentState = 0;
    }
    return currentState;
}
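The patterns above cover the moving-average filter, MAV/RMS, and hysteresis. The two remaining Quick Reference features (ZC and WL) follow directly from their formulas; here they are as plain C++ so they run off-device too (note that ZC only makes sense on a zero-mean, baseline-removed signal):

```cpp
#include <cstdlib>

// Zero crossings: counts sign changes between consecutive samples.
// Assumes a zero-mean signal; subtract the DC baseline first.
int zeroCrossings(const int* x, int n) {
    int zc = 0;
    for (int i = 1; i < n; i++) {
        if ((x[i - 1] > 0 && x[i] <= 0) || (x[i - 1] <= 0 && x[i] > 0)) zc++;
    }
    return zc;
}

// Waveform length: cumulative absolute sample-to-sample change,
// a cheap proxy for signal complexity.
long waveformLength(const int* x, int n) {
    long wl = 0;
    for (int i = 1; i < n; i++) wl += std::abs(x[i] - x[i - 1]);
    return wl;
}
```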

Complete Code Examples

This section provides production-ready Arduino code for EMG gesture control systems. Each example builds upon the previous, demonstrating best practices for biomedical signal processing on resource-constrained microcontrollers.

1. Complete Arduino Gesture Control Sketch

This complete sketch reads an EMG sensor, applies filtering and feature extraction, classifies gestures using a threshold-based approach, and controls a servo motor. Suitable for Arduino Uno/Nano with 2KB RAM.

/*
 * EMG Gesture Control System
 * Hardware: MyoWare/AD8232 EMG sensor on A0, Servo on pin 9
 * Features: Moving average filter, MAV feature extraction,
 *           hysteresis-based classification, servo control
 */

#include <Servo.h>

// Pin definitions
const int EMG_PIN = A0;
const int SERVO_PIN = 9;

// Signal processing parameters
const int FILTER_WINDOW = 50;      // 50 samples for moving average
const int FEATURE_WINDOW = 100;    // 100 samples for MAV calculation
const int SAMPLE_RATE = 500;       // 500 Hz sampling rate
const int SAMPLE_INTERVAL = 1000000 / SAMPLE_RATE; // Microseconds

// Classifier parameters (will be set during calibration)
int restThreshold = 100;            // Below this = rest
int activeThreshold = 300;          // Above this = active gesture
const int HYSTERESIS = 30;          // Prevents flickering

// Servo positions for different gestures
const int SERVO_REST = 0;           // Rest position (degrees)
const int SERVO_LIGHT = 90;         // Light grip
const int SERVO_STRONG = 180;       // Strong grip

// State variables
Servo gripServo;
int currentGesture = 0;             // 0=rest, 1=light, 2=strong
unsigned long lastSampleTime = 0;

// Moving average filter buffers
int filterBuffer[FILTER_WINDOW];
int filterIndex = 0;
long filterSum = 0;

// Feature extraction buffer (circular buffer)
int featureBuffer[FEATURE_WINDOW];
int featureIndex = 0;
bool featureBufferFilled = false;

void setup() {
  Serial.begin(115200);
  pinMode(EMG_PIN, INPUT);

  // Initialize servo
  gripServo.attach(SERVO_PIN);
  gripServo.write(SERVO_REST);

  // Initialize filter buffer
  for (int i = 0; i < FILTER_WINDOW; i++) {
    filterBuffer[i] = 0;
  }

  // Initialize feature buffer
  for (int i = 0; i < FEATURE_WINDOW; i++) {
    featureBuffer[i] = 0;
  }

  Serial.println("EMG Gesture Control System");
  Serial.println("Starting calibration in 3 seconds...");
  delay(3000);

  // Run calibration routine
  calibrate();

  Serial.println("Calibration complete. Starting gesture control...");
  Serial.println("Gesture,MAV,ServoAngle");
}

void loop() {
  unsigned long currentTime = micros();

  // Maintain precise sampling rate
  if (currentTime - lastSampleTime >= SAMPLE_INTERVAL) {
    lastSampleTime = currentTime;

    // Read and filter EMG signal
    int rawEMG = analogRead(EMG_PIN);
    int filteredEMG = applyMovingAverage(rawEMG);

    // Add to feature window and extract features when ready
    addToFeatureWindow(filteredEMG);

    if (featureBufferFilled) {
      // Calculate MAV feature
      int mav = calculateMAV();

      // Classify gesture with hysteresis
      int gesture = classifyGesture(mav);

      // Update servo if gesture changed
      if (gesture != currentGesture) {
        currentGesture = gesture;
        updateServo(currentGesture);
      }

      // Output for Serial Plotter (every 10th sample to reduce clutter)
      static int plotCounter = 0;
      if (++plotCounter >= 10) {
        plotCounter = 0;
        Serial.print(currentGesture);
        Serial.print(",");
        Serial.print(mav);
        Serial.print(",");
        Serial.println(gripServo.read());
      }
    }
  }
}

// O(1) moving average filter using circular buffer
int applyMovingAverage(int newSample) {
  // Subtract oldest value from sum
  filterSum -= filterBuffer[filterIndex];

  // Add new value to buffer and sum
  filterBuffer[filterIndex] = newSample;
  filterSum += newSample;

  // Advance circular buffer index
  filterIndex = (filterIndex + 1) % FILTER_WINDOW;

  // Return average
  return filterSum / FILTER_WINDOW;
}

// Add sample to feature extraction window
void addToFeatureWindow(int sample) {
  featureBuffer[featureIndex] = sample;
  featureIndex = (featureIndex + 1) % FEATURE_WINDOW;

  // Mark buffer as filled after first complete cycle
  if (featureIndex == 0) {
    featureBufferFilled = true;
  }
}

// Calculate Mean Absolute Value (MAV) over feature window
int calculateMAV() {
  long sum = 0;

  // Calculate mean over the feature window
  for (int i = 0; i < FEATURE_WINDOW; i++) {
    // Note: raw ADC readings are already non-negative (unipolar),
    // so no explicit rectification step is needed before averaging
    sum += featureBuffer[i];
  }

  return sum / FEATURE_WINDOW;
}

// Classify gesture using threshold with hysteresis
int classifyGesture(int mav) {
  static int state = 0;

  // Three-state classifier: 0=rest, 1=light, 2=strong
  if (state == 0) {
    // Currently at rest
    if (mav > restThreshold + HYSTERESIS) {
      state = 1; // Transition to light
    }
  } else if (state == 1) {
    // Currently light grip
    if (mav < restThreshold - HYSTERESIS) {
      state = 0; // Transition to rest
    } else if (mav > activeThreshold + HYSTERESIS) {
      state = 2; // Transition to strong
    }
  } else if (state == 2) {
    // Currently strong grip
    if (mav < activeThreshold - HYSTERESIS) {
      state = 1; // Transition to light
    }
  }

  return state;
}

// Update servo position based on gesture
void updateServo(int gesture) {
  int targetAngle;

  switch(gesture) {
    case 0:
      targetAngle = SERVO_REST;
      break;
    case 1:
      targetAngle = SERVO_LIGHT;
      break;
    case 2:
      targetAngle = SERVO_STRONG;
      break;
    default:
      targetAngle = SERVO_REST;
  }

  // Smooth servo movement to reduce jitter
  int currentAngle = gripServo.read();
  if (abs(targetAngle - currentAngle) > 5) {
    gripServo.write(targetAngle);
  }
}

// Calibration routine: measures rest and maximum contraction
void calibrate() {
  int restSamples = 0;
  long restSum = 0;
  int mvcSamples = 0;
  long mvcSum = 0;

  // Measure rest baseline
  Serial.println("RELAX your muscle completely for 3 seconds...");
  delay(1000);

  unsigned long startTime = millis();
  while (millis() - startTime < 3000) {
    int raw = analogRead(EMG_PIN);
    restSum += raw;
    restSamples++;
    delay(10);
  }
  int restBaseline = restSum / restSamples;

  Serial.print("Rest baseline: ");
  Serial.println(restBaseline);

  delay(1000);

  // Measure maximum voluntary contraction
  Serial.println("CONTRACT your muscle as hard as possible for 3 seconds...");
  delay(1000);

  startTime = millis();
  while (millis() - startTime < 3000) {
    int raw = analogRead(EMG_PIN);
    mvcSum += raw;
    mvcSamples++;
    delay(10);
  }
  int mvcPeak = mvcSum / mvcSamples;  // mean over the contraction window (more robust than a single peak sample)

  Serial.print("MVC peak: ");
  Serial.println(mvcPeak);

  // Set thresholds as percentages of user's range
  int range = mvcPeak - restBaseline;
  restThreshold = restBaseline + (range * 20) / 100;      // 20% of range
  activeThreshold = restBaseline + (range * 60) / 100;    // 60% of range

  Serial.print("Rest threshold: ");
  Serial.println(restThreshold);
  Serial.print("Active threshold: ");
  Serial.println(activeThreshold);
}

Key Features:

  • Precise timing: Uses micros() for 500 Hz sampling without drift
  • O(1) filtering: Circular buffer moving average with constant-time updates
  • User calibration: Automatic MVC-based threshold adaptation
  • Hysteresis: Prevents rapid state oscillation near thresholds
  • Memory efficient: ~300 bytes of buffer storage (150 two-byte ints on AVR; fits easily on Arduino Uno)
  • Smooth servo control: Deadband prevents jitter from small fluctuations

Exercise: Modify the calibration routine to save thresholds to EEPROM so users don’t need to recalibrate every startup.

2. Efficient Circular Buffer Implementation

A reusable circular buffer class for windowed signal processing. Provides O(1) insertion and O(1) mean calculation, essential for real-time EMG processing on microcontrollers.

/*
 * CircularBuffer - Memory-efficient FIFO for real-time signal processing
 * Optimized for Arduino with minimal memory overhead
 */

template <typename T, int SIZE>
class CircularBuffer {
private:
  T buffer[SIZE];
  int writeIndex;
  int count;
  long runningSum;        // For O(1) mean calculation
  bool trackSum;

public:
  CircularBuffer(bool enableMean = false)
    : writeIndex(0), count(0), runningSum(0), trackSum(enableMean) {
    // Initialize buffer to zero
    for (int i = 0; i < SIZE; i++) {
      buffer[i] = 0;
    }
  }

  // Add new sample (FIFO - oldest is overwritten)
  void push(T value) {
    if (trackSum) {
      // Update running sum for O(1) mean
      runningSum -= buffer[writeIndex];
      runningSum += value;
    }

    buffer[writeIndex] = value;
    writeIndex = (writeIndex + 1) % SIZE;

    if (count < SIZE) {
      count++;
    }
  }

  // Get value at index (0 = oldest, SIZE-1 = newest)
  T get(int index) const {
    if (index >= count) return 0;

    // Calculate actual buffer position
    int readIndex = (writeIndex - count + index + SIZE) % SIZE;
    return buffer[readIndex];
  }

  // Get most recent value
  T latest() const {
    if (count == 0) return 0;
    int lastIndex = (writeIndex - 1 + SIZE) % SIZE;
    return buffer[lastIndex];
  }

  // Check if buffer is full
  bool isFull() const {
    return count == SIZE;
  }

  // Get number of samples currently in buffer
  int size() const {
    return count;
  }

  // Get buffer capacity
  int capacity() const {
    return SIZE;
  }

  // Clear buffer
  void clear() {
    writeIndex = 0;
    count = 0;
    runningSum = 0;
    for (int i = 0; i < SIZE; i++) {
      buffer[i] = 0;
    }
  }

  // O(1) mean calculation (only if tracking enabled)
  float mean() const {
    if (!trackSum || count == 0) return 0.0;
    return (float)runningSum / count;
  }

  // O(N) MAV calculation over current window
  float mav() const {
    if (count == 0) return 0.0;
    long sum = 0;
    for (int i = 0; i < count; i++) {
      sum += abs(buffer[i]);
    }
    return (float)sum / count;
  }

  // O(N) RMS calculation over current window
  float rms() const {
    if (count == 0) return 0.0;
    long sumSquares = 0;
    for (int i = 0; i < count; i++) {
      long val = buffer[i];
      sumSquares += val * val;
    }
    return sqrt((float)sumSquares / count);
  }

  // O(N) variance calculation
  float variance() const {
    if (count < 2) return 0.0;
    float m = mean();
    float sumSquaredDiff = 0;
    for (int i = 0; i < count; i++) {
      float diff = buffer[i] - m;
      sumSquaredDiff += diff * diff;
    }
    return sumSquaredDiff / count;
  }

  // O(N) zero crossings count
  int zeroCrossings() const {
    if (count < 2) return 0;
    int zc = 0;
    for (int i = 1; i < count; i++) {
      if ((buffer[i-1] > 0 && buffer[i] <= 0) ||
          (buffer[i-1] <= 0 && buffer[i] > 0)) {
        zc++;
      }
    }
    return zc;
  }

  // O(N) waveform length
  long waveformLength() const {
    if (count < 2) return 0;
    long wl = 0;
    for (int i = 1; i < count; i++) {
      wl += abs(buffer[i] - buffer[i-1]);
    }
    return wl;
  }

  // Copy current window to array (for external processing)
  void copyTo(T* dest) const {
    for (int i = 0; i < count; i++) {
      dest[i] = get(i);
    }
  }
};

// Example usage for EMG processing
void exampleUsage() {
  // Create 100-sample buffer with mean tracking enabled
  CircularBuffer<int, 100> emgBuffer(true);

  const int EMG_PIN = A0;

  // Collect samples
  for (int i = 0; i < 100; i++) {
    int sample = analogRead(EMG_PIN);
    emgBuffer.push(sample);
    delay(2); // 500 Hz sampling
  }

  // Extract features when buffer is full
  if (emgBuffer.isFull()) {
    float mav = emgBuffer.mav();
    float rms = emgBuffer.rms();
    int zc = emgBuffer.zeroCrossings();
    long wl = emgBuffer.waveformLength();

    Serial.print("MAV: "); Serial.print(mav);
    Serial.print(", RMS: "); Serial.print(rms);
    Serial.print(", ZC: "); Serial.print(zc);
    Serial.print(", WL: "); Serial.println(wl);
  }
}

Key Features:

  • Template-based: Configurable data type and size at compile time
  • Zero dynamic allocation: All memory allocated at compile time (stack-based)
  • O(1) push and mean: Constant-time operations for real-time use
  • Complete feature set: MAV, RMS, variance, ZC, WL built-in
  • Memory footprint: SIZE × sizeof(T) + 12 bytes overhead

Memory Examples:
  • CircularBuffer<int, 100> = 200 bytes (100 samples × 2-byte AVR int)
  • CircularBuffer<int, 200> = 400 bytes
  • Fits comfortably on Arduino Uno (2KB SRAM)

Exercise: Create a two-channel version for differential EMG measurements from antagonistic muscle pairs (e.g., biceps/triceps).

3. Threshold Classifier with Hysteresis

A robust classifier supporting multiple gesture states with hysteresis to prevent flickering at threshold boundaries. Includes calibration and state transition logic.

/*
 * Multi-State Threshold Classifier with Hysteresis
 * Supports 2-5 gesture classes with configurable thresholds
 */

class GestureClassifier {
private:
  // Configuration
  static const int MAX_STATES = 5;
  int numStates;
  int thresholds[MAX_STATES - 1];     // N states need N-1 thresholds
  int hysteresis;

  // State tracking
  int currentState;
  unsigned long lastTransitionTime;
  unsigned long minStateDuration;     // Minimum time in state (debouncing)

  // Calibration data
  bool isCalibrated;
  int restBaseline;
  int mvcPeak;

public:
  GestureClassifier(int states = 3, int hyst = 30, unsigned long minDuration = 100)
    : numStates(states), hysteresis(hyst), currentState(0),
      lastTransitionTime(0), minStateDuration(minDuration),
      isCalibrated(false), restBaseline(0), mvcPeak(1023) {

    // Initialize thresholds to evenly spaced values
    for (int i = 0; i < numStates - 1; i++) {
      thresholds[i] = (1023 * (i + 1)) / numStates;
    }
  }

  // Calibrate using rest and MVC measurements
  void calibrate(int rest, int mvc) {
    restBaseline = rest;
    mvcPeak = mvc;

    // Set thresholds as percentages of user's dynamic range
    int range = mvcPeak - restBaseline;

    if (numStates == 2) {
      // Binary: rest vs active
      thresholds[0] = restBaseline + range / 2;
    } else if (numStates == 3) {
      // Three states: rest, light, strong
      thresholds[0] = restBaseline + (range * 25) / 100;  // 25% for rest
      thresholds[1] = restBaseline + (range * 65) / 100;  // 65% for strong
    } else if (numStates == 4) {
      // Four states: rest, light, medium, strong
      thresholds[0] = restBaseline + (range * 20) / 100;
      thresholds[1] = restBaseline + (range * 45) / 100;
      thresholds[2] = restBaseline + (range * 70) / 100;
    }

    isCalibrated = true;
  }

  // Manually set thresholds (for testing or fixed systems)
  void setThresholds(int* newThresholds, int count) {
    for (int i = 0; i < min(count, numStates - 1); i++) {
      thresholds[i] = newThresholds[i];
    }
    isCalibrated = true;
  }

  // Classify a feature value with hysteresis
  int classify(int value) {
    if (!isCalibrated) {
      return 0; // Default to rest if not calibrated
    }

    unsigned long now = millis();

    // Enforce minimum state duration (debouncing)
    if (now - lastTransitionTime < minStateDuration) {
      return currentState;
    }

    int newState = currentState;

    // Determine new state based on thresholds and hysteresis
    if (currentState == 0) {
      // Currently in state 0 (rest)
      if (value > thresholds[0] + hysteresis) {
        newState = 1;
      }
    } else if (currentState == numStates - 1) {
      // Currently in highest state
      if (value < thresholds[numStates - 2] - hysteresis) {
        newState = numStates - 2;
      }
    } else {
      // Currently in middle state
      // Check for transition up
      if (value > thresholds[currentState] + hysteresis) {
        newState = currentState + 1;
      }
      // Check for transition down
      else if (value < thresholds[currentState - 1] - hysteresis) {
        newState = currentState - 1;
      }
    }

    // Update state if changed
    if (newState != currentState) {
      currentState = newState;
      lastTransitionTime = now;
    }

    return currentState;
  }

  // Get current state without updating
  int getState() const {
    return currentState;
  }

  // Reset to rest state
  void reset() {
    currentState = 0;
    lastTransitionTime = millis();
  }

  // Get threshold for debugging
  int getThreshold(int index) const {
    if (index < 0 || index >= numStates - 1) return -1;
    return thresholds[index];
  }

  // Check if value is near a threshold (useful for UI feedback)
  bool isNearThreshold(int value, int margin = 50) const {
    for (int i = 0; i < numStates - 1; i++) {
      if (abs(value - thresholds[i]) < margin) {
        return true;
      }
    }
    return false;
  }

  // Get state name (for debugging/display)
  const char* getStateName() const {
    static const char* names[] = {"Rest", "Light", "Medium", "Strong", "Maximum"};
    if (currentState >= 0 && currentState < MAX_STATES) {
      return names[currentState];
    }
    return "Unknown";
  }
};

// Example usage with EMG system
void exampleClassifierUsage() {
  GestureClassifier classifier(3, 30, 200);  // 3 states, 30 hysteresis, 200ms min duration

  // Simulate calibration measurements
  int restMeasurement = 120;      // Measured during rest
  int mvcMeasurement = 850;       // Measured during max contraction
  classifier.calibrate(restMeasurement, mvcMeasurement);

  Serial.println("Calibration complete:");
  Serial.print("Threshold 0->1: "); Serial.println(classifier.getThreshold(0));
  Serial.print("Threshold 1->2: "); Serial.println(classifier.getThreshold(1));

  // Process incoming EMG features
  int emgFeature = 450;  // Example MAV value
  int gesture = classifier.classify(emgFeature);

  Serial.print("Gesture: ");
  Serial.print(gesture);
  Serial.print(" (");
  Serial.print(classifier.getStateName());
  Serial.println(")");
}

Key Features:

  • Configurable states: 2-5 gesture classes with automatic threshold spacing
  • Bidirectional hysteresis: Prevents flickering during both up and down transitions
  • Temporal debouncing: Minimum state duration prevents rapid state changes
  • User calibration: Automatic threshold adaptation based on MVC measurements
  • Diagnostic tools: Threshold inspection and near-threshold detection

Exercise: Add a “confidence” metric that returns how far the current value is from the nearest threshold, useful for UI feedback.

4. Real-Time Feature Extraction

High-performance feature extraction using fixed-point arithmetic to avoid expensive floating-point operations on 8-bit microcontrollers. Computes MAV, RMS, ZC, and WL in real-time.

/*
 * Real-Time EMG Feature Extractor
 * Optimized for Arduino with integer arithmetic (no floating point)
 * Uses fixed-point math for speed on 8-bit MCUs
 */

class FeatureExtractor {
private:
  // Fixed-point scaling (multiply by 256 for 8-bit fractional precision)
  static const int SCALE = 256;
  static const int SCALE_BITS = 8;

public:
  // Calculate MAV using integer arithmetic
  // Returns value scaled by SCALE (divide by 256 for actual value)
  static long calculateMAV_int(int* window, int size) {
    long sum = 0;
    for (int i = 0; i < size; i++) {
      sum += abs(window[i]);
    }
    // Return average scaled by SCALE for precision
    return (sum * SCALE) / size;
  }

  // Calculate RMS using integer arithmetic
  // Returns value scaled by SCALE
  static long calculateRMS_int(int* window, int size) {
    long sumSquares = 0;
    for (int i = 0; i < size; i++) {
      long val = window[i];
      sumSquares += val * val;
    }
    long meanSquare = sumSquares / size;

    // Integer square root (faster than sqrt()), then apply the SCALE
    // factor so the result matches the documented scaling
    return isqrt(meanSquare) * SCALE;
  }

  // Calculate zero crossings (already integer)
  // Note: assumes a zero-mean signal; subtract the DC baseline first,
  // since raw 10-bit ADC readings (0-1023) never cross zero
  static int calculateZC(int* window, int size, int threshold = 5) {
    int zc = 0;
    for (int i = 1; i < size; i++) {
      // Only count crossings that exceed threshold (noise rejection)
      if (abs(window[i] - window[i-1]) > threshold) {
        if ((window[i-1] >= 0 && window[i] < 0) ||
            (window[i-1] < 0 && window[i] >= 0)) {
          zc++;
        }
      }
    }
    return zc;
  }

  // Calculate waveform length (already integer)
  static long calculateWL(int* window, int size) {
    long wl = 0;
    for (int i = 1; i < size; i++) {
      wl += abs(window[i] - window[i-1]);
    }
    return wl;
  }

  // Calculate variance using integer arithmetic
  // Returns the raw (unscaled) variance in ADC units squared
  static long calculateVariance_int(int* window, int size) {
    // First pass: calculate mean
    long sum = 0;
    for (int i = 0; i < size; i++) {
      sum += window[i];
    }
    int mean = sum / size;

    // Second pass: calculate sum of squared differences
    long sumSquaredDiff = 0;
    for (int i = 0; i < size; i++) {
      int diff = window[i] - mean;
      sumSquaredDiff += (long)diff * diff;
    }

    return sumSquaredDiff / size;
  }

  // Extract all features efficiently (single pass where possible)
  struct Features {
    long mav;        // Mean Absolute Value (scaled by 256)
    long rms;        // Root Mean Square (scaled by 256)
    int zc;          // Zero Crossings (count)
    long wl;         // Waveform Length (sum)
    long variance;   // Variance (unscaled, ADC units squared)
  };

  static Features extractAll(int* window, int size) {
    Features f;

    // Single pass for MAV, RMS, and variance calculation
    long sumAbs = 0;
    long sumSquares = 0;
    long sum = 0;

    for (int i = 0; i < size; i++) {
      int val = window[i];
      sum += val;
      sumAbs += abs(val);
      sumSquares += (long)val * val;
    }

    // MAV
    f.mav = (sumAbs * SCALE) / size;

    // RMS
    long meanSquare = sumSquares / size;
    f.rms = isqrt(meanSquare) * SCALE;  // consistent with the documented SCALE factor

    // Variance (requires mean)
    int mean = sum / size;
    long sumSquaredDiff = 0;
    for (int i = 0; i < size; i++) {
      int diff = window[i] - mean;
      sumSquaredDiff += (long)diff * diff;
    }
    f.variance = sumSquaredDiff / size;

    // ZC and WL (require sequential comparison)
    f.zc = calculateZC(window, size);
    f.wl = calculateWL(window, size);

    return f;
  }

  // Fast integer square root (Babylonian method)
  static long isqrt(long n) {
    if (n <= 0) return 0;

    long x = n;
    long y = (x + 1) / 2;

    while (y < x) {
      x = y;
      y = (x + n / x) / 2;
    }

    return x;
  }

  // Normalize features to 0-1000 range for classification
  static void normalizeFeatures(Features& f, int restMAV, int mvcMAV) {
    // Normalize MAV to percentage of dynamic range
    f.mav = constrain(map(f.mav / SCALE, restMAV, mvcMAV, 0, 1000), 0, 1000);
  }
};

// Complete real-time example with timing measurements
void realtimeFeatureExample() {
  const int WINDOW_SIZE = 100;
  int window[WINDOW_SIZE];

  // Simulate EMG data collection
  for (int i = 0; i < WINDOW_SIZE; i++) {
    window[i] = analogRead(A0);
    delayMicroseconds(2000); // 500 Hz
  }

  // Measure feature extraction time
  unsigned long startTime = micros();
  FeatureExtractor::Features features = FeatureExtractor::extractAll(window, WINDOW_SIZE);
  unsigned long extractionTime = micros() - startTime;

  // Output results
  Serial.println("=== Feature Extraction Results ===");
  Serial.print("MAV: "); Serial.println(features.mav / 256.0, 2);
  Serial.print("RMS: "); Serial.println(features.rms / 256.0, 2);
  Serial.print("ZC: "); Serial.println(features.zc);
  Serial.print("WL: "); Serial.println(features.wl);
  Serial.print("Variance: "); Serial.println(features.variance);
  Serial.print("Extraction time: "); Serial.print(extractionTime); Serial.println(" us");

  // Check if real-time capable (should be <2ms for 500Hz sampling)
  if (extractionTime < 2000) {
    Serial.println("REAL-TIME CAPABLE at 500 Hz");
  } else {
    Serial.println("WARNING: Too slow for real-time at 500 Hz");
  }
}

// Feature vector for ML classifier (normalized 0-1000 range)
struct MLFeatureVector {
  int mav;
  int rms;
  int zc;
  int wl_norm;     // Normalized waveform length
  int variance_norm;

  // Serialize for TensorFlow Lite input
  void copyToArray(int* arr) {
    arr[0] = mav;
    arr[1] = rms;
    arr[2] = zc;
    arr[3] = wl_norm;
    arr[4] = variance_norm;
  }
};

// Create normalized feature vector for ML inference
MLFeatureVector createMLVector(int* window, int size, int restMAV, int mvcMAV) {
  FeatureExtractor::Features f = FeatureExtractor::extractAll(window, size);

  MLFeatureVector mlVec;

  // Normalize to 0-1000 range
  mlVec.mav = constrain(map(f.mav / 256, restMAV, mvcMAV, 0, 1000), 0, 1000);
  mlVec.rms = constrain(map(f.rms / 256, restMAV, mvcMAV, 0, 1000), 0, 1000);
  mlVec.zc = constrain(f.zc * 10, 0, 1000);  // Scale ZC (typically 0-100)
  mlVec.wl_norm = constrain(map(f.wl, 0, 10000, 0, 1000), 0, 1000);
  mlVec.variance_norm = constrain(map(f.variance / 256, 0, 5000, 0, 1000), 0, 1000);

  return mlVec;
}

Performance Characteristics:

  • Integer-only math: No floating point operations (2-10x faster on 8-bit MCUs)
  • Fixed-point scaling: 256× multiplier provides 8-bit fractional precision
  • Single-pass optimization: MAV + RMS + variance computed in one loop
  • Typical execution time: 800-1500 μs for 100 samples on Arduino Uno (16 MHz)
  • Memory footprint: Zero dynamic allocation, ~100 bytes stack usage

Real-World Performance:

  • Arduino Uno (16 MHz): ~1.2 ms for a 100-sample window
  • Arduino Nano 33 BLE (64 MHz): ~0.3 ms for a 100-sample window
  • Both support real-time 500 Hz sampling with headroom left for classification

Exercise: Implement a sliding window feature extractor that updates features incrementally as new samples arrive, avoiding full recalculation each time.
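One way to prototype this exercise before porting it to the MCU is in Python: keep running sums so MAV and WL update in O(1) per new sample instead of rescanning the whole window. The class name `IncrementalFeatures` is ours; RMS and variance can be added the same way with a running sum of squares.

```python
from collections import deque

class IncrementalFeatures:
    """Sliding-window MAV and WL maintained with O(1) work per new sample."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.buf = deque()
        self.abs_sum = 0.0   # running sum of |x|            -> MAV
        self.wl_sum = 0.0    # running sum of |x[i+1]-x[i]|  -> WL

    def push(self, x):
        if self.buf:
            self.wl_sum += abs(x - self.buf[-1])
        self.buf.append(x)
        self.abs_sum += abs(x)
        if len(self.buf) > self.window_size:
            old = self.buf.popleft()
            self.abs_sum -= abs(old)
            self.wl_sum -= abs(self.buf[0] - old)  # drop the evicted difference

    def mav(self):
        return self.abs_sum / len(self.buf) if self.buf else 0.0

    def wl(self):
        return self.wl_sum

# Sanity check against brute-force recomputation over the last window
inc = IncrementalFeatures(4)
data = [1, -2, 3, -4, 5, -6]
for x in data:
    inc.push(x)
window = data[-4:]
assert inc.mav() == sum(abs(v) for v in window) / 4
assert inc.wl() == sum(abs(window[i + 1] - window[i]) for i in range(3))
print("incremental features match brute force")
```

The same bookkeeping translates directly to C on the MCU: a circular buffer plus two accumulator variables replaces the per-window loops in `extractAll`.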

Try It Yourself

These interactive Python examples demonstrate the core concepts of EMG signal processing. Run them to understand filtering, feature extraction, and classification before implementing on hardware.

EMG Signal Simulation and Filtering

Generate synthetic EMG signals and apply bandpass filtering to remove noise:

Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# Generate synthetic EMG signal
def generate_emg_signal(duration=2, fs=1000, noise_level=0.1):
    """
    Generate synthetic EMG signal with muscle activation patterns

    Parameters:
    - duration: signal length in seconds
    - fs: sampling frequency in Hz
    - noise_level: amplitude of added noise
    """
    t = np.linspace(0, duration, int(fs * duration))

    # Base EMG signal: sum of multiple frequency components (20-200 Hz)
    emg = np.zeros_like(t)
    for freq in [50, 80, 120, 150]:
        emg += np.sin(2 * np.pi * freq * t) * np.random.uniform(0.5, 1.5)

    # Add muscle activation bursts (higher amplitude periods)
    activation_mask = (t > 0.5) & (t < 1.0) | (t > 1.5) & (t < 1.8)
    emg[activation_mask] *= 3

    # Add noise
    emg += np.random.normal(0, noise_level, len(t))

    return t, emg

# Apply bandpass filter (20-450 Hz)
def bandpass_filter(signal_data, fs=1000, lowcut=20, highcut=450, order=4):
    """Apply zero-phase Butterworth bandpass filter.

    Note: highcut must be strictly below the Nyquist frequency (fs/2),
    otherwise scipy raises "Digital filter critical frequencies must be
    0 < Wn < 1". With fs=1000 Hz that means highcut < 500 Hz, so we use
    450 Hz here.
    """
    nyquist = fs / 2
    low = lowcut / nyquist
    high = highcut / nyquist
    b, a = signal.butter(order, [low, high], btype='band')
    filtered = signal.filtfilt(b, a, signal_data)
    return filtered

# Generate and filter signal
t, raw_emg = generate_emg_signal(duration=2, fs=1000)
filtered_emg = bandpass_filter(raw_emg, fs=1000)

# Full-wave rectification
rectified_emg = np.abs(filtered_emg)

# Envelope extraction (moving average)
window_size = 50
envelope = np.convolve(rectified_emg, np.ones(window_size)/window_size, mode='same')

# Visualize
fig, axes = plt.subplots(4, 1, figsize=(12, 10))

axes[0].plot(t, raw_emg, linewidth=0.5)
axes[0].set_title('1. Raw EMG Signal (with noise)', fontsize=12, fontweight='bold')
axes[0].set_ylabel('Amplitude (mV)')
axes[0].grid(True, alpha=0.3)

axes[1].plot(t, filtered_emg, linewidth=0.5, color='orange')
axes[1].set_title('2. Bandpass Filtered (20-450 Hz)', fontsize=12, fontweight='bold')
axes[1].set_ylabel('Amplitude (mV)')
axes[1].grid(True, alpha=0.3)

axes[2].plot(t, rectified_emg, linewidth=0.5, color='green')
axes[2].set_title('3. Full-Wave Rectified', fontsize=12, fontweight='bold')
axes[2].set_ylabel('Amplitude (mV)')
axes[2].grid(True, alpha=0.3)

axes[3].plot(t, envelope, linewidth=2, color='red')
axes[3].set_title('4. Envelope (Moving Average)', fontsize=12, fontweight='bold')
axes[3].set_xlabel('Time (seconds)')
axes[3].set_ylabel('Amplitude (mV)')
axes[3].grid(True, alpha=0.3)
axes[3].fill_between(t, envelope, alpha=0.3, color='red')

plt.tight_layout()
plt.show()

print(f"Signal duration: {t[-1]:.2f} seconds")
print(f"Sampling rate: {len(t) / t[-1]:.0f} Hz")
print(f"Raw signal range: [{raw_emg.min():.2f}, {raw_emg.max():.2f}]")
print(f"Envelope mean: {envelope.mean():.2f}, std: {envelope.std():.2f}")

Feature Extraction (MAV, RMS, ZC, WL)

Compute time-domain features used for gesture classification:

Code
import numpy as np
import matplotlib.pyplot as plt

def extract_emg_features(window):
    """
    Extract time-domain features from EMG window

    Returns: dict with MAV, RMS, ZC, WL, Variance
    """
    # Mean Absolute Value
    mav = np.mean(np.abs(window))

    # Root Mean Square
    rms = np.sqrt(np.mean(window ** 2))

    # Zero Crossings (with threshold to avoid noise)
    threshold = 0.01
    zc = 0
    for i in range(len(window) - 1):
        if abs(window[i] - window[i+1]) > threshold:
            if (window[i] > 0 and window[i+1] < 0) or (window[i] < 0 and window[i+1] > 0):
                zc += 1

    # Waveform Length
    wl = np.sum(np.abs(np.diff(window)))

    # Variance
    variance = np.var(window)

    return {
        'MAV': mav,
        'RMS': rms,
        'ZC': zc,
        'WL': wl,
        'Variance': variance
    }

# Generate EMG signals for three gestures
np.random.seed(42)
fs = 500
duration = 3
t = np.linspace(0, duration, fs * duration)

# Gesture 1: Rest (low amplitude, low frequency)
rest = np.random.normal(0, 0.2, len(t))
for freq in [30, 50]:
    rest += 0.1 * np.sin(2 * np.pi * freq * t)

# Gesture 2: Light grip (medium amplitude)
light = np.random.normal(0, 0.3, len(t))
for freq in [50, 80, 100]:
    light += 0.5 * np.sin(2 * np.pi * freq * t)

# Gesture 3: Strong grip (high amplitude, high frequency)
strong = np.random.normal(0, 0.4, len(t))
for freq in [80, 120, 150, 180]:
    strong += 1.0 * np.sin(2 * np.pi * freq * t)

# Extract features using sliding windows
window_size = 100  # 200ms at 500Hz
stride = 25  # 50ms stride (75% overlap)

def sliding_window_features(signal_data, window_size, stride):
    """Extract features from sliding windows"""
    features = []
    for i in range(0, len(signal_data) - window_size, stride):
        window = signal_data[i:i+window_size]
        features.append(extract_emg_features(window))
    return features

# Extract features for all gestures
rest_features = sliding_window_features(rest, window_size, stride)
light_features = sliding_window_features(light, window_size, stride)
strong_features = sliding_window_features(strong, window_size, stride)

# Visualize feature distributions
fig, axes = plt.subplots(2, 3, figsize=(15, 8))
features_list = ['MAV', 'RMS', 'ZC', 'WL', 'Variance']

for idx, feature_name in enumerate(features_list):
    row = idx // 3
    col = idx % 3
    ax = axes[row, col]

    rest_vals = [f[feature_name] for f in rest_features]
    light_vals = [f[feature_name] for f in light_features]
    strong_vals = [f[feature_name] for f in strong_features]

    ax.hist(rest_vals, alpha=0.5, label='Rest', bins=20, color='blue')
    ax.hist(light_vals, alpha=0.5, label='Light Grip', bins=20, color='orange')
    ax.hist(strong_vals, alpha=0.5, label='Strong Grip', bins=20, color='red')

    ax.set_title(f'{feature_name} Distribution', fontweight='bold')
    ax.set_xlabel('Feature Value')
    ax.set_ylabel('Frequency')
    ax.legend()
    ax.grid(True, alpha=0.3)

# Remove extra subplot
axes[1, 2].remove()

plt.tight_layout()
plt.show()

# Print summary statistics
print("Feature Statistics by Gesture:\n")
for gesture_name, features in [('Rest', rest_features),
                                ('Light Grip', light_features),
                                ('Strong Grip', strong_features)]:
    print(f"\n{gesture_name}:")
    for feature_name in features_list:
        vals = [f[feature_name] for f in features]
        print(f"  {feature_name:12s}: mean={np.mean(vals):8.2f}, std={np.std(vals):7.2f}")

Feature Statistics by Gesture:


Rest:
  MAV         : mean=    0.18, std=   0.01
  RMS         : mean=    0.22, std=   0.01
  ZC          : mean=   41.96, std=   3.75
  WL          : mean=   22.72, std=   1.64
  Variance    : mean=    0.05, std=   0.01

Light Grip:
  MAV         : mean=    0.56, std=   0.03
  RMS         : mean=    0.68, std=   0.03
  ZC          : mean=   34.54, std=   2.73
  WL          : mean=   57.38, std=   2.94
  Variance    : mean=    0.47, std=   0.04

Strong Grip:
  MAV         : mean=    1.17, std=   0.08
  RMS         : mean=    1.47, std=   0.05
  ZC          : mean=   47.50, std=   2.41
  WL          : mean=  164.36, std=   5.53
  Variance    : mean=    2.15, std=   0.14

Threshold Classification with Hysteresis

Implement gesture classification with hysteresis to prevent flickering:

Code
import numpy as np
import matplotlib.pyplot as plt

class HysteresisClassifier:
    """
    Multi-state threshold classifier with hysteresis
    """
    def __init__(self, thresholds, hysteresis=0.1, state_names=None):
        """
        Parameters:
        - thresholds: list of threshold values (N-1 for N states)
        - hysteresis: margin around thresholds
        - state_names: optional names for states
        """
        self.thresholds = np.array(thresholds)
        self.hysteresis = hysteresis
        self.num_states = len(thresholds) + 1
        self.current_state = 0
        self.state_names = state_names or [f"State {i}" for i in range(self.num_states)]
        self.history = []

    def classify(self, value):
        """Classify value with hysteresis"""
        # Check for state transitions
        if self.current_state == 0:
            # In lowest state, can only go up
            if value > self.thresholds[0] + self.hysteresis:
                self.current_state = 1
        elif self.current_state == self.num_states - 1:
            # In highest state, can only go down
            if value < self.thresholds[-1] - self.hysteresis:
                self.current_state = self.num_states - 2
        else:
            # In middle state, can go up or down
            if value > self.thresholds[self.current_state] + self.hysteresis:
                self.current_state += 1
            elif value < self.thresholds[self.current_state - 1] - self.hysteresis:
                self.current_state -= 1

        self.history.append(self.current_state)
        return self.current_state

    def get_state_name(self):
        return self.state_names[self.current_state]

# Simulate EMG feature values with transitions
np.random.seed(42)
time = np.linspace(0, 20, 1000)
feature_values = np.zeros_like(time)

# Create realistic feature trajectory with transitions
for i, t in enumerate(time):
    if t < 3:
        feature_values[i] = np.random.normal(0.2, 0.05)  # Rest
    elif t < 5:
        feature_values[i] = np.random.normal(0.5, 0.08)  # Transition to light
    elif t < 8:
        feature_values[i] = np.random.normal(0.5, 0.08)  # Light grip
    elif t < 10:
        feature_values[i] = np.random.normal(0.8, 0.1)   # Transition to strong
    elif t < 14:
        feature_values[i] = np.random.normal(0.85, 0.1)  # Strong grip
    elif t < 16:
        feature_values[i] = np.random.normal(0.5, 0.08)  # Back to light
    else:
        feature_values[i] = np.random.normal(0.2, 0.05)  # Back to rest

# Compare no hysteresis vs with hysteresis
thresholds = [0.35, 0.7]  # Two thresholds for three states
state_names = ['Rest', 'Light Grip', 'Strong Grip']

# Classifier without hysteresis
classifier_none = HysteresisClassifier(thresholds, hysteresis=0.0, state_names=state_names)
states_none = [classifier_none.classify(val) for val in feature_values]

# Classifier with hysteresis
classifier_hyst = HysteresisClassifier(thresholds, hysteresis=0.08, state_names=state_names)
states_hyst = [classifier_hyst.classify(val) for val in feature_values]

# Visualize
fig, axes = plt.subplots(3, 1, figsize=(14, 10))

# Feature values with thresholds
axes[0].plot(time, feature_values, linewidth=1, label='MAV Feature', color='black')
axes[0].axhline(y=thresholds[0], color='orange', linestyle='--', label='Threshold 1', linewidth=2)
axes[0].axhline(y=thresholds[1], color='red', linestyle='--', label='Threshold 2', linewidth=2)
axes[0].fill_between(time, thresholds[0] - 0.08, thresholds[0] + 0.08,
                      alpha=0.2, color='orange', label='Hysteresis band')
axes[0].fill_between(time, thresholds[1] - 0.08, thresholds[1] + 0.08,
                      alpha=0.2, color='red')
axes[0].set_title('EMG Feature Values and Classification Thresholds', fontsize=12, fontweight='bold')
axes[0].set_ylabel('Feature Value (MAV)')
axes[0].legend(loc='upper right')
axes[0].grid(True, alpha=0.3)

# States without hysteresis
axes[1].plot(time, states_none, linewidth=2, drawstyle='steps-post', color='blue')
axes[1].set_title('Classification WITHOUT Hysteresis (Many Flickering State Changes)',
                  fontsize=12, fontweight='bold')
axes[1].set_ylabel('State')
axes[1].set_yticks([0, 1, 2])
axes[1].set_yticklabels(state_names)
axes[1].grid(True, alpha=0.3)

# Count state changes
changes_none = np.sum(np.diff(states_none) != 0)
axes[1].text(0.02, 0.95, f'State changes: {changes_none}',
             transform=axes[1].transAxes, fontsize=10,
             verticalalignment='top', bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))

# States with hysteresis
axes[2].plot(time, states_hyst, linewidth=2, drawstyle='steps-post', color='green')
axes[2].set_title('Classification WITH Hysteresis (Stable State Transitions)',
                  fontsize=12, fontweight='bold')
axes[2].set_xlabel('Time (seconds)')
axes[2].set_ylabel('State')
axes[2].set_yticks([0, 1, 2])
axes[2].set_yticklabels(state_names)
axes[2].grid(True, alpha=0.3)

# Count state changes
changes_hyst = np.sum(np.diff(states_hyst) != 0)
axes[2].text(0.02, 0.95, f'State changes: {changes_hyst}',
             transform=axes[2].transAxes, fontsize=10,
             verticalalignment='top', bbox=dict(boxstyle='round', facecolor='lightgreen', alpha=0.5))

plt.tight_layout()
plt.show()

print(f"\nHysteresis Effectiveness:")
print(f"  Without hysteresis: {changes_none} state changes")
print(f"  With hysteresis:    {changes_hyst} state changes")
print(f"  Reduction:          {(1 - changes_hyst/changes_none)*100:.1f}%")


Hysteresis Effectiveness:
  Without hysteresis: 80 state changes
  With hysteresis:    20 state changes
  Reduction:          75.0%

Practical Exercises

These hands-on exercises progressively build a complete EMG gesture control system. Each exercise includes testing criteria and expected outcomes.

Objective: Verify EMG sensor setup and signal quality before implementing complex processing.

Task:

  1. Upload the basic sketch below to read raw EMG
  2. Open the Serial Plotter and observe the signal during rest vs. contraction
  3. Calculate the SNR (signal-to-noise ratio) manually

const int EMG_PIN = A0;

void setup() {
  Serial.begin(115200);
  pinMode(EMG_PIN, INPUT);
}

void loop() {
  int raw = analogRead(EMG_PIN);
  Serial.println(raw);
  delay(2); // 500 Hz
}

Success Criteria:

  • Rest signal: 50-200 ADC units (low noise floor)
  • Contraction signal: 400-900 ADC units (clear activation)
  • SNR > 6 dB (signal at least 2× noise)
  • No 50/60 Hz oscillation visible in the plot

Troubleshooting:

  • If the signal is flat or always ~512: check sensor power and connections
  • If there is massive noise: check electrode contact, clean skin with alcohol
  • If 60 Hz oscillation appears: move away from power supplies, check grounding

Deliverable: Screenshot of Serial Plotter showing clean rest/contraction transitions.
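The SNR criterion can also be checked numerically from logged ADC samples. A minimal sketch, using synthetic arrays in place of real recordings (the helper name `emg_snr_db` is ours; a 2× amplitude ratio corresponds to roughly 6 dB):

```python
import numpy as np

def emg_snr_db(contraction, rest):
    """SNR in dB: RMS of the contraction recording over RMS of the rest
    recording (the noise floor)."""
    rms_signal = np.sqrt(np.mean(np.square(contraction, dtype=float)))
    rms_noise = np.sqrt(np.mean(np.square(rest, dtype=float)))
    return 20.0 * np.log10(rms_signal / rms_noise)

# Synthetic stand-ins: contraction amplitude ~2x the rest noise floor
rng = np.random.default_rng(0)
rest = rng.normal(0, 50, 1000)          # ADC units around the noise floor
contraction = rng.normal(0, 100, 1000)  # ~2x amplitude -> roughly 6 dB

print(f"SNR: {emg_snr_db(contraction, rest):.1f} dB")
```

In practice, record ~2 seconds of rest and ~2 seconds of contraction from the Serial output and feed those arrays in instead of the synthetic data.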

Objective: Implement and validate user-specific calibration.

Task:

  1. Use the calibration code from the complete sketch above
  2. Test with 3 different users (or 3 different muscle groups)
  3. Record rest baseline, MVC peak, and calculated thresholds
  4. Verify the thresholds work for each user

Data Collection Template:

User 1:
  Rest baseline: ___ ADC units
  MVC peak: ___ ADC units
  Dynamic range: ___ ADC units
  Rest threshold (20%): ___ ADC units
  Active threshold (60%): ___ ADC units

User 2: [repeat]
User 3: [repeat]

Success Criteria:

  • Dynamic range varies >3× between users
  • Auto-calculated thresholds correctly separate rest/light/strong for each user
  • System responds to all three users without manual threshold adjustment

Deliverable: Calibration data table and analysis of inter-user variability.

Objective: Determine optimal hysteresis value to prevent flickering.

Task:

  1. Modify the classifier to accept hysteresis as a parameter
  2. Test with values: 0, 10, 30, 50, 100 ADC units
  3. For each value, hold the muscle at a threshold and count state changes per 10 seconds
  4. Find the minimum hysteresis that keeps changes below 2 per 10 seconds

Test Protocol:

// Add to classifier
void testHysteresis(int hystValue) {
  hysteresis = hystValue;
  int changes = 0;
  unsigned long start = millis();
  int lastState = currentState;

  while (millis() - start < 10000) {
    int val = analogRead(EMG_PIN);
    int state = classify(val);
    if (state != lastState) {
      changes++;
      lastState = state;
    }
    delay(10);
  }

  Serial.print("Hysteresis: "); Serial.print(hystValue);
  Serial.print(", Changes: "); Serial.println(changes);
}

Success Criteria:

  • Hysteresis = 0: many state changes (10+)
  • Optimal hysteresis: <2 changes per 10 seconds
  • Too much hysteresis: delayed response to intentional gestures

Deliverable: Graph of state changes vs hysteresis value, with recommended setting.

Objective: Measure execution time of each processing stage.

Task:

  1. Add timing measurements to each function
  2. Process 1000 samples and calculate the average time per operation
  3. Determine whether the system can support 500 Hz real-time processing

Timing Code:

void benchmarkProcessing() {
  const int ITERATIONS = 1000;

  // Benchmark moving average
  unsigned long maStart = micros();
  for (int i = 0; i < ITERATIONS; i++) {
    applyMovingAverage(analogRead(A0));
  }
  unsigned long maTime = (micros() - maStart) / ITERATIONS;

  // Benchmark feature extraction
  int window[100];
  for (int i = 0; i < 100; i++) window[i] = random(1024);

  unsigned long feStart = micros();
  for (int i = 0; i < ITERATIONS; i++) {
    FeatureExtractor::extractAll(window, 100);
  }
  unsigned long feTime = (micros() - feStart) / ITERATIONS;

  Serial.println("=== Performance Benchmark ===");
  Serial.print("Moving Average: "); Serial.print(maTime); Serial.println(" us");
  Serial.print("Feature Extraction: "); Serial.print(feTime); Serial.println(" us");
  Serial.print("Total per sample: "); Serial.print(maTime + feTime/100); Serial.println(" us");
  Serial.print("Max sample rate: "); Serial.print(1000000 / (maTime + feTime/100)); Serial.println(" Hz");
}

Success Criteria:

  • Moving average: <50 μs per sample
  • Feature extraction: <2000 μs per 100-sample window
  • Total processing: supports >500 Hz sampling
  • No buffer overruns during continuous operation

Deliverable: Performance report with timing breakdown and max achievable sample rate.

PDF Cross-References

  • Section 2: EMG signal acquisition and electrode placement (pages 3-6)
  • Section 3: Signal processing pipeline detailed implementation (pages 7-11)
  • Section 4: Feature extraction formulas and code (pages 12-15)
  • Section 5: Classification methods and calibration (pages 16-19)
  • Section 6: Actuator control and servo integration (pages 20-22)
  • Section 7: Complete EMG control system example (pages 23-26)
  • Troubleshooting: Common EMG issues and solutions (pages 27-29)

Self-Assessment Checkpoints

Test your understanding before proceeding to the exercises.

Answer: MAV = (1/N) × Σ|x_i| = (1/5) × (|20| + |-15| + |30| + |-10| + |25|) = (1/5) × (20 + 15 + 30 + 10 + 25) = (1/5) × 100 = 20. MAV measures mean muscle activation intensity over the window. Higher MAV indicates stronger muscle contraction. Note: We take absolute values before averaging because EMG is bipolar (positive and negative voltages), and we care about activation magnitude, not direction.

Answer: EMG signal amplitude varies dramatically due to: (1) Muscle mass and strength differences, (2) Skin impedance (dry skin, hair, sweat), (3) Electrode placement consistency (1cm shift can halve amplitude), (4) Fat layer thickness between muscle and skin. A threshold that works for one person completely fails for another. Solution: Maximum Voluntary Contraction (MVC) calibration. At setup, ask the user to: (1) Relax completely (record rest baseline), (2) Contract maximally (record MVC peak), (3) Set thresholds at 20% and 80% of their personal range. This normalizes signals across individuals. Production systems must implement per-user calibration.

Answer: Window duration = samples / sample_rate = 100 samples / 500 Hz = 0.2 seconds = 200ms. Output rate = sample_rate / stride = 500 Hz / 50 samples = 10 Hz (one feature vector every 100ms). The 50% overlap (stride = half of window size) provides balance: enough temporal continuity to catch transitions but not so much overlap that we waste computation. This gives 10 Hz gesture updates, sufficient for responsive prosthetic control (<100ms latency).
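The window arithmetic in this answer generalizes to a two-line helper; a minimal sketch (the function name `window_timing` is ours):

```python
def window_timing(fs_hz, window_samples, stride_samples):
    """Return (window duration in seconds, feature output rate in Hz)
    for a sliding window over a signal sampled at fs_hz."""
    duration_s = window_samples / fs_hz
    output_rate_hz = fs_hz / stride_samples
    return duration_s, output_rate_hz

# The values from the answer: 100-sample window, 50-sample stride, 500 Hz
duration_s, rate_hz = window_timing(500, 100, 50)
print(f"window: {duration_s * 1000:.0f} ms, output rate: {rate_hz:.0f} Hz")
```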

Answer: Electrode gel drying increases contact impedance by 10-100×, dramatically reducing signal amplitude and introducing noise. Fresh electrodes have ~5kΩ impedance; dried electrodes can exceed 100kΩ. Solutions: (1) Use wet gel electrodes and replace every 4-8 hours for continuous monitoring, (2) Re-hydrate dry electrodes with saline spray or electrode gel, (3) Implement adaptive thresholds that track signal statistics over time and adjust, (4) Monitor impedance using the sensor’s built-in impedance check (if available) and alert user to replace electrodes, (5) Clean skin thoroughly with alcohol before application to remove oils and improve initial contact. Professional systems use reusable electrodes with conductive gel for extended wear.

Answer: Windows <100ms (e.g., 50 samples at 500 Hz) contain too few samples for stable statistics. With only 50 values, mean and standard deviation fluctuate wildly due to noise, making features unreliable for classification. The Central Limit Theorem requires sufficient samples for statistics to converge—typically 20-50 minimum. Additionally, EMG muscle activation events span 100-300ms physiologically, so shorter windows miss the complete activation pattern. Conversely, windows >300ms add latency making control feel sluggish. The sweet spot: 100-200ms windows provide stable features with acceptable responsiveness for real-time prosthetic or gesture control.

Interactive Notebook

The notebook below contains runnable code for all Level 1 activities.

LAB 10: Biomedical Signal Processing - EMG-based Control (Optional)



Property           Value
Book Chapter       Chapter 10
Execution Levels   Level 1 (Notebook) | Level 3 (Device with EMG sensor)
Estimated Time     45 minutes
Prerequisites      LAB 2, LAB 8

Learning Objectives

  1. Understand EMG signals and their characteristics
  2. Process biomedical signals with filtering and feature extraction
  3. Build gesture classifiers for muscle activity
  4. Apply signal processing techniques for edge devices

📚 Theory: EMG Physiology and Signal Generation

Before processing EMG signals, we must understand how muscles generate electrical activity.

Motor Unit Anatomy

A motor unit is the fundamental unit of muscle contraction:

Spinal Cord                      Muscle Fibers
     │
     │ Motor Neuron
     │ (Axon)
     │
     └───┬───┬───┬───┐
         │   │   │   │    Neuromuscular
         ▼   ▼   ▼   ▼    Junctions
        ═══ ═══ ═══ ═══   Muscle Fibers
        ═══ ═══ ═══ ═══   (same motor unit)
        
One motor neuron → Multiple muscle fibers (10-2000)

Motor Unit Action Potential (MUAP)

When a motor neuron fires, all its muscle fibers contract simultaneously:

Single MUAP Waveform:
        │
        │     ╱╲
    mV  │    ╱  ╲
        │   ╱    ╲
     0  │──╱──────╲──╱──
        │          ╲╱
        │
        └──────────────── time (ms)
            1-10 ms

Amplitude: 100 μV - 2 mV (surface EMG)
Duration: 2-10 ms

EMG Signal as Superposition

The recorded EMG is the sum of many MUAPs from multiple motor units:

\(EMG(t) = \sum_{i=1}^{N} \sum_{j=1}^{M_i} MUAP_{i,j}(t - \tau_{i,j})\)

Where:

  • \(N\) = number of active motor units
  • \(M_i\) = number of firings from motor unit \(i\)
  • \(\tau_{i,j}\) = firing time
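The superposition model above can be illustrated with a small simulation. The biphasic MUAP template (first derivative of a Gaussian) and all firing rates and amplitudes below are illustrative assumptions, not parameters from the chapter:

```python
import numpy as np

fs = 1000                       # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)   # 1 second of signal

def muap(t, t0, amp=1.0, width=0.003):
    """Biphasic MUAP template: derivative of a Gaussian centered at t0
    (a hypothetical shape, ~3 ms wide)."""
    tau = t - t0
    return -amp * tau / width * np.exp(-0.5 * (tau / width) ** 2)

rng = np.random.default_rng(1)
emg = np.zeros_like(t)
n_units = 5                                   # N active motor units
for i in range(n_units):
    rate = rng.uniform(8, 20)                 # firing rate, Hz
    amp = rng.uniform(0.5, 2.0)               # unit amplitude
    # Firing times tau_{i,j}: cumulative sum of exponential inter-spike intervals
    firings = np.cumsum(rng.exponential(1 / rate, size=30))
    for t0 in firings[firings < t[-1]]:
        emg += muap(t, t0, amp=amp)           # superposition of MUAPs

print(f"samples: {emg.size}, peak |EMG|: {np.max(np.abs(emg)):.2f}")
```

Plotting `emg` shows the characteristic interference pattern: individual MUAPs are visible at low activation, and they merge into noise-like activity as more units fire.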

Force-EMG Relationship

Muscle force increases through two mechanisms:

  1. Recruitment: Activating more motor units
  2. Rate coding: Increasing firing frequency
EMG Amplitude vs. Force:

EMG │           ●●●●
Amp │        ●●●
    │      ●●
    │    ●●
    │  ●●
    │●●
    └────────────────
          Force (%MVC)
          
Generally: EMG ∝ √Force (non-linear)

Section 1: Understanding EMG Signals

EMG (Electromyography) measures electrical activity in muscles:

  • Amplitude: 0-10 mV (surface EMG)
  • Frequency range: 20-500 Hz (most useful: 50-150 Hz)
  • Sampling rate: typically 1000 Hz

Common Noise Sources

  • 50/60 Hz powerline interference
  • Motion artifacts (low frequency)
  • Electrode noise

📚 Theory: EMG Signal Characteristics

Frequency Content

Surface EMG has a characteristic frequency distribution:

Power Spectral Density of EMG:

Power │
      │         ╱╲
      │        ╱  ╲
      │       ╱    ╲
      │      ╱      ╲
      │     ╱        ╲
      │    ╱          ╲____
      │___╱                 ╲____
      └──────────────────────────────
         20   50   100  150  200  500  Hz
              │     │     │
              │     │     └── Upper useful limit
              │     └──────── Peak power region
              └────────────── Lower useful limit

Noise Sources and Frequencies

Noise Type          Frequency   Amplitude   Mitigation
Powerline           50/60 Hz    1-10 mV     Notch filter
Motion artifact     0-20 Hz     0-10 mV     High-pass filter
Electrode contact   DC-10 Hz    Variable    High-pass filter
EMI                 > 500 Hz    Low         Low-pass filter

Amplitude Characteristics

Muscle Type        Typical Amplitude
Small (finger)     50-300 μV
Medium (forearm)   200-1000 μV
Large (bicep)      500-5000 μV

Sampling Requirements

By Nyquist theorem:

\(f_s \geq 2 \cdot f_{max} = 2 \cdot 500\text{ Hz} = 1000\text{ Hz}\)

Practical choice: 1000-2000 Hz for surface EMG.

ADC Resolution

For MCU-based acquisition:

\(\text{Resolution} = \frac{V_{range}}{2^{bits}}\)

ADC Bits   Arduino Range   Resolution
10-bit     0-5 V           4.9 mV
12-bit     0-3.3 V         0.8 mV

Important: EMG signals (μV-mV) require amplification before ADC!
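Plugging numbers into the resolution formula shows why amplification is mandatory. The gain of 1000 below is a typical EMG front-end figure, assumed for illustration:

```python
def adc_resolution_mv(v_range_v, bits):
    """LSB size in millivolts: V_range / 2^bits."""
    return v_range_v * 1000.0 / (2 ** bits)

lsb_10bit = adc_resolution_mv(5.0, 10)   # Arduino Uno: ~4.88 mV per LSB
lsb_12bit = adc_resolution_mv(3.3, 12)   # e.g., Nano 33 BLE: ~0.81 mV per LSB

signal_mv = 1.0    # a typical surface EMG peak (order of magnitude)
gain = 1000        # assumed front-end amplifier gain

print(f"10-bit LSB: {lsb_10bit:.2f} mV -> raw 1 mV EMG spans "
      f"{signal_mv / lsb_10bit:.2f} LSB (invisible)")
print(f"after gain {gain}: {signal_mv * gain / lsb_10bit:.0f} LSB (usable)")
```

A raw 1 mV signal spans a fraction of one LSB on a 10-bit, 0-5 V ADC, which is why every surface EMG sensor includes an instrumentation amplifier before the ADC.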


Section 2: Signal Filtering

📚 Theory: Digital Filter Design for EMG

Butterworth Filter Design

The Butterworth filter has maximally flat passband response:

\(|H(j\omega)|^2 = \frac{1}{1 + \left(\frac{\omega}{\omega_c}\right)^{2n}}\)

Where:

  • \(\omega_c\) = cutoff frequency
  • \(n\) = filter order

Butterworth Frequency Response:

Gain │────────────╲
(dB) │            │╲
  0  │────────────│─╲───────────
     │            │  ╲ n=2
 -3  │............│...●.........
     │            │    ╲
-20  │            │     ╲ n=4
     │            │      ╲
     └────────────┴───────────────
                  fc    Frequency
     
-3 dB point is at cutoff frequency
Higher order = steeper rolloff

Bandpass Filter for EMG

We want to keep frequencies between 20-450 Hz:

Bandpass Filter Response:

Gain │
     │         ┌──────────┐
  0  │─────────│          │──────────
     │        ╱│          │╲
     │       ╱ │          │ ╲
     │      ╱  │          │  ╲
     │     ╱   │          │   ╲
     │    ╱    │          │    ╲
     └─────────┴──────────┴────────────
             20 Hz       450 Hz
            (Motion     (EMI
             artifacts)  rejection)

Notch Filter for Powerline

The notch (band-reject) filter removes a narrow frequency band:

\(H(s) = \frac{s^2 + \omega_0^2}{s^2 + \frac{\omega_0}{Q}s + \omega_0^2}\)

Where:

  • \(\omega_0\) = center frequency (50 or 60 Hz)
  • \(Q\) = quality factor (higher = narrower notch)

Notch Filter Response:

Gain │──────╱╲──────
     │     ╱  ╲
  0  │────╱    ╲─────
     │   │      │
     │   │      │
     │   │  ●   │ ← Target: -40 to -60 dB
     │   │      │
     └───┴──────┴─────
        50 Hz

Filter Implementation Considerations

Filter Type       Pros                   Cons                MCU Suitability
FIR               Linear phase, stable   High order needed   Medium
IIR Butterworth   Low order, smooth      Phase distortion    Good
IIR Chebyshev     Sharp cutoff           Passband ripple     Good

For real-time EMG on MCU: Use low-order (2-4) IIR Butterworth filters.
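To see why a low-order IIR cascade suits sample-by-sample operation, the MCU loop can be prototyped in Python using scipy's second-order sections, carrying the filter state between samples. This is a sketch of the streaming pattern, not MCU code:

```python
import numpy as np
from scipy import signal

fs = 1000
# 2nd-order Butterworth bandpass, 20-450 Hz, as cascaded biquad sections
sos = signal.butter(2, [20, 450], btype='band', fs=fs, output='sos')

# Stream samples one at a time, carrying filter state zi between calls,
# exactly as an MCU would run a biquad cascade once per ADC sample.
zi = signal.sosfilt_zi(sos) * 0.0     # start from zero initial conditions
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2000)            # synthetic input standing in for ADC data

streamed = np.empty_like(x)
for i, sample in enumerate(x):
    y, zi = signal.sosfilt(sos, [sample], zi=zi)
    streamed[i] = y[0]

# The per-sample recursion matches filtering the whole array at once
batch = signal.sosfilt(sos, x)
print(np.allclose(streamed, batch))
```

Each second-order section costs only five multiply-accumulates per sample, which is why a 2nd- or 4th-order Butterworth fits comfortably in a 500 Hz sampling loop even on small MCUs.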


💡 Alternative Approaches: EMG Filtering

Option A: Butterworth Bandpass + Notch (current approach)

  • Pros: smooth frequency response, removes DC and powerline noise
  • Cons: requires scipy; phase distortion if not using filtfilt
  • Order: 4th-order bandpass + notch at 50/60 Hz

Option B: FIR Bandpass Filter

  • Pros: linear phase (no distortion), always stable
  • Cons: higher computational cost, requires more coefficients
  • Code: b = signal.firwin(101, [20, 450], fs=1000, pass_zero=False)

Option C: Wavelet Denoising

  • Pros: preserves sharp features (bursts), adaptive thresholding
  • Cons: complex, not real-time friendly for MCU
  • Use case: offline analysis, research applications
  • Code: pywt.wavedec() + soft thresholding + pywt.waverec()

Option D: Adaptive Filter (LMS)

  • Pros: tracks and removes time-varying noise
  • Cons: requires a reference signal, more complex
  • Use case: removing ECG artifacts from EMG

When to use each:

  • Option A (current) for real-time MCU implementation
  • Option B for offline analysis where phase is critical
  • Option C for research-grade denoising
  • Option D when the noise is structured (ECG, motion artifacts)

🔬 Try It Yourself: Filter Parameters

Experiment with filter parameters and observe effects on signal quality:

Parameter   Current   Try These     Expected Effect
lowcut      20 Hz     10, 50 Hz     Lower keeps more DC drift; higher removes slow movements
highcut     450 Hz    250, 480 Hz   Lower = more smoothing; higher keeps more high-freq noise (must stay below fs/2)
order       4         2, 6, 8       Higher = sharper cutoff but more computation
Q (notch)   30        10, 60        Higher = narrower notch (removes less around 50 Hz)

Experiment 1: Compare filter orders

for order in [2, 4, 6]:
    b, a = signal.butter(order, [20/500, 450/500], btype='band')
    filtered = signal.filtfilt(b, a, emg)
    plt.plot(filtered, label=f'Order {order}')

What to observe: Higher order = sharper cutoff, but potential instability

Experiment 2: Notch filter quality factor

for Q in [10, 30, 60]:
    b, a = signal.iirnotch(50/500, Q)
    filtered = signal.filtfilt(b, a, emg)
    # Compute PSD and check 50 Hz suppression

What to observe: Higher Q = narrower notch (better preserves 48 Hz and 52 Hz)

Section 3: Feature Extraction

📚 Theory: EMG Feature Extraction

Features compress raw EMG into meaningful values for classification.

Time Domain Features

1. Mean Absolute Value (MAV)

\(MAV = \frac{1}{N} \sum_{i=1}^{N} |x_i|\)

Simple amplitude measure, commonly used for gesture detection.

2. Root Mean Square (RMS)

\(RMS = \sqrt{\frac{1}{N} \sum_{i=1}^{N} x_i^2}\)

Represents signal power; correlates with muscle force.

3. Variance (VAR)

\(VAR = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2\)

Measures signal variability.

4. Waveform Length (WL)

\(WL = \sum_{i=1}^{N-1} |x_{i+1} - x_i|\)

Cumulative length of waveform; indicates complexity.

5. Zero Crossing Rate (ZC)

\(ZC = \sum_{i=1}^{N-1} \mathbf{1}[x_i \cdot x_{i+1} < 0]\)

Number of times signal crosses zero; crude frequency estimate.
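The five formulas above translate directly into NumPy. A minimal sketch (`td_features` is an illustrative name; `x` is one analysis window):

```python
import numpy as np

def td_features(x):
    """Time-domain EMG features for one analysis window `x`."""
    mav = np.mean(np.abs(x))            # Mean Absolute Value
    rms = np.sqrt(np.mean(x ** 2))      # Root Mean Square
    var = np.var(x, ddof=1)             # Variance (N-1 denominator, as above)
    wl = np.sum(np.abs(np.diff(x)))     # Waveform Length
    zc = int(np.sum(x[:-1] * x[1:] < 0))  # Zero Crossings (sign changes)
    return {'MAV': mav, 'RMS': rms, 'VAR': var, 'WL': wl, 'ZC': zc}
```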

Frequency Domain Features

6. Mean Frequency (MNF)

\(MNF = \frac{\sum_{j=1}^{M} f_j \cdot P_j}{\sum_{j=1}^{M} P_j}\)

Center of mass of the power spectrum.

7. Median Frequency (MDF)

\(\sum_{j=1}^{k} P_j = \sum_{j=k}^{M} P_j = \frac{1}{2} \sum_{j=1}^{M} P_j, \quad \text{where } MDF = f_k\)

Frequency that divides spectrum into equal power halves.
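Both frequency-domain features can be estimated from a Welch power spectrum. A small sketch, assuming fs = 1000 Hz (`freq_features` is an illustrative name):

```python
import numpy as np
from scipy import signal

def freq_features(x, fs=1000):
    """Mean frequency (spectral centroid) and median frequency from Welch PSD."""
    f, pxx = signal.welch(x, fs=fs, nperseg=min(256, len(x)))
    mnf = np.sum(f * pxx) / np.sum(pxx)          # center of mass of spectrum
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]   # frequency splitting power in half
    return mnf, mdf
```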

Feature Behavior During Fatigue

EMG Feature Changes During Muscle Fatigue:

Feature │                        
        │  MAV ╱╱╱╱╱              ← Increases (compensation)
        │     ╱                   
        │────────────────────────
        │
        │  MNF ╲╲╲╲╲              ← Decreases (fiber fatigue)
        │       ╲                 
        └──────────────────────────
          Start            Fatigue
                  Time →

Feature Selection for Edge Devices

| Feature | Computation | Memory | Usefulness |
|---------|-------------|--------|------------|
| MAV | O(N) | O(1) | High |
| RMS | O(N) | O(1) | High |
| VAR | O(N) | O(1) | Medium |
| WL | O(N) | O(1) | Medium |
| ZC | O(N) | O(1) | Medium |
| MNF/MDF | O(N log N) | O(N) | High (but FFT needed) |

For MCU: Prefer time-domain features (no FFT required).


Section 4: Simple Gesture Classifier

📚 Theory: EMG Gesture Classification

Pattern Recognition Pipeline

Raw EMG → Filtering → Segmentation → Feature     → Classification → Gesture
                         ↓          Extraction        ↓
                    [─────────]    [MAV, RMS,    [Threshold /
                     Window        WL, ZC]       ML Model]
                   (100-250ms)
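The front half of this pipeline (filtering through envelope extraction) can be sketched end-to-end in a few lines of SciPy. The parameter values (4th-order Butterworth, 150 ms moving average) follow the theory summary but are illustrative, as is the name `emg_envelope`:

```python
import numpy as np
from scipy import signal

def emg_envelope(emg, fs=1000, lowcut=20, highcut=450, win_ms=150):
    """Bandpass -> rectify -> moving-average envelope (host-side sketch)."""
    # 1) Zero-phase bandpass filter
    b, a = signal.butter(4, [lowcut / (fs / 2), highcut / (fs / 2)], btype='band')
    filtered = signal.filtfilt(b, a, emg)
    # 2) Full-wave rectification
    rectified = np.abs(filtered)
    # 3) Envelope via moving-average smoothing
    win = int(fs * win_ms / 1000)
    envelope = np.convolve(rectified, np.ones(win) / win, mode='same')
    return filtered, rectified, envelope
```

Segmentation and feature extraction then operate on windows of the filtered signal, as described next.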

Window Size Selection

Trade-off between response time and accuracy:

| Window Size | Response Time | Accuracy | Use Case |
|-------------|---------------|----------|----------|
| 50-100 ms | Fast | Lower | Real-time control |
| 150-250 ms | Medium | Good | Gesture recognition |
| 300-500 ms | Slow | High | Clinical analysis |

Classification Approaches

1. Threshold-Based (Simplest)

if MAV < threshold_rest:
    gesture = REST
elif MAV < threshold_light:
    gesture = LIGHT_GRIP
else:
    gesture = STRONG_GRIP

Pro: Simple, fast, low memory. Con: Not robust to variations.
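A common fix for the robustness problem is hysteresis: the class only changes when MAV clears a boundary by some margin, which suppresses flicker when the signal hovers near a threshold. A minimal sketch (thresholds and the `margin` value are illustrative):

```python
def classify_with_hysteresis(mav, prev_gesture,
                             t_light=200, t_strong=400, margin=20):
    """Threshold classifier with hysteresis: only switch class when MAV
    clears the relevant boundary by `margin`; otherwise keep the old class."""
    if prev_gesture == 'REST' and mav > t_light + margin:
        return 'LIGHT_GRIP' if mav < t_strong else 'STRONG_GRIP'
    if prev_gesture == 'LIGHT_GRIP':
        if mav < t_light - margin:
            return 'REST'
        if mav > t_strong + margin:
            return 'STRONG_GRIP'
    if prev_gesture == 'STRONG_GRIP' and mav < t_strong - margin:
        return 'LIGHT_GRIP' if mav > t_light else 'REST'
    return prev_gesture
```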

2. Machine Learning Models

| Model | MCU Suitability | Accuracy | Memory |
|-------|-----------------|----------|--------|
| Decision Tree | Excellent | Medium | Low |
| Random Forest | Good | High | Medium |
| SVM | Medium | High | Medium |
| Neural Network | Challenging | Highest | High |
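As a quick illustration of the first row of the table, a depth-limited decision tree can be trained with scikit-learn. The feature data here is synthetic (Gaussian clusters standing in for real [MAV, RMS] windows of the three gestures); only the sklearn API calls are real:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Fake [MAV, RMS] feature pairs for rest / light grip / strong grip
X = np.vstack([rng.normal(loc, 10, size=(100, 2)) for loc in (50, 200, 400)])
y = np.repeat([0, 1, 2], 100)

clf = DecisionTreeClassifier(max_depth=5)  # shallow tree fits an MCU flash budget
clf.fit(X, y)
print('training accuracy:', clf.score(X, y))
```

A trained tree of depth 5 reduces to a handful of nested `if` statements, which is why it ports so well to Arduino-class devices.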

Decision Boundary Visualization

RMS │
    │                    ●●● Strong Grip
    │                  ●●●●●
    │            ─────────────────
    │         ▲▲▲▲    Light Grip
    │       ▲▲▲▲▲▲▲
    │   ──────────────────────────
    │  ■■■■          Rest
    │ ■■■■■
    └────────────────────────────────
                                  MAV

Real-Time Processing on MCU

Timing Budget (100 Hz classification):

Total available: 10 ms per classification

├── ADC sampling:      0.1 ms
├── Filter (IIR):      0.2 ms
├── Feature extraction: 0.5 ms
├── Classification:    0.2 ms
└── Buffer available:  9.0 ms

💡 Alternative Approaches: Feature Extraction

Option A: Time-Domain Features (Current approach)
- Pros: Fast, no FFT needed, perfect for MCU
- Cons: May miss frequency-domain patterns
- Features: MAV, RMS, WL, ZC, VAR (5 features)

Option B: Frequency-Domain Features
- Pros: Captures fatigue indicators (median frequency shift)
- Cons: Requires FFT (expensive on MCU), more memory
- Features: MNF, MDF, peak frequency, spectral entropy

Option C: Time-Frequency Features (Wavelets)
- Pros: Best for non-stationary signals, localized in time and frequency
- Cons: Very computationally expensive, complex
- Use case: Research-grade analysis, offline processing

Option D: Autoregressive (AR) Model Coefficients
- Pros: Compact representation, captures temporal dynamics
- Cons: Requires model order selection, less interpretable
- Features: AR(4) coefficients using Yule-Walker

When to use each:
- Use Option A (time-domain) for embedded devices (Arduino, ESP32)
- Use Option B for desktop/Pi applications with muscle fatigue monitoring
- Use Option C for research papers analyzing complex gestures
- Use Option D for prosthetic control with temporal patterns
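Option D needs no external toolbox: the Yule-Walker equations can be solved directly from autocorrelation estimates. A sketch (`ar_coeffs` is an illustrative name; a production version would use the Levinson-Durbin recursion or a library such as statsmodels):

```python
import numpy as np

def ar_coeffs(x, order=4):
    """Estimate AR(order) coefficients by solving the Yule-Walker equations."""
    x = x - np.mean(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x)
                  for k in range(order + 1)])
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])
```

The returned vector gives the linear-prediction weights; for an AR(1)-like signal the first coefficient dominates and the rest stay near zero.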

🔬 Try It Yourself: Feature Selection

Compare classifier performance with different feature sets:

| Feature Set | Features | Accuracy | MCU Suitability |
|-------------|----------|----------|-----------------|
| Minimal | MAV, RMS | 70-80% | Excellent |
| Standard | MAV, RMS, WL, ZC | 85-90% | Good |
| Full TD | All 7 time-domain | 90-95% | Good |
| With Freq | TD + MNF, MDF | 92-97% | Medium |

Experiment: Feature importance ranking

# After training Random Forest
importances = clf.feature_importances_
for name, imp in sorted(zip(feature_names, importances), key=lambda x: -x[1]):
    print(f'{name}: {imp:.3f}')

What to observe: RMS and MAV typically most important for amplitude-based gestures

📚 Summary: EMG Processing Pipeline

Complete Pipeline

┌──────────────┐    ┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│   Sensor     │ → │   Filtering  │ → │   Features   │ → │  Classifier  │
│  (MyoWare)   │    │ BP + Notch   │    │  Extraction  │    │  (RF/Tree)   │
└──────────────┘    └──────────────┘    └──────────────┘    └──────────────┘
    Analog          Digital (1 kHz)      7 features         3 gestures
  0-5V range         0-450 Hz           per window

Key Formulas

| Feature | Formula | Use |
|---------|---------|-----|
| MAV | \(\frac{1}{N}\sum \lvert x_i \rvert\) | Amplitude |
| RMS | \(\sqrt{\frac{1}{N}\sum x_i^2}\) | Power/Force |
| WL | \(\sum \lvert x_{i+1}-x_i \rvert\) | Complexity |
| ZC | Count sign changes | Frequency |
| MNF | \(\frac{\sum f_j P_j}{\sum P_j}\) | Fatigue |

MCU Implementation Checklist

□ Hardware
  ├── □ EMG sensor (MyoWare, AD8232)
  ├── □ Amplification (gain ~1000x)
  └── □ ADC at 1 kHz sampling

□ Signal Processing
  ├── □ Bandpass filter (20-450 Hz)
  ├── □ Notch filter (50/60 Hz)
  └── □ Fixed-point implementation

□ Classification
  ├── □ Window buffer (200-500 samples)
  ├── □ Feature extraction (time-domain preferred)
  └── □ Decision tree or threshold-based

Resource Requirements

| Component | Flash | RAM |
|-----------|-------|-----|
| Filter coefficients | ~100 B | ~50 B |
| Feature extraction | ~500 B | ~1 KB |
| Decision tree (depth 5) | ~1 KB | ~50 B |
| Sample buffer (500 samples) | - | 1-2 KB |
| Total | ~2 KB | ~3 KB |

Section 5: Arduino Code Reference

// EMG Processing on Arduino
#define EMG_PIN A0
#define THRESHOLD_LIGHT 200
#define THRESHOLD_STRONG 400
#define BUFFER_SIZE 100

float emgBuffer[BUFFER_SIZE];
int bufferIndex = 0;

void setup() {
    Serial.begin(115200);  // required before Serial.println() in loop()
}

// Calculate Mean Absolute Value (most useful for MCU)
float calculateMAV() {
    float sum = 0;
    for (int i = 0; i < BUFFER_SIZE; i++) {
        sum += abs(emgBuffer[i]);
    }
    return sum / BUFFER_SIZE;
}

// Calculate RMS (Root Mean Square)
float calculateRMS() {
    float sum = 0;
    for (int i = 0; i < BUFFER_SIZE; i++) {
        sum += emgBuffer[i] * emgBuffer[i];
    }
    return sqrt(sum / BUFFER_SIZE);
}

void loop() {
    // Read EMG (assumes pre-amplified signal)
    int raw = analogRead(EMG_PIN);
    emgBuffer[bufferIndex] = raw - 512;  // Center around zero
    bufferIndex = (bufferIndex + 1) % BUFFER_SIZE;
    
    // Classify every 100ms
    static unsigned long lastClassify = 0;
    if (millis() - lastClassify > 100) {
        float mav = calculateMAV();
        
        if (mav < THRESHOLD_LIGHT) {
            Serial.println("REST");
        } else if (mav < THRESHOLD_STRONG) {
            Serial.println("LIGHT_GRIP");
        } else {
            Serial.println("STRONG_GRIP");
        }
        lastClassify = millis();
    }
    
    delay(1);  // ~1 kHz sampling
}

Checkpoint Questions

  1. What is the useful frequency range for surface EMG?
    • Answer: 20-450 Hz, with most power in 50-150 Hz
  2. Write the formula for RMS:
    • Answer: \(RMS = \sqrt{\frac{1}{N}\sum x_i^2}\)
  3. Why do we use a notch filter at 50/60 Hz?
    • Answer: To remove powerline interference
  4. Which features are best for MCU implementation and why?
    • Answer: Time-domain (MAV, RMS, WL) - no FFT needed

Part of the Edge Analytics Lab Book

⚠️ Common Issues and Debugging

If EMG signal is too noisy:
- Check: Are electrodes properly positioned? → Clean skin with alcohol, use conductive gel
- Check: Is ground electrode attached? → Must have reference electrode on bony area
- Check: Are wires shielded? → Use shielded cables, twist pairs, keep away from power cables
- Check: Is gain too high? → Typical EMG needs 1000× amplification, not 10,000×
- Diagnostic: Check raw signal amplitude (should be 100 µV - 5 mV)

If seeing 50/60 Hz powerline interference:
- Check: Is ground electrode connected? → Floating ground causes massive interference
- Check: Is device properly grounded? → USB isolation can help
- Check: Are notch filter parameters correct? → Use 50 Hz (Europe) or 60 Hz (US)
- Hardware fix: Add common-mode rejection with differential amplifier (CMRR > 80 dB)

If classifier accuracy is low:
- Check: Is window size appropriate? → Too small = noisy features, too large = slow response
- Check: Are features normalized? → Use StandardScaler before training
- Check: Is training data representative? → Collect data from multiple sessions/users
- Check: Is electrode placement consistent? → Mark positions, document in protocol
- Diagnostic: Plot feature distributions per class (should be separable)

If real-time classification is slow:
- Check: Is window sliding or tumbling? → Tumbling (non-overlapping) is 2-10× faster
- Check: Are you computing FFT? → Time-domain only is 5-10× faster
- Check: Is decision tree depth too large? → Limit to 5-8 levels for MCU
- MCU optimization: Use fixed-point arithmetic instead of float

MyoWare sensor specific issues:
- Sensor outputs 0-Vcc (not centered at Vcc/2) → Don’t use AC coupling
- Gain is fixed at 201 → Can’t adjust in software
- Bandwidth is 5-450 Hz → Already filtered, don’t bandpass again
- Output impedance ~5 kΩ → Use ADC with high input impedance or buffer

Electrode placement tips:
- Bicep: Place on muscle belly, 2-3 cm apart, ground on elbow
- Forearm: Place over extensor/flexor group, avoid tendon areas
- Hand: Difficult - use multi-electrode arrays for better SNR

Section 9: EMG Signal Generation and Simulation

Let’s create more realistic synthetic EMG signals that model actual muscle activity.
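One common model, sketched below, treats surface EMG as band-limited Gaussian noise whose amplitude is modulated by a muscle-activation envelope. All parameters (burst timing, baseline level, the name `synth_emg`) are illustrative:

```python
import numpy as np
from scipy import signal

def synth_emg(fs=1000, duration=3.0, burst=(1.0, 2.0), amp=1.0, seed=0):
    """Synthetic EMG: 20-450 Hz bandpassed noise scaled by an activation envelope."""
    rng = np.random.default_rng(seed)
    n = int(fs * duration)
    t = np.arange(n) / fs
    # EMG-like carrier: white noise bandpassed to the 20-450 Hz EMG band
    b, a = signal.butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype='band')
    carrier = signal.filtfilt(b, a, rng.standard_normal(n))
    # Activation envelope: low resting baseline, raised during the contraction
    envelope = 0.05 + amp * ((t >= burst[0]) & (t < burst[1]))
    return t, carrier * envelope
```

A more faithful simulator would smooth the envelope edges and superimpose motor-unit action-potential trains, but this version is enough to exercise the filtering and feature-extraction pipeline.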

Section 10: Advanced Filtering - Bandpass and Notch Filters

EMG requires carefully designed filters to remove artifacts while preserving signal content.

Section 11: Multi-Class Gesture Classification

Build a complete gesture recognition system using extracted EMG features.

Section 12: Real-Time Classification Visualization

Simulate real-time gesture detection as it would work on an embedded device.

Three-Tier Activities

Environment: local Jupyter or Colab, no hardware required.

Run the embedded notebook or open it in Colab:

  1. Generate and visualise synthetic EMG signals for rest vs contraction.
  2. Implement the processing pipeline: filtering → rectification → envelope.
  3. Compute basic features (MAV, RMS, ZC, WL) over sliding windows.
  4. Train a small classifier and inspect accuracy vs model size (float vs quantized).
  5. Answer the reflection questions in the notebook about edge constraints (what model size and latency would be acceptable on an Arduino-class device?).

In this lab we do not yet use a dedicated MCU simulator like Wokwi for EMG, but you can approximate “simulated hardware” as follows:

  • Use the notebook to replay recorded EMG data (either from your own logs or from an open dataset).
  • Implement the same moving-average and feature extraction code that will later run on Arduino, but execute it on your laptop or on a Raspberry Pi.
  • Plot:
    • Raw vs filtered vs envelope signals
    • Feature trajectories over time for different gestures

Suggested workflow:

  1. Collect or download EMG samples and save them as CSV.
  2. Load them in the notebook and run the processing pipeline.
  3. Measure approximate per-window processing time on your machine or Pi (this will guide your expectations before moving to the microcontroller).
  4. Record a short note on how window size and feature set affect latency and accuracy.
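For step 3, a simple host-side timing harness looks like the sketch below. The placeholder `process_window` stands in for your own pipeline; swap in your actual filtering and feature code before recording numbers.

```python
import time
import numpy as np

def process_window(x):
    # Placeholder pipeline: rectify + moving average + two features
    env = np.convolve(np.abs(x), np.ones(50) / 50, mode='same')
    return np.mean(env), np.sqrt(np.mean(x ** 2))

window = np.random.default_rng(0).standard_normal(200)  # 200 ms @ 1 kHz
n_runs = 1000
t0 = time.perf_counter()
for _ in range(n_runs):
    process_window(window)
per_window_ms = (time.perf_counter() - t0) / n_runs * 1e3
print(f'~{per_window_ms:.3f} ms per window')
```

Averaging over many runs smooths out timer jitter; note that a Raspberry Pi will be slower than your laptop, and a microcontroller slower still.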

Planned improvement: In a future revision we will add a dedicated simulations/emg-simulator.qmd page with an interactive EMG waveform visualiser. For now, treat the notebook as your Level 2 “host-side simulator”.

Now move to real hardware using an Arduino (e.g., Uno or Nano 33 BLE Sense), an EMG sensor module (MyoWare/AD8232), and a small servo:

  1. Wire the EMG sensor (signal → A0, 5V/3V3 and GND as per the module documentation).
  2. Upload the EMG processing sketch from the notebook/code repository:
    • Moving-average filter
    • Threshold-based or feature-based gesture classification with hysteresis
  3. Open the Serial Plotter and verify that rest vs contraction are clearly separable.
  4. Add a servo and map gesture classes to positions (e.g., open/close grip).
  5. Note approximate behaviour:
    • Responsiveness (delay from muscle activation to motion)
    • Stability (jitter, false triggers)
    • Any power constraints (servo power supply, battery life)

If you build the more advanced multi-channel prosthetic arm described in the PDF:

  • Document your hardware (servos, battery, 3D-printed parts).
  • Log EMG and actuator behaviour and reflect on how the system might need to be redesigned for lower power or smaller MCUs.