【Prosthetic Hands】15 Third Generation - Software Design PART I
The software design of my third prosthetic hand can be divided into two parts:
1. EMG acquisition, feature extraction (Part I), and classification (Part II)
2. The decision strategies to utilize the classification results (Part III)
Today I will introduce the first part - EMG acquisition, feature extraction, and classification. In the previous generation, I attempted to use low-cost wet electrodes to detect EMG signals. However, the wet electrodes turned out to be inconvenient and inaccurate, so I eventually returned to the EMG sensor used in the first generation - the MYO Armband. Developed by Thalmic Lab., the sensor includes eight sets of dry electrodes that acquire eight channels of EMG signals. Despite its relatively high price, roughly 200 USD, the device is easy to wear, and the signals it acquires are good enough for gesture recognition. In the first generation, I directly used the classifier developed by the company, which can accurately recognize five gestures. This time, I intended to develop the classifier myself. Combined with my investigation into feature extraction methods, this resulted in a customizable classifier that supports more classes of gestures.
Photo from Thalmic Lab., edited
The process starts with acquisition. When the EMG signals are collected by the sensor, they first go through the device's built-in filters. After that, a moving average is applied to smooth the signals. Next, I determine the window size. In my design, the window size is determined by the duration of each motion. To adequately separate each motion's signals, I first measure the maximum possible value of each channel. Then, I use these maximum values to find a proper threshold for each channel.
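To make this step concrete, here is a minimal sketch of the smoothing and per-channel thresholding idea in NumPy. The function names, the window length, and the threshold ratio are my own illustrative assumptions, not the exact values used in my design.

```python
import numpy as np

def moving_average(emg, win=25):
    """Smooth each of the 8 EMG channels with a simple moving average.
    emg: array of shape (8, N); win: number of samples to average over."""
    kernel = np.ones(win) / win
    return np.array([np.convolve(ch, kernel, mode="same") for ch in emg])

def channel_thresholds(calibration_emg, ratio=0.2):
    """Estimate a per-channel activation threshold as a fraction (assumed 20%)
    of the maximum rectified amplitude observed during calibration."""
    rectified = np.abs(calibration_emg)      # shape (8, N)
    max_per_channel = rectified.max(axis=1)  # shape (8,)
    return ratio * max_per_channel

# Example usage with placeholder data standing in for one calibration recording:
# calib = np.random.randn(8, 2000) * 50
# smoothed = moving_average(np.abs(calib))
# thresholds = channel_thresholds(smoothed)
```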
Now I can tell which channels are activated during hand motions. However, the extent to which each channel is activated differs between motions, so I cannot simply pick one channel and treat its activation as the presence of a motion. How to solve the problem? I turn to the concept of the agonist-antagonist relationship between muscle groups. Since the electrodes are evenly dispersed around the arm, I reason that performing a gesture must come with the activation of four channels. This mechanism turns out to work well, and with it I can make sure that every motion is captured.
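Below is a hedged sketch of that activation rule: a motion is taken to be present when at least four of the eight channels exceed their thresholds at the same time, following the text above. The function names and the exact windowing logic are hypothetical.

```python
import numpy as np

def active_channels(smoothed, thresholds):
    """Boolean mask of shape (8, N): True where a channel exceeds its threshold."""
    return smoothed > thresholds[:, None]

def motion_present(smoothed, thresholds, min_channels=4):
    """Mark samples where at least `min_channels` of the eight channels are active."""
    return active_channels(smoothed, thresholds).sum(axis=0) >= min_channels

def extract_motion_window(smoothed, thresholds):
    """Return the 8 x N segment spanning the first detected motion, or None if no motion is found."""
    present = motion_present(smoothed, thresholds)
    idx = np.flatnonzero(present)
    if idx.size == 0:
        return None
    return smoothed[:, idx[0]:idx[-1] + 1]
```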
The original signals of a single motion are presented as eight series of data points (or an 8×N matrix, where N is determined by the duration of the motion), which cannot be used directly to build a classifier. The features of these series must first be extracted. In my research I investigated several feature extraction methods, including MAV, VAR, RMS, WL, BZC, SSC, and WAMP. Each method has its own way of turning a series of data points into a single value. After feature extraction, eight feature values from the eight channels are obtained. These values are ready to be used to build a classifier or to be classified.
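As an illustration, here is a sketch of a few of these features computed per channel (MAV, RMS, waveform length, and Willison amplitude), following their standard definitions; the WAMP threshold value is an assumption, and the remaining features in the list follow the same per-channel pattern.

```python
import numpy as np

def mav(x):
    """Mean absolute value of one channel."""
    return np.mean(np.abs(x))

def rms(x):
    """Root mean square of one channel."""
    return np.sqrt(np.mean(x ** 2))

def waveform_length(x):
    """Waveform length: sum of absolute sample-to-sample differences."""
    return np.sum(np.abs(np.diff(x)))

def wamp(x, thr=10.0):
    """Willison amplitude: count of sample-to-sample changes above a threshold (assumed value)."""
    return np.sum(np.abs(np.diff(x)) > thr)

def extract_features(window, feature_fn=mav):
    """Apply one feature function to each of the 8 channels of a motion window.
    window: array of shape (8, N). Returns a vector of 8 feature values."""
    return np.array([feature_fn(ch) for ch in window])
```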
In the next part, I will introduce how the features extracted from the EMG signals are used to build three types of classifiers: multidimensional SVM, KNN, and LDA/QDA.