【Prosthetic Hands】16 Third Generation - Software Design PART II

After extracting features from the raw EMG signals, I implement three types of classifiers in MATLAB.
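One of the features used later in this article is RMS (root mean square). As a rough illustration of what such a feature looks like, here is a minimal sliding-window RMS sketch in Python; the window and step sizes are placeholders, not the values used in this project:

```python
import math

def rms(window):
    """Root-mean-square of one EMG window: sqrt(mean(x^2))."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def extract_rms_features(signal, window_size=200, step=100):
    """Slide a window over a 1-D EMG signal and compute RMS per window.
    Window/step lengths here are illustrative only."""
    return [rms(signal[i:i + window_size])
            for i in range(0, len(signal) - window_size + 1, step)]
```

Each motion trial thus becomes a short vector of RMS values, one per window, which is what the classifiers below consume.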

1. Multi-class Support Vector Machine
Support Vector Machine (SVM) aims to find a hyperplane that creates the largest margin between the data points of different classes. The support vectors, defined as the data points closest to the hyperplane, determine this margin. In this project, several kernel functions are applied to look for the classifier with the best performance, including linear, quadratic, cubic, and Gaussian kernels. SVM is inherently a binary classifier and does not directly support multi-class classification. Two common methods decompose a multi-class problem into binary ones: the one-vs-one method and the one-vs-rest method. In the MATLAB program, the one-vs-one method is applied to enable multi-class classification.
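The one-vs-one idea can be sketched independently of SVM itself: train one binary classifier per pair of classes, then let all of them vote. The sketch below uses a toy nearest-center rule as a stand-in for a trained binary SVM; the class names and centers are made up for illustration:

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(x, classes, binary_predict):
    """One-vs-one multi-class decision by majority vote.
    `binary_predict(x, a, b)` stands in for a trained binary classifier
    that returns either class `a` or class `b` for sample `x`."""
    votes = Counter(binary_predict(x, a, b) for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# Toy stand-in for a pairwise SVM: pick whichever class center is nearer.
centers = {"rest": 0.0, "grasp": 1.0, "pinch": 2.0}

def toy_binary(x, a, b):
    return a if abs(x - centers[a]) <= abs(x - centers[b]) else b
```

With eight motion classes, one-vs-one trains 8 × 7 / 2 = 28 pairwise classifiers, and the motion collecting the most pairwise votes wins.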

2. K-nearest Neighbors
K-nearest Neighbors (KNN) is one of the simplest classifiers. KNN predicts the label of an unknown point according to the labels of its k closest neighbors in the training data. An adequately selected k value can prevent both overfitting and underfitting, so I include three k values to find the optimal one. Three distance metrics are compared: Euclidean, cubic (Minkowski with p = 3), and cosine distance. Finally, weighted KNN is also included in this research; it takes into account not only the labels of the nearest training points but also how close they are to the query.
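A minimal distance-weighted KNN can be written in a few lines. This sketch uses Euclidean distance and inverse-distance weights; the weighting scheme is one common choice, not necessarily the one MATLAB uses internally:

```python
import math
from collections import defaultdict

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def weighted_knn_predict(query, train, k=3):
    """Distance-weighted KNN: nearer neighbors cast larger votes (1/d).
    `train` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda fl: euclidean(query, fl[0]))[:k]
    votes = defaultdict(float)
    for features, label in neighbors:
        d = euclidean(query, features)
        votes[label] += 1.0 / (d + 1e-9)  # small epsilon avoids division by zero
    return max(votes, key=votes.get)
```

Plain KNN is the special case where every neighbor's vote counts equally; the weighted variant lets a single very close neighbor outweigh several distant ones.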

3. Linear / Quadratic Discriminant Analysis
Linear Discriminant Analysis (LDA) projects the data points from a high-dimensional space to a lower-dimensional one. The goal of LDA is to ensure that the projected training data have the minimum intra-class distance and the maximum inter-class distance. LDA assumes that the data in each class are Gaussian with a shared covariance, so it performs poorly on non-Gaussian data. Quadratic Discriminant Analysis (QDA) allows each class its own covariance and can therefore produce quadratic decision boundaries, which makes it more flexible in some classification problems. For classes with similar variance, LDA is often good enough to classify the data points; when the variances of the different classes differ substantially, QDA can reach a better result.
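The shared-variance vs. per-class-variance distinction is easiest to see in one dimension. The sketch below classifies a scalar by Gaussian log-likelihood: with `quadratic=False` it pools one variance across classes (LDA-style), with `quadratic=True` each class keeps its own variance (QDA-style). It is a didactic toy, not the project's MATLAB implementation, and the pooled variance is a simple unweighted average:

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-density of a 1-D Gaussian at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def fit_stats(samples):
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, v

def discriminant_predict(x, class_samples, quadratic=False):
    """1-D Gaussian discriminant: LDA pools one shared variance,
    QDA lets each class keep its own."""
    stats = {c: fit_stats(s) for c, s in class_samples.items()}
    if not quadratic:
        pooled = sum(v for _, v in stats.values()) / len(stats)
        stats = {c: (m, pooled) for c, (m, _) in stats.items()}
    return max(stats, key=lambda c: gaussian_loglik(x, *stats[c]))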

I acquired 4,000 EMG signals from eight motion types, including Power Grasping, Wrist Extension, Lateral Pinch, Flattened Hand, Centralized Grip, Hook Grasping, Three-jaw Chuck, and Flexion (500 trials each). Half of them are used to train the classifiers with 5-fold cross-validation to prevent overfitting; the other half are held out as the data to be predicted. I use the F-score to evaluate the effectiveness of each classifier, and the results are shown in the following table.
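For reference, the F-score for a multi-class problem is typically computed per class from precision and recall and then averaged. The article does not state which averaging it uses; the sketch below shows one common choice, the macro-averaged F1:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F-score: per-class F1 = 2PR/(P+R), averaged over classes."""
    classes = sorted(set(y_true))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Unlike raw accuracy, the F-score penalizes a classifier that achieves its hits by over-predicting a few easy motions, which matters when all eight motion classes are equally important.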


The combination of Medium Gaussian SVM and the RMS feature reaches the best result, roughly 96.2%. Though the result seems promising, some aspects of practical usage are neglected. The experiment only includes one subject's EMG data, so it is uncertain whether the result would hold for other users. Also, EMG signals are easily affected by the condition of the skin, fatigue of the forearm muscles, and slight differences in signal distribution when performing similar motions.

For the hand to be truly applicable in daily life, I need a system that can optimally utilize the classification results and, hopefully, protect users from dangerous misclassifications. This idea eventually develops into the practical decision strategy, the primary topic of the next article.


