Surface Electromyography-based Gesture Recognition by Multi-view Deep Learning

Abstract

    Gesture recognition using sparse multichannel Surface Electromyography (sEMG) is a challenging problem, and existing solutions remain far from optimal from the perspective of Muscle-Computer Interfaces (MCIs). In this work, we address this problem in the context of multi-view deep learning. We propose a novel multi-view Convolutional Neural Network (CNN) framework that combines classical sEMG feature sets with a CNN-based deep learning model. The framework consists of two parts. In the first part, multi-view representations of sEMG are modeled in parallel by a multi-stream CNN, and a performance-based view construction strategy is proposed to select the most discriminative views from classical feature sets for sEMG-based gesture recognition. In the second part, the learned multi-view deep features are fused through a view aggregation network composed of early and late fusion subnetworks, taking advantage of both early and late fusion of the learned features. Evaluations on 11 sparse multichannel sEMG databases, as well as 5 databases containing both sEMG and Inertial Measurement Unit (IMU) data, demonstrate that our multi-view framework outperforms single-view methods on both unimodal and multimodal sEMG data streams.
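
To make the two-part design concrete, the sketch below shows one way such a framework could be wired up in PyTorch. It is a minimal illustration, not the authors' implementation: the number of views, the per-stream layer sizes, the 52-class output, and the simple averaging of the early- and late-fusion branches are all assumptions made for the example.

    import torch
    import torch.nn as nn

    class MultiViewNet(nn.Module):
        """Sketch of a multi-view network: one CNN stream per sEMG view,
        followed by a view aggregation stage with early and late fusion.
        All layer sizes are placeholders, not the published architecture."""

        def __init__(self, num_views=3, num_classes=52, feat_dim=128):
            super().__init__()
            # One small CNN stream per view (the parallel multi-stream part).
            self.streams = nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(1, 64, kernel_size=3, padding=1),
                    nn.BatchNorm2d(64),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(),
                    nn.Linear(64, feat_dim),
                    nn.ReLU(),
                )
                for _ in range(num_views)
            ])
            # Early fusion: concatenate per-view features, classify jointly.
            self.early_head = nn.Linear(num_views * feat_dim, num_classes)
            # Late fusion: one classifier per view; the scores are averaged.
            self.late_heads = nn.ModuleList(
                [nn.Linear(feat_dim, num_classes) for _ in range(num_views)])

        def forward(self, views):
            # views: list of tensors, one (N, 1, H, W) image per view.
            feats = [stream(v) for stream, v in zip(self.streams, views)]
            early = self.early_head(torch.cat(feats, dim=1))
            late = torch.stack(
                [head(f) for head, f in zip(self.late_heads, feats)]).mean(dim=0)
            # Combine the two fusion branches (a simple average here).
            return (early + late) / 2

    # Usage: three hypothetical views of an 8-channel, 20-frame sEMG window.
    net = MultiViewNet(num_views=3, num_classes=52)
    views = [torch.randn(4, 1, 8, 20) for _ in range(3)]
    print(net(views).shape)  # torch.Size([4, 52])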

Illustration of the proposed multi-view deep learning framework for sEMG-based gesture recognition. Conv, LC, FC and BN denote the convolution layer, locally-connected layer, fully-connected layer and batch normalization, respectively. The number following a layer name denotes the number of filters, and the numbers after the at sign (@) denote the convolution kernel size.
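
Read with the caption's notation, a single per-view stream might look like the following sketch; this is again an assumption for illustration, not the published architecture. PyTorch has no built-in locally-connected layer, so the LC layer is stood in for here by a 1x1 convolution (a true LC layer differs only in that its weights are not shared across spatial positions), and the input shape and filter counts are placeholders.

    import torch
    import torch.nn as nn

    # Hypothetical single-view stream in the caption's notation:
    # Conv64@3x3 -> BN -> Conv64@3x3 -> BN -> LC64 -> FC512 -> FC128.
    stream = nn.Sequential(
        nn.Conv2d(1, 64, kernel_size=3, padding=1),   # Conv64@3x3
        nn.BatchNorm2d(64), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1),  # Conv64@3x3
        nn.BatchNorm2d(64), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=1),             # stand-in for LC64
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 8 * 20, 512), nn.ReLU(),       # FC512
        nn.Linear(512, 128), nn.ReLU(),               # FC128
    )

    x = torch.randn(4, 1, 8, 20)  # batch of 8-channel, 20-frame sEMG images
    print(stream(x).shape)        # torch.Size([4, 128])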

BibTeX

@article{wei2019surface,
    title={Surface-Electromyography-Based Gesture Recognition by Multi-View Deep Learning},
    author={Wei, Wentao and Dai, Qingfeng and Wong, Yongkang and Hu, Yu and Kankanhalli, Mohan and Geng, Weidong},
    journal={IEEE Transactions on Biomedical Engineering},
    volume={66},
    number={10},
    pages={2964--2973},
    year={2019},
    publisher={IEEE}
}

Acknowledgements

This work was supported by grants from the National Key Research and Development Program of China (No. 2016YFB1001302), the National Natural Science Foundation of China (No. 61379067), and the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centres in Singapore Funding Initiative.