A multi-stream convolutional neural network for sEMG-based gesture recognition in muscle-computer interface

Abstract

In muscle-computer interface (MCI), deep learning is a promising technology for building classifiers that recognize gestures from surface electromyography (sEMG) signals. Motivated by the observation that a small group of muscles plays a significant role in specific hand movements, we propose a multi-stream convolutional neural network (CNN) framework to improve the recognition accuracy of gestures by learning the correlation between individual muscles and specific gestures with a “divide-and-conquer” strategy. Its pipeline consists of two stages, namely the multi-stream decomposition stage and the fusion stage. During the multi-stream decomposition stage, it first decomposes the original sEMG image into equal-sized patches (streams) according to the layout of electrodes on muscles, and for each stream, it independently learns representative features with a CNN. Then, during the fusion stage, it fuses the features learned from all streams into a unified feature map, which is subsequently fed into a fusion network to recognize gestures. Evaluations on three benchmark sEMG databases showed that our proposed multi-stream CNN framework outperformed the state of the art on sEMG-based gesture recognition.

Conceptual diagram of our proposed multi-stream divide-and-conquer framework. The input to the framework is sEMG signals recorded by C channels within an L-frame time window (L = 1 for HD-sEMG). The multi-stream CNN and the fusion network are shown in gray dashed boxes. Conv, LC and FC denote convolution layer, locally-connected layer and fully-connected layer, respectively. The number after the layer name denotes the number of filters, and the numbers after @ denote the convolution kernel size.
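The two-stage pipeline can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the channel count, number of streams, and the single-convolution "feature extractor" standing in for each per-stream CNN are all assumptions made for the example.

```python
import numpy as np

# Illustrative dimensions (not taken from the paper):
C, L = 16, 1          # 16 electrode channels, single-frame window (L = 1 for HD-sEMG)
N_STREAMS = 4         # decompose into 4 equal-sized patches (streams)

def decompose_streams(semg_image, n_streams):
    """Multi-stream decomposition stage: split the C x L sEMG image into
    equal-sized patches along the electrode axis, one patch per muscle group."""
    return np.split(semg_image, n_streams, axis=0)

def stream_features(patch, kernel):
    """Stand-in for the per-stream CNN: one valid-mode 1-D convolution
    over the electrode axis followed by a ReLU."""
    conv = np.convolve(patch[:, 0], kernel, mode="valid")
    return np.maximum(conv, 0.0)

def fuse(per_stream_features):
    """Fusion stage: concatenate per-stream features into a unified
    feature map, which a fusion network would then classify."""
    return np.concatenate(per_stream_features)

semg = np.random.randn(C, L)                 # dummy sEMG "image"
kernel = np.array([0.25, 0.5, 0.25])         # illustrative smoothing kernel
streams = decompose_streams(semg, N_STREAMS)
fused = fuse([stream_features(p, kernel) for p in streams])
print(fused.shape)  # (8,): 4 streams x 2 valid-convolution outputs each
```

In the actual framework, each stream is processed by its own CNN and the fused feature map feeds a fusion network of locally-connected and fully-connected layers, as shown in the diagram above.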

BibTeX

@article{WEI2019131,
    title = "A multi-stream convolutional neural network for sEMG-based gesture recognition in muscle-computer interface",
    journal = "Pattern Recognition Letters",
    volume = "119",
    pages = "131 - 138",
    year = "2019",
    note = "Deep Learning for Pattern Recognition",
    issn = "0167-8655",
    doi = "10.1016/j.patrec.2017.12.005",
    url = "http://www.sciencedirect.com/science/article/pii/S0167865517304439",
    author = "Wentao Wei and Yongkang Wong and Yu Du and Yu Hu and Mohan Kankanhalli and Weidong Geng",
    keywords = "Surface electromyography, Muscle-computer interface, Gesture recognition, Deep learning, Convolutional neural network",
    abstract = "In muscle-computer interface (MCI), deep learning is a promising technology to build-up classifiers for recognizing gestures from surface electromyography (sEMG) signals. Motivated by the observation that a small group of muscles play significant roles in specific hand movements, we propose a multi-stream convolutional neural network (CNN) framework to improve the recognition accuracy of gestures by learning the correlation between individual muscles and specific gestures with a “divide-and-conquer” strategy. Its pipeline consists of two stages, namely the multi-stream decomposition stage and the fusion stage. During the multi-stream decomposition stage, it first decomposes the original sEMG image into equal-sized patches (streams) by the layout of electrodes on muscles, and for each stream, it independently learns representative features by a CNN. Then during the fusion stage, it fuses the features learned from all streams into a unified feature map, which is subsequently fed into a fusion network to recognize gestures. Evaluations on three benchmark sEMG databases showed that our proposed multi-stream CNN framework outperformed the state-of-the-arts on sEMG-based gesture recognition."
    }

Acknowledgements

This work was supported by grants from the National Key Research and Development Program of China (No. 2016YFB1001302), the National Natural Science Foundation of China (No. 61379067), and the National Research Foundation, Prime Minister's Office, Singapore under its International Research Centre in Singapore Funding Initiative.