CapgMyo: A High Density Surface Electromyography Database for Gesture Recognition

Abstract

High-density surface electromyography (HD-sEMG) records muscles' electrical activity from a restricted area of the skin using two-dimensional arrays of closely spaced electrodes. This technique allows sEMG signals to be analysed and modelled in both the temporal and spatial domains, opening new possibilities for studying next-generation muscle-computer interfaces (MCIs). However, the absence of a standard benchmark database limits the use of HD-sEMG in real-world human-computer interaction. To address this, we present a benchmark database of HD-sEMG recordings of hand gestures performed by 23 participants, acquired with an 8x16 electrode array. We verified that different hand gestures can be recognized from instantaneous values of the sEMG signals. The database thus provides a foundation for comparing gesture-recognition algorithms and for developing MCIs based on HD-sEMG.

The acquisition setup: (a) the HD-sEMG electrode array; (b) 8 HD-sEMG electrode arrays on the right forearm; (c) the HD-sEMG acquisition device ready for capture; (d) the software subsystem that presents the guided hand gesture and records the HD-sEMG data simultaneously.

This database is a part of our sEMG-based gesture recognition project.

Participants

We recruited 23 healthy, able-bodied subjects ranging in age from 23 to 26 years. Each subject was paid to perform a set of gestures with a non-invasive wearable acquisition device.

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Zhejiang University, China. Written informed consent was obtained from all subjects.

Acquisition Setup

We developed a non-invasive wearable device to collect HD-sEMG data. This device consisted of 8 acquisition modules. Each acquisition module contained a matrix-type (8x2) differential electrode array, in which each electrode had a diameter of 3 mm and was arranged with an inter-electrode distance of 7.5 mm horizontally and 10.05 mm vertically. The silver wet electrodes were disposable and covered with conductive gel, with a contact impedance of less than 3 kΩ. The 8 acquisition modules were fixed around the right forearm with adhesive bands. The first acquisition module was placed on the extensor digitorum communis muscle at the height of the radio-humeral joint; the others were equally spaced clockwise from the subject's perspective, forming an 8x16 electrode array. The sEMG signals were band-pass filtered at 20-380 Hz and sampled at 1,000 Hz with a 16-bit A/D conversion. Each resulting sample was normalized to the range [-1, 1], corresponding to a voltage range of [-2.5 mV, 2.5 mV]. The sEMG data from the 8 acquisition modules were packed by an ARM controller and transferred to a PC via Wi-Fi. The entire device was powered by a rechargeable lithium battery.
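As a small illustration of the quantization described above, the normalized samples can be mapped back to voltages by a single scale factor. This is a sketch only; the helper name below is our own and not part of the database tooling.

```python
import numpy as np

# Hypothetical helper (not part of the CapgMyo tooling): map a
# normalized sEMG sample in [-1, 1] back to millivolts, given the
# documented [-2.5 mV, 2.5 mV] input range.
def normalized_to_mv(x):
    return np.asarray(x, dtype=float) * 2.5

print(normalized_to_mv(1.0))    # full-scale sample -> 2.5 mV
print(normalized_to_mv([-0.5, 0.0, 0.2]))
```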

On the PC, our software displayed an animated 3D virtual hand driven by pre-captured data from a data glove. The subjects were asked to mimic the hand gestures shown on the screen with their right hand; this software thus captured the sEMG data and labelled each frame in terms of the gesture performed by the virtual hand.

Acquisition Protocol

Before the acquisition, subjects watched a tutorial video to familiarize themselves with the experiment. During the acquisition, subjects sat comfortably in an office chair and rested their hands on a desktop. Their skin was cleansed with rubbing alcohol prior to electrode placement. The subjects were asked to mimic the gestures performed by the virtual hand shown on the screen by using their right hands. The interval between two consecutive recording sessions for the same subject, i.e., between doffing and donning the device, was at least one week.

Our set of gestures was a subset of the gestures in the NinaPro database, chosen with the same aim of covering the majority of the finger movements encountered in activities of daily living; this also makes it possible to compare gesture-recognition performance between high-density and sparse multi-channel sEMG signals. Each gesture was held for 3 to 10 seconds and repeated 10 times. To avoid fatigue, the gestures were alternated with a resting posture lasting 7 seconds. Because the gestures were performed in a fixed order, repetitive, almost unconscious movements were encouraged, as in the NinaPro protocol. For each recording session, two additional max-force gestures were each performed once to estimate the maximal voluntary contraction (MVC) force level.

The CapgMyo database was divided into three sub-databases (denoted as DB-a, DB-b and DB-c) in terms of the acquisition procedure. DB-a contains 8 isometric and isotonic hand gestures obtained from 18 of the 23 subjects. The gestures in DB-a correspond to Nos. 13-20 in the NinaPro database. Each gesture in DB-a was held for 3 to 10 seconds. DB-b contains the same gesture set as in DB-a but was obtained from 10 of the 23 subjects. Every subject in DB-b contributed two recording sessions on different days, with an inter-recording interval greater than one week. As a result, the electrodes of the array were attached at slightly different positions each time. DB-c contains 12 basic movements of the fingers obtained from 10 of the 23 subjects. The gestures in DB-c correspond to Nos. 1-12 in the NinaPro database. Each gesture in DB-b and DB-c was held for approximately 3 seconds. To ensure lower skin impedance in DB-b and DB-c, the skin was abraded with soft sandpaper before being cleansed with alcohol.

Whereas DB-a was intended for fine-tuning the hyper-parameters of the recognition model, DB-b and DB-c were intended for intra-subject and inter-subject evaluation. Cross-session recognition of hand gestures from sEMG typically suffers from electrode shift between recording sessions; DB-b allows methods that address this problem to be evaluated.

Gestures in CapgMyo

(a) Twelve basic movements of the fingers; (b) 8 isometric and isotonic hand configurations; (c) gestures performed to estimate the maximal voluntary contraction (MVC) force. The instances are screenshots of the guiding virtual hand.

8 isometric and isotonic hand configurations (DB-a and DB-b):

Label  Description
1      Thumb up
2      Extension of index and middle, flexion of the others
3      Flexion of ring and little finger, extension of the others
4      Thumb opposing base of little finger
5      Abduction of all fingers
6      Fingers flexed together in fist
7      Pointing index
8      Adduction of extended fingers
Twelve basic movements of the fingers (DB-c):

Label  Description
1      Index flexion
2      Index extension
3      Middle flexion
4      Middle extension
5      Ring flexion
6      Ring extension
7      Little finger flexion
8      Little finger extension
9      Thumb adduction
10     Thumb abduction
11     Thumb flexion
12     Thumb extension
Gestures performed to estimate the MVC force:

Label  Description
100    Abduction of all fingers
101    Fingers flexed together in fist
Subject ID in each sub-database

Some subjects participated in the acquisition of more than one sub-database. Each subject in DB-b took part in two recording sessions, which are marked with different IDs.

Subject ID  ID in DB-a  ID in DB-b (Session 1)  ID in DB-b (Session 2)  ID in DB-c
1 - - - 1
2 1 1 2 2
3 2 3 4 -
4 - 5 6 3
5 3 - - -
6 4 7 8 4
7 5 9 10 -
8 - 11 12 -
9 6 13 14 -
10 7 - - -
11 8 - - -
12 - 15 16 5
13 9 - - 6
14 - - - 7
15 10 - - -
16 11 - - 8
17 12 17 18 -
18 13 - - -
19 14 - - -
20 15 - - -
21 16 - - 9
22 17 - - -
23 18 19 20 10

Preprocessing

Power-line interference was removed from the sEMG signals with a band-stop filter (45-55 Hz, second-order Butterworth). The label of each frame was assigned on the basis of the gesture performed by the guiding virtual hand in our acquisition software; the gestures actually performed by the subjects may therefore not perfectly match the labels, owing to human reaction times. In this study, only the static part of each movement was used to evaluate the recognition algorithms: for each trial, only the middle one-second window, i.e., 1,000 frames of data, was retained. The raw data are also available in the online repository.
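The preprocessing just described can be sketched in Python with SciPy. Whether the original pipeline applied the filter causally or zero-phase is not stated, so the use of filtfilt below is an assumption:

```python
import numpy as np
from scipy import signal

FS = 1000  # sampling rate in Hz, as described above

# Second-order Butterworth band-stop filter (45-55 Hz) against
# power-line interference.
b, a = signal.butter(2, [45, 55], btype='bandstop', fs=FS)

def preprocess(trial):
    """trial: (n_frames, 128) raw sEMG array.
    Returns the filtered middle one-second window (1,000 frames)."""
    filtered = signal.filtfilt(b, a, trial, axis=0)  # zero-phase: an assumption
    mid = filtered.shape[0] // 2
    return filtered[mid - FS // 2 : mid + FS // 2]

# e.g. on a synthetic 3-second trial:
window = preprocess(np.random.randn(3000, 128))
print(window.shape)  # (1000, 128)
```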

Data Records

The data records are in Matlab format. Each sub-database contains sss_ggg.mat for the raw data and sss_ggg_ttt.mat for the preprocessed data, where sss is the subject ID, ggg is the gesture ID, and ttt is the trial ID. For example, 004_001.mat contains the data (including the rest posture) from subject 4 performing gesture 1, and 004_001_003.mat contains the preprocessed 3rd trial.

Variables in the data records
Raw data (sss_ggg.mat):
Name     Type               Description
data     n x 128 matrix     sEMG signals, where n is the number of frames.
gesture  n x 1 matrix       The gesture ID of each frame, where 0 denotes the rest posture.
subject  Scalar             The subject ID.

Preprocessed data (sss_ggg_ttt.mat):
Name     Type               Description
data     1000 x 128 matrix  sEMG signals.
gesture  Scalar             The gesture ID.
subject  Scalar             The subject ID.
trial    Scalar             The trial ID.
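Although the records are stored in Matlab format, they can also be read in Python with scipy.io.loadmat. The sketch below demonstrates this on a synthetic record with the documented layout, since the real files must be downloaded first; the loader function is our own, not part of the database.

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

def load_trial(path):
    """Load one preprocessed CapgMyo record (sss_ggg_ttt.mat)."""
    rec = loadmat(path)
    return {
        'data': rec['data'],                    # 1000 x 128 sEMG matrix
        'gesture': int(rec['gesture'].item()),  # gesture ID
        'subject': int(rec['subject'].item()),  # subject ID
        'trial': int(rec['trial'].item()),      # trial ID
    }

# Synthetic stand-in for a real record such as 001_001_001.mat.
path = os.path.join(tempfile.gettempdir(), 'demo_001_001_001.mat')
savemat(path, {'data': np.zeros((1000, 128)),
               'gesture': 1, 'subject': 1, 'trial': 1})

rec = load_trial(path)
print(rec['data'].shape, rec['gesture'])  # (1000, 128) 1
```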
Download

The data records are grouped by subject ID in each sub-database.

DB-a: Preprocessed (001-018), Raw (001-018)
DB-b: Preprocessed (001-020), Raw (001-020)
DB-c: Preprocessed (001-010), Raw (001-010)

BibTex

@article{Du_Sensors_2017,
    title={{Surface EMG-based inter-session gesture recognition enhanced by deep domain adaptation}},
    author={Du, Yu and Jin, Wenguang and Wei, Wentao and Hu, Yu and Geng, Weidong},
    journal={Sensors},
    volume={17},
    number={3},
    pages={458},
    year={2017},
    publisher={Multidisciplinary Digital Publishing Institute}
}

Acknowledgements

This work was supported by a grant from the National Natural Science Foundation of China (No. 61379067) and the National Key Research and Development Program of China (No. 2016YFB1001300).