How the Dataset is structured.
The StressID Dataset contains:
- physiological recordings for 65 participants,
- video recordings for 54 participants, and
- audio recordings for 56 participants.
Following data collection, we split each recorded session into individual tasks: one 3-minute breathing recording (block 1), two recordings corresponding to the video-clip watching (block 2), one of 2 and one of 3 minutes, seven separate 1-minute recordings of the interactive tasks (block 3), and one 5-minute relaxation recording (block 4).
After splitting, StressID consists of approximately 1119 minutes of annotated physiological signal recordings, 918 minutes of video recordings, and 385 minutes of audio.
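To make the per-session layout concrete, here is a minimal Python sketch of the splitting step, assuming the tasks are contiguous and occur in the order listed above. The task names, the `split_session` helper, and the sampling-rate argument are illustrative assumptions rather than the dataset's actual file layout; the per-clip minutes for the two videos are inferred from the composition table below.

```python
# Nominal task durations (in minutes) implied by the block description above.
NOMINAL_DURATIONS_MIN = {
    "Breathing": 3,   # block 1
    "Video 1": 3,     # block 2 (per-clip minutes inferred from the table below)
    "Video 2": 2,     # block 2
    "Counting 1": 1,  # block 3: interactive tasks, 1 minute each
    "Counting 2": 1,
    "Stroop": 1,
    "Speaking": 1,
    "Math": 1,
    "Reading": 1,
    "Counting 3": 1,
    "Relax": 5,       # block 4
}

def split_session(signal, fs_hz, task_order=tuple(NOMINAL_DURATIONS_MIN)):
    """Cut one continuous recording into per-task segments.

    `signal` is a 1-D sample array recorded at `fs_hz`; tasks are assumed
    to be contiguous and in the order listed above (an assumption, not a
    guarantee about the released recordings).
    """
    segments, start = {}, 0
    for task in task_order:
        n_samples = int(NOMINAL_DURATIONS_MIN[task] * 60 * fs_hz)
        segments[task] = signal[start:start + n_samples]
        start += n_samples
    return segments
```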
Dataset composition. Each cell gives the number of recordings, with the total duration in minutes in parentheses.
| Task / Stressor | Physiological: count (min) | Video: count (min) | Audio: count (min) |
|---|---|---|---|
| Breathing | 65 (195) | 52 (156) | 0 (0) |
| Video 1 | 64 (185) | 52 (150) | 0 (0) |
| Video 2 | 64 (126) | 53 (104) | 0 (0) |
| Counting 1 | 65 (65) | 54 (54) | 55 (55) |
| Counting 2 | 65 (65) | 54 (54) | 55 (55) |
| Stroop | 65 (65) | 54 (54) | 55 (55) |
| Speaking | 65 (65) | 54 (54) | 55 (55) |
| Math | 65 (65) | 54 (54) | 55 (55) |
| Reading | 65 (65) | 54 (54) | 55 (55) |
| Counting 3 | 65 (65) | 54 (54) | 55 (55) |
| Relax | 63 (158) | 52 (130) | 0 (0) |
| Total | 711 (1119) | 587 (918) | 385 (385) |
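For readers who script against the release, a small Python check that the Total row is the column-wise sum of the per-task rows; the figures are copied verbatim from the table, and the variable names are illustrative.

```python
# Per-task composition: (phys_count, phys_min, video_count, video_min, audio_count, audio_min)
rows = {
    "Breathing":  (65, 195, 52, 156,  0,  0),
    "Video 1":    (64, 185, 52, 150,  0,  0),
    "Video 2":    (64, 126, 53, 104,  0,  0),
    "Counting 1": (65,  65, 54,  54, 55, 55),
    "Counting 2": (65,  65, 54,  54, 55, 55),
    "Stroop":     (65,  65, 54,  54, 55, 55),
    "Speaking":   (65,  65, 54,  54, 55, 55),
    "Math":       (65,  65, 54,  54, 55, 55),
    "Reading":    (65,  65, 54,  54, 55, 55),
    "Counting 3": (65,  65, 54,  54, 55, 55),
    "Relax":      (63, 158, 52, 130,  0,  0),
}

# Column-wise sums over all tasks should reproduce the Total row.
totals = tuple(sum(col) for col in zip(*rows.values()))
assert totals == (711, 1119, 587, 918, 385, 385)
```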
More technical information can be found here.