Gesture networks and DTW (Python)
This module shows an implementation of gesture networks and gesture spaces, using dynamic time warping.
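A minimal sketch of the core idea, assuming plain NumPy (the module itself may rely on a dedicated DTW library): compute a DTW distance between two gesture time series of unequal length, then fill a pairwise distance matrix that can serve as the basis of a gesture network or gesture space. The toy trajectories and variable names are illustrative only.

```python
# Dynamic time warping distance between two gesture time series, and a
# pairwise distance matrix as the basis for a gesture network (sketch).
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1D time series (e.g., vertical wrist position)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy gesture trajectories of unequal length (hypothetical data)
gestures = [np.sin(np.linspace(0, 3, t)) for t in (40, 55, 48)]

# Pairwise DTW distances -> adjacency matrix of a gesture network/space
dist = np.zeros((len(gestures), len(gestures)))
for i in range(len(gestures)):
    for j in range(i + 1, len(gestures)):
        dist[i, j] = dist[j, i] = dtw_distance(gestures[i], gestures[j])
print(dist)
```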
Dynamic visualization dashboard (Python)
This module provides an example of a dynamic dashboard combining audio-visual and static data.
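The sketch below shows one way such a dashboard could be wired up, assuming Plotly Dash; the actual module may use a different framework, layout, and data. The signal, slider range, and component ids are placeholders.

```python
# Minimal dynamic dashboard sketch: a slider controls how much of a time
# series is plotted (stand-in for browsing motion/audio features).
import numpy as np
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * t)          # placeholder for a motion/audio feature

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Gesture/speech time series"),
    dcc.Slider(id="cutoff", min=1, max=10, step=1, value=10),
    dcc.Graph(id="ts-plot"),
])

@app.callback(Output("ts-plot", "figure"), Input("cutoff", "value"))
def update_plot(cutoff):
    # Re-plot the time series up to the selected time point
    mask = t <= cutoff
    return px.line(x=t[mask], y=signal[mask],
                   labels={"x": "time (s)", "y": "amplitude"})

if __name__ == "__main__":
    app.run(debug=True)   # app.run_server(debug=True) in older Dash versions
```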
Full-body tracking (+masking) (Python)
This module provides an overview of how to track the face, hands, and body using Mediapipe, with the option of masking the individual in the video.
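A minimal sketch of the tracking-plus-masking idea using Mediapipe's Holistic solution; the video path, threshold, and the masking choice (blacking out the person via the segmentation mask) are illustrative assumptions, not the module's exact settings.

```python
# Face, hand, and body tracking with Mediapipe Holistic, plus optional
# masking of the person via the segmentation mask (sketch).
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture("example_video.mp4")  # hypothetical input file
with mp_holistic.Holistic(enable_segmentation=True) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        # Keypoints for body, face, and hands (None if not detected)
        pose = results.pose_landmarks
        face = results.face_landmarks
        left, right = results.left_hand_landmarks, results.right_hand_landmarks

        # Optional masking: blank out the person using the segmentation mask
        if results.segmentation_mask is not None:
            person = results.segmentation_mask > 0.5
            frame[person] = 0  # replace the individual with black pixels

cap.release()
```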
Running OpenPose in batches (Batch script)
This module demonstrates running OpenPose on a set of videos using batch scripting.
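The module itself is a Windows batch script; the sketch below expresses the same loop in Python with subprocess, calling the OpenPose demo binary once per video. The install path and output folders are placeholders; the flags shown (--video, --write_json, --display, --render_pose) are standard OpenPose options.

```python
# Loop over a folder of videos and run OpenPose on each one (sketch).
import subprocess
from pathlib import Path

OPENPOSE_ROOT = Path(r"C:\openpose")                 # hypothetical install location
OPENPOSE_BIN = OPENPOSE_ROOT / "bin" / "OpenPoseDemo.exe"
VIDEO_DIR = Path("videos")
OUT_DIR = Path("openpose_output")
OUT_DIR.mkdir(exist_ok=True)

for video in sorted(VIDEO_DIR.glob("*.mp4")):
    json_dir = OUT_DIR / video.stem
    json_dir.mkdir(exist_ok=True)
    subprocess.run([
        str(OPENPOSE_BIN),
        "--video", str(video),
        "--write_json", str(json_dir),   # per-frame keypoints as JSON
        "--display", "0",                # run headless, no GUI
        "--render_pose", "0",            # skip rendering for speed
    ], check=True, cwd=OPENPOSE_ROOT)    # OpenPose expects to run from its root folder
```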
3D tracking from 2D videos using anipose and deeplabcut (Python)
This module demonstrates how to set up a 3D motion tracking system from multiple 2D camera views, using anipose for calibration and triangulation and deeplabcut for human pose tracking.
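A rough sketch of the workflow, assuming a trained deeplabcut project and an anipose project folder already exist (all paths are placeholders): each camera view is tracked in 2D with deeplabcut, after which anipose's command-line tools handle camera calibration and triangulation into 3D.

```python
# 2D-to-3D workflow sketch: per-camera 2D tracking with deeplabcut, then
# calibration and triangulation with the anipose CLI.
import deeplabcut

config_path = "my_dlc_project/config.yaml"                 # hypothetical DLC project
camera_videos = ["session1/cam1.mp4", "session1/cam2.mp4"]  # one video per camera

# Step 1: 2D pose tracking for each camera view
deeplabcut.analyze_videos(config_path, camera_videos, save_as_csv=True)

# Step 2: from the anipose project folder, run the anipose command-line tools, e.g.:
#   anipose calibrate      # estimate camera parameters from calibration-board videos
#   anipose triangulate    # combine per-camera 2D tracks into 3D trajectories
```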
Aligning and pre-processing multiple data streams (R)
This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (e.g., smoothing) so that you end up with one long time-series dataset ready for further processing.
Aligning and pre-processing multiple data streams (Python)
This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (e.g., smoothing) so that you end up with one long time-series dataset ready for further processing.
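The sketch below illustrates the workflow shared by the two modules above (here in Python, assuming pandas and SciPy): resample differently sampled streams onto one time base, smooth a motion signal, and attach interval annotations, yielding a single long time-series table. Sampling rates, column names, and the toy data are illustrative.

```python
# Merge differently sampled streams into one time series, smooth, and annotate (sketch).
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

# Toy streams at different sampling rates
motion = pd.DataFrame({"time_ms": np.arange(0, 5000, 20),            # 50 Hz tracking
                       "wrist_y": np.random.randn(250).cumsum()})
audio = pd.DataFrame({"time_ms": np.arange(0, 5000, 10),             # 100 Hz envelope
                      "envelope": np.abs(np.random.randn(500))})

# Resample both onto a common 100 Hz time base by nearest-time matching
grid = pd.DataFrame({"time_ms": np.arange(0, 5000, 10)})
merged = pd.merge_asof(grid, motion, on="time_ms", direction="nearest")
merged = pd.merge_asof(merged, audio, on="time_ms", direction="nearest")

# Smooth the motion signal (Savitzky-Golay filter as one smoothing option)
merged["wrist_y_smooth"] = savgol_filter(merged["wrist_y"], window_length=11, polyorder=3)

# Attach annotation labels by interval lookup (e.g., gesture vs. no gesture)
annotations = pd.DataFrame({"onset": [1000], "offset": [2500], "label": ["gesture"]})
merged["label"] = None
for _, row in annotations.iterrows():
    in_span = merged["time_ms"].between(row["onset"], row["offset"])
    merged.loc[in_span, "label"] = row["label"]
print(merged.head())
```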
Extracting a smoothed amplitude envelope from sound (R)
This module provides an example of extracting a smoothed amplitude envelope from a sound file.
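The module works in R; a comparable sketch of the same idea in Python is shown below, assuming SciPy: take the analytic amplitude of the waveform via a Hilbert transform and low-pass filter it into a smoothed envelope. The file name and cutoff frequency are placeholders.

```python
# Smoothed amplitude envelope from a sound file (sketch).
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

sr, wav = wavfile.read("speech.wav")    # hypothetical sound file
if wav.ndim > 1:                        # keep one channel if the file is stereo
    wav = wav[:, 0]
wav = wav.astype(float)

envelope = np.abs(hilbert(wav))         # instantaneous amplitude (analytic signal)

# Smooth with a low-pass Butterworth filter (e.g., 5 Hz cutoff)
b, a = butter(2, 5 / (sr / 2), btype="low")
smoothed_envelope = filtfilt(b, a, envelope)
```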
Motion tracking analysis: Kinematic feature extraction (Python)
This module provides an example of how to analyze motion tracking data using kinematic feature extraction.
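A minimal sketch of kinematic feature extraction, assuming a table of time-stamped 2D keypoint positions (column names and the toy trajectory are illustrative): derive speed and acceleration, then summarize them into features such as peak speed, number of submovements, and path length.

```python
# Kinematic feature extraction from tracked keypoints (sketch).
import numpy as np
import pandas as pd
from scipy.signal import find_peaks

# Toy tracking data: 50 Hz, x/y position of one keypoint
t = np.arange(0, 5, 0.02)
df = pd.DataFrame({"time": t, "x": np.sin(t * 2), "y": np.cos(t * 3)})

# Speed and acceleration from frame-to-frame displacement
vx = np.gradient(df["x"], df["time"])
vy = np.gradient(df["y"], df["time"])
df["speed"] = np.hypot(vx, vy)
df["acceleration"] = np.gradient(df["speed"], df["time"])

# Example kinematic features: peak speed, number of submovements, path length
peaks, _ = find_peaks(df["speed"], height=df["speed"].mean())
features = {
    "peak_speed": df["speed"].max(),
    "n_submovements": len(peaks),
    "path_length": np.sum(np.hypot(np.diff(df["x"]), np.diff(df["y"]))),
}
print(features)
```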
Feature extraction for machine classification & practice dataset SAGA (R)
This module introduces a practice dataset and provides R code for setting up a kinematic and speech-acoustic feature dataset that can be used to train a machine classifier for gesture types.
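The module builds the feature set in R; the sketch below shows the downstream step in Python with scikit-learn as an illustration: a table of kinematic and acoustic features with gesture-type labels is split into training and test sets and fed to a classifier. The feature names, labels, and random data are placeholders, not the SAGA dataset itself.

```python
# Train a gesture-type classifier on a kinematic/acoustic feature table (sketch).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = pd.DataFrame({
    "peak_speed": rng.normal(1.0, 0.3, 200),        # hypothetical kinematic feature
    "gesture_duration": rng.normal(0.8, 0.2, 200),
    "envelope_peak": rng.normal(0.5, 0.1, 200),      # hypothetical acoustic feature
})
labels = rng.choice(["iconic", "beat"], size=200)    # hypothetical gesture types

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```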