Points of departure

Community of learners

As researchers we often communicate the results of our studies and describe our methods, but we do not always tune that communication to teach or guide readers in using or reproducing those methods. This platform aims to promote literacy in, and use of, each other's methods. Everyone can contribute.

Didactical

Each contributed module introduces code together with instructions on how to use it in practice. The modules are meant to introduce learners to new concepts and routines, rather than to share code among experts.

Self-ownership

Each contributor to a module also provides the appropriate citation for their module. In this way, contributors are properly acknowledged when their work has been helpful to others in the community.

Build to grow

We can imagine this platform expanding in scope. It may host lectures introducing general theoretical and methodological frameworks. It may include a well-curated update bulletin featuring new papers and tools of interest to the community. If you have ideas and time to help out, please reach out.

Modules

Gesture networks and DTW (Python)

This module shows an implementation of gesture networks and gesture spaces, utilizing dynamic time warping.
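
As a rough illustration of the underlying idea (not the module's own code), the sketch below computes pairwise dynamic time warping distances between toy gesture trajectories with a plain dynamic-programming DTW and turns them into a weighted similarity network; the gesture data and similarity weighting are invented for illustration.

    import numpy as np
    import networkx as nx

    def dtw_distance(a, b):
        # a, b: arrays of shape (time, dims); classic dynamic-programming DTW
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    # toy gestures: (time, x/y) trajectories of different lengths
    gestures = [np.random.rand(50, 2), np.random.rand(60, 2), np.random.rand(40, 2)]

    G = nx.Graph()
    for i in range(len(gestures)):
        for j in range(i + 1, len(gestures)):
            d = dtw_distance(gestures[i], gestures[j])
            G.add_edge(i, j, weight=1.0 / (1.0 + d))  # similarity as inverse distance
    print(nx.to_numpy_array(G, weight="weight"))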

Dynamic visualization dashboard (Python)

This module provides an example of a dynamic dashboard with audio-visual and static data.
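
A minimal sketch of what such a dashboard can look like, assuming Plotly Dash as the framework (the module's own stack may differ); the video file and data are placeholders.

    import plotly.express as px
    from dash import Dash, dcc, html

    df = px.data.gapminder().query("year == 2007")  # placeholder static data
    fig = px.scatter(df, x="gdpPercap", y="lifeExp", hover_name="country")

    app = Dash(__name__)
    app.layout = html.Div([
        html.H2("Audio-visual + static data"),
        html.Video(src="/assets/example.mp4", controls=True, width=480),  # hypothetical video file
        dcc.Graph(figure=fig),
    ])

    if __name__ == "__main__":
        app.run(debug=True)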

Full-body tracking (+masking) (Python)

This module provides an overview of how to track the face, hands, and body using Mediapipe, with the option of masking the individual in the video.
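
A minimal sketch of the idea, using MediaPipe's Holistic solution for face, hand, and body landmarks and its Selfie Segmentation solution for masking; the video path is hypothetical and the module's own pipeline may differ.

    import cv2
    import mediapipe as mp

    holistic = mp.solutions.holistic.Holistic()
    segment = mp.solutions.selfie_segmentation.SelfieSegmentation()

    cap = cv2.VideoCapture("example_video.mp4")  # hypothetical input video
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = holistic.process(rgb)                        # pose, face, and hand landmarks
        mask = segment.process(rgb).segmentation_mask > 0.5    # person mask
        frame[mask] = 0                                        # black out the individual
        if results.pose_landmarks:
            print(results.pose_landmarks.landmark[0])          # e.g. the nose landmark
    cap.release()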

Running OpenPose in batches (Batch script)

This module demonstrates running OpenPose on a set of videos using batch scripting.
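
The module itself uses a batch script; as an analogous sketch in Python, the loop below calls the OpenPose demo executable once per video via subprocess. The installation path and folder names are hypothetical.

    import subprocess
    from pathlib import Path

    openpose_bin = r"C:\openpose\bin\OpenPoseDemo.exe"   # hypothetical install path
    video_dir = Path("videos")
    out_dir = Path("openpose_output")

    for video in video_dir.glob("*.mp4"):
        json_dir = out_dir / video.stem
        json_dir.mkdir(parents=True, exist_ok=True)
        subprocess.run([openpose_bin,
                        "--video", str(video),
                        "--write_json", str(json_dir),
                        "--display", "0", "--render_pose", "0"],
                       check=True)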

3D tracking from 2D videos using anipose and deeplabcut (Python)

This module demonstrates how to set up a 3D motion tracking system from multiple 2D cameras, using anipose together with deeplabcut for human pose tracking.
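
A rough sketch of the DeepLabCut side of such a setup, assuming an existing DeepLabCut project; camera calibration and triangulation are then handled by anipose (typically via its command-line interface). Project paths and video names are hypothetical.

    import deeplabcut

    config_path = "my_dlc_project/config.yaml"            # hypothetical DLC project
    videos = ["cam1/trial01.mp4", "cam2/trial01.mp4"]      # synchronized 2D views

    deeplabcut.analyze_videos(config_path, videos)         # 2D pose estimation per camera
    # Afterwards, from an anipose project folder, one would run (roughly):
    #   anipose calibrate     # camera calibration
    #   anipose triangulate   # combine 2D tracks into 3D coordinates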

Aligning and pre-processing multiple data streams (R)

This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (e.g., smoothing) so that you end up with a single long time-series dataset ready for further processing.

Aligning and pre-processing multiple data streams (Python)

This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (e.g., smoothing) so that you end up with a single long time-series dataset ready for further processing.
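
A minimal sketch of the general idea, assuming a motion tracking signal and an acoustic feature sampled at different rates that are merged onto one timeline and smoothed; the column names, rates, and smoothing settings are invented for illustration.

    import numpy as np
    import pandas as pd
    from scipy.signal import savgol_filter

    motion = pd.DataFrame({"time_ms": np.arange(0, 5000, 10),      # 100 Hz tracking
                           "wrist_y": np.random.rand(500)})
    audio = pd.DataFrame({"time_ms": np.arange(0, 5000, 20),       # 50 Hz acoustic feature
                          "envelope": np.random.rand(250)})

    # Align on the nearest timestamp, keeping the motion sampling as the master clock
    merged = pd.merge_asof(motion, audio, on="time_ms", direction="nearest")

    # Smooth the tracking signal (window and order are arbitrary choices here)
    merged["wrist_y_smooth"] = savgol_filter(merged["wrist_y"], window_length=11, polyorder=3)
    print(merged.head())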

Extracting a smoothed amplitude envelope from sound (R)

This module provides an example of extracting a smoothed amplitude envelope from a sound file.
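
The module's code is in R; as an analogous sketch in Python, one common approach is a Hilbert-transform amplitude envelope followed by low-pass filtering. The file name and cutoff frequency below are placeholders.

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import hilbert, butter, filtfilt

    sr, audio = wavfile.read("example.wav")          # hypothetical mono sound file
    audio = audio.astype(float)

    envelope = np.abs(hilbert(audio))                # instantaneous amplitude
    b, a = butter(2, 5 / (sr / 2), btype="low")      # 5 Hz low-pass for smoothing
    smoothed = filtfilt(b, a, envelope)
    print(smoothed[:10])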

Motion tracking analysis: Kinematic feature extraction (Python)

This module provides an example of how to analyze motion tracking data using kinematic feature extraction.
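
As a minimal sketch of kinematic feature extraction (not the module's own code), the example below derives speed, acceleration, and simple summary features from a toy 2D keypoint trajectory; the sampling rate and feature names are illustrative.

    import numpy as np
    from scipy.signal import find_peaks

    fps = 30.0
    xy = np.cumsum(np.random.randn(300, 2), axis=0)    # toy wrist trajectory (x, y)

    velocity = np.gradient(xy, 1.0 / fps, axis=0)      # per-axis derivative
    speed = np.linalg.norm(velocity, axis=1)           # scalar speed per frame
    accel = np.gradient(speed, 1.0 / fps)

    peaks, _ = find_peaks(speed, height=np.mean(speed))
    features = {"peak_velocity": float(speed.max()),
                "mean_speed": float(speed.mean()),
                "n_speed_peaks": int(len(peaks))}
    print(features)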

Feature extraction for machine classification & practice dataset SAGA (R)

This module introduces a practice dataset and provides code in R for setting up a kinematic and speech-acoustic feature dataset that can be used to train a machine classifier for gesture types.
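
The module's code is in R; as a Python sketch of the same idea, a feature table of kinematic and acoustic measures can be used to train and evaluate a gesture-type classifier. The feature names, labels, and classifier choice below are invented for illustration.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    features = pd.DataFrame({
        "peak_velocity": rng.random(200),
        "gesture_duration": rng.random(200),
        "envelope_peak": rng.random(200),
    })
    labels = rng.choice(["iconic", "beat"], size=200)   # toy gesture-type labels

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, features, labels, cv=5).mean())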