Points of departure

Community of learners

As researchers, we often communicate the results of our studies and describe our methods, but we don't always tailor that communication to teach or guide readers in using or reproducing those methods. This platform aims to promote methodological literacy and the use of one another's methods, and everyone is welcome to contribute.

Didactical

Each contributed module introduces code with instructions on how to use it in practice. The modules are designed to introduce learners to new concepts and routines, rather than just sharing code with other experts.

Self-ownership

Each module contributor provides the appropriate citation for their module, so they are properly acknowledged when they have been helpful to others in the community.

Build to grow

We envision this platform expanding in scope to host lectures on general theoretical and methodological frameworks, as well as a well-curated update bulletin featuring new papers and tools of interest to the community. If you have ideas and time to help out, please reach out.

Modules

Gesture networks and DTW (Python)

This module demonstrates how to implement gesture networks and gesture spaces using dynamic time warping.
By Wim Pouw
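
To give a flavor of the core idea (a minimal sketch, not the module's own code), here is dynamic time warping in plain NumPy, used to build the pairwise distance matrix from which a gesture network can be constructed:

```python
# Minimal sketch: pairwise DTW distances between 1-D gesture trajectories,
# which can then serve as edge weights in a gesture network.
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) DTW with absolute-difference cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy gesture trajectories (e.g., vertical wrist position over time).
gestures = [np.sin(np.linspace(0, t, 50)) for t in (3.0, 3.2, 6.0)]
dist = np.array([[dtw_distance(a, b) for b in gestures] for a in gestures])
print(dist)  # small distances = kinematically similar gestures
```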

Demo of OpenPose tracking with 3D triangulation via Pose2Sim (Python)

This module provides a Python pipeline for OpenPose tracking and 3D triangulation with Pose2Sim.
By Šárka Kadavá & Wim Pouw
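
As a rough sketch of the triangulation side, assuming Pose2Sim's documented pipeline functions (verify the names against the current Pose2Sim docs) and a project folder that already contains a Config.toml plus per-camera OpenPose keypoint JSONs:

```python
# Hedged sketch, not the module's code: Pose2Sim is typically run from a
# project folder with a Config.toml and OpenPose 2D keypoint JSONs per
# camera. Function names are assumed from the Pose2Sim documentation.
from Pose2Sim import Pose2Sim

Pose2Sim.calibration()        # estimate camera intrinsics/extrinsics
Pose2Sim.personAssociation()  # match the same person across camera views
Pose2Sim.triangulation()      # lift 2D keypoints to 3D coordinates
Pose2Sim.filtering()          # smooth the resulting 3D trajectories
```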

Dynamic visualization dashboard (Python)

This module provides an example of a dynamic dashboard that displays audio-visual and static data.
By Wim Pouw
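
As an illustration of what such a dashboard involves, here is a minimal sketch using Plotly Dash (one common Python choice; the module itself may use a different toolkit), with a slider standing in for a video playhead scrubbing a time series:

```python
# Minimal illustrative dashboard, assuming a recent Plotly Dash install.
from dash import Dash, dcc, html, Input, Output
import numpy as np
import plotly.graph_objects as go

t = np.linspace(0, 10, 500)
signal = np.sin(2 * np.pi * 0.5 * t)  # stand-in for e.g. a motion trace

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Time-series viewer (illustrative)"),
    dcc.Slider(0, 10, value=5, id="cursor"),  # scrub through time
    dcc.Graph(id="trace"),
])

@app.callback(Output("trace", "figure"), Input("cursor", "value"))
def update(cursor):
    fig = go.Figure(go.Scatter(x=t, y=signal, mode="lines"))
    fig.add_vline(x=cursor)  # moving cursor, like a video playhead
    return fig

if __name__ == "__main__":
    app.run(debug=True)
```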

Full-body tracking (+masking) (Python)

This module shows how to track the face, hands, and body using MediaPipe, with the option of masking the individual in the video.
By Wim Pouw
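
A hedged sketch of the general approach, assuming MediaPipe's classic "solutions" API (Holistic for the landmarks, SelfieSegmentation for the masking); the input filename is hypothetical:

```python
# Sketch, not the module's code: track face/hands/body with Holistic,
# then black out the person using a segmentation mask.
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic()
segment = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

cap = cv2.VideoCapture("video.mp4")  # hypothetical input file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = holistic.process(rgb)                     # pose/face/hand landmarks
    mask = segment.process(rgb).segmentation_mask > 0.5 # person = foreground
    frame[mask] = 0                                     # mask the individual
    if results.pose_landmarks:
        mp.solutions.drawing_utils.draw_landmarks(
            frame, results.pose_landmarks,
            mp.solutions.holistic.POSE_CONNECTIONS)
    cv2.imshow("masked tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```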

Head rotation tracking by adapting MediaPipe (Python)

This module shows a way to track head direction, in addition to the face, hand, and body tracking done with MediaPipe.
By Wim Pouw
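
One common head-pose recipe, sketched here without claiming it is the module's exact method: fit a generic 3D face model to a handful of MediaPipe Face Mesh landmarks with cv2.solvePnP and read off Euler angles. The landmark indices and model coordinates below are stock tutorial values, not taken from the module:

```python
import cv2
import numpy as np
import mediapipe as mp

# Generic 3D face model points (arbitrary units) and the Face Mesh indices
# assumed to match them: nose tip, chin, eye outer corners, mouth corners.
MODEL = np.array([(0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
                  (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
                  (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)])
IDX = [1, 152, 33, 263, 61, 291]

frame = cv2.imread("frame.png")                     # hypothetical image
h, w = frame.shape[:2]
with mp.solutions.face_mesh.FaceMesh() as face_mesh:
    res = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
lm = res.multi_face_landmarks[0].landmark           # assumes a face was found
pts2d = np.array([(lm[i].x * w, lm[i].y * h) for i in IDX], dtype=np.float64)

cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
ok, rvec, tvec = cv2.solvePnP(MODEL, pts2d, cam, np.zeros((4, 1)))
R, _ = cv2.Rodrigues(rvec)
angles = cv2.RQDecomp3x3(R)[0]   # (pitch, yaw, roll) in degrees, approx.
print("pitch/yaw/roll:", angles)
```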

Running OpenPose in batches (Batch script)

This module demonstrates how to use batch scripting to run OpenPose on a set of videos.
By James Trujillo & Wim Pouw
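
The same batching idea sketched in Python rather than a Windows batch script (the OpenPose install path and binary name are assumptions; --video and --write_json are standard OpenPose demo flags):

```python
# Loop over a folder of videos and call the OpenPose demo binary on each,
# writing per-frame keypoint JSONs to a matching output folder.
import pathlib
import subprocess

OPENPOSE = r"C:\openpose\bin\OpenPoseDemo.exe"   # hypothetical install path

for vid in pathlib.Path("videos").glob("*.mp4"):
    out = pathlib.Path("keypoints") / vid.stem
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run([OPENPOSE,
                    "--video", str(vid),
                    "--write_json", str(out),
                    "--display", "0",        # no GUI, faster in batch
                    "--render_pose", "0"],   # skip rendering overlays
                   check=True)
```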

Recording from multiple cameras synchronously while also streaming to LSL

This module demonstrates how to record from multiple cameras synchronously, which is very helpful for creating your own 3D motion tracking pipeline.
By Šárka Kadavá & Wim Pouw
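
A minimal sketch of the recording loop (not the module's code), assuming OpenCV for capture and pylsl for the LSL stream: frames from both cameras are grabbed back-to-back and a per-frame marker is pushed to LSL so the videos can later be aligned with other streams:

```python
import cv2
from pylsl import StreamInfo, StreamOutlet, local_clock

info = StreamInfo(name="FrameMarkers", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string")
outlet = StreamOutlet(info)

caps = [cv2.VideoCapture(i) for i in (0, 1)]  # two camera indices
writers = [cv2.VideoWriter(f"cam{i}.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                           30, (640, 480))    # assumed fps and frame size
           for i in (0, 1)]

for frame_n in range(300):                    # ~10 s at 30 fps
    frames = [cap.read()[1] for cap in caps]  # read back-to-back
    outlet.push_sample([f"frame_{frame_n}"], local_clock())
    for writer, frame in zip(writers, frames):
        writer.write(frame)

for cap in caps:
    cap.release()
for writer in writers:
    writer.release()
```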

3D tracking from 2D videos using anipose and DeepLabCut (Python)

This module shows how to set up a 3D motion tracking system with multiple 2D cameras, using anipose and human pose tracking with DeepLabCut.
By Wim Pouw
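
A hedged sketch of the pipeline shape: DeepLabCut produces 2D keypoints per camera, and anipose (driven from the command line inside a structured project folder) calibrates and triangulates to 3D. Names follow the two projects' documentation; all paths are hypothetical:

```python
import subprocess
import deeplabcut

config = "my-dlc-project/config.yaml"            # hypothetical DLC project
deeplabcut.analyze_videos(config, ["cam1.mp4", "cam2.mp4"])  # 2D tracking

# anipose expects a project folder with a config.toml and per-session
# calibration/video subfolders; run these from inside that folder:
subprocess.run(["anipose", "calibrate"], check=True)    # board-based calibration
subprocess.run(["anipose", "triangulate"], check=True)  # 2D -> 3D
```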

Aligning and pre-processing multiple data streams (R)

This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (smoothing) to create a single long time series dataset ready for further processing.
By Wim Pouw

Aligning and pre-processing multiple data streams (Python)

This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (smoothing) to create a single long time series dataset ready for further processing.
By Wim Pouw
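
To illustrate the idea behind both the R and Python versions of this module (a sketch, not their code): align two streams sampled on different clocks with pandas.merge_asof, then smooth with a Savitzky-Golay filter:

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

# Toy streams on different clocks: 100 Hz motion, 50 Hz acoustics.
motion = pd.DataFrame({"t": np.arange(0, 5, 0.01)})
motion["wrist_y"] = np.sin(motion["t"])
audio = pd.DataFrame({"t": np.arange(0, 5, 0.02)})
audio["envelope"] = np.abs(np.cos(audio["t"]))

# Nearest-timestamp merge onto the motion clock, then smoothing.
merged = pd.merge_asof(motion, audio, on="t", direction="nearest")
merged["wrist_y_smooth"] = savgol_filter(merged["wrist_y"],
                                         window_length=11, polyorder=3)
print(merged.head())
```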

Extracting a smoothed amplitude envelope from sound (R)

This module demonstrates how to extract a smoothed amplitude envelope from a sound file.
By Wim Pouw
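
The module itself is in R; the same recipe sketched in Python is to take the magnitude of the Hilbert transform and low-pass filter it (the 5 Hz cutoff is a common choice for speech envelopes, not necessarily the module's):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

sr, wav = wavfile.read("speech.wav")      # hypothetical mono file
wav = wav.astype(float)
envelope = np.abs(hilbert(wav))           # instantaneous amplitude
b, a = butter(2, 5 / (sr / 2))            # 5 Hz low-pass filter
smoothed = filtfilt(b, a, envelope)       # zero-phase smoothing
```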

Motion tracking analysis: Kinematic feature extraction (Python)

This module provides an example of how to analyze motion tracking data using kinematic feature extraction.
By James Trujillo
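
A minimal sketch of the kind of features involved (illustrative, not the module's feature set): speed, peak velocity, and a submovement count derived from velocity peaks:

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0
xy = np.cumsum(np.random.randn(300, 2), axis=0)           # toy wrist trajectory
vel = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps   # speed in units/s

features = {
    "mean_speed": vel.mean(),
    "peak_velocity": vel.max(),
    "n_submovements": len(find_peaks(vel, prominence=vel.std())[0]),
}
print(features)
```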

Feature extraction for machine classification & practice dataset SAGA (R)

This module introduces a practice dataset and provides R code for setting up a kinematic and speech acoustic feature dataset that can be used to train a machine classifier for gesture types.
By Wim Pouw
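
The module is in R; as a hedged Python sketch of the downstream step, here is a feature table fed to a scikit-learn classifier. The feature names, labels, and values are made up for illustration and are not from the SAGA dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({                  # stand-in for a real feature table
    "peak_velocity": [1.2, 0.4, 2.1, 0.3, 1.8, 0.5],
    "envelope_peak": [0.8, 0.2, 0.9, 0.1, 0.7, 0.3],
    "gesture_type":  ["iconic", "beat", "iconic", "beat", "iconic", "beat"],
})
X, y = df.drop(columns="gesture_type"), df["gesture_type"]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3))  # cross-validated accuracy
```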

Turn-Taking Dynamics and Entropy (Python)

This module introduces the calculation of turn-taking measures, such as gaps and overlaps, as well as entropy, from conversations with two or more speakers.
By James Trujillo
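
A minimal sketch of the two measures (not the module's code): floor-transfer offsets computed from turn start/end times, where positive values are gaps and negative values are overlaps, plus Shannon entropy over each speaker's share of talk time:

```python
import numpy as np

# (speaker, start_s, end_s), sorted by start time; toy annotation data.
turns = [("A", 0.0, 1.8), ("B", 2.0, 3.5), ("A", 3.3, 5.0), ("B", 5.6, 7.0)]

offsets = [turns[i + 1][1] - turns[i][2] for i in range(len(turns) - 1)]
print("gaps/overlaps (s):", offsets)   # negative values are overlaps

durations = {}
for spk, start, end in turns:
    durations[spk] = durations.get(spk, 0.0) + (end - start)
p = np.array(list(durations.values()))
p = p / p.sum()
print("speaker entropy (bits):", -(p * np.log2(p)).sum())
```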

Cross-Wavelet Analysis of Speech-Gesture Synchrony (R)

This module introduces the use of Cross-Wavelet analysis as a way to measure temporal synchrony of speech and gesture (or other visual signals).
By James Trujillo
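
The module performs the cross-wavelet analysis itself in R; as a simpler, related Python sketch of speech-gesture synchrony, here is a windowed lagged cross-correlation (plainly a stand-in technique, not cross-wavelet analysis):

```python
import numpy as np

fs = 100                                            # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
speech = np.sin(2 * np.pi * 1.0 * t)                # toy speech envelope
gesture = np.sin(2 * np.pi * 1.0 * (t - 0.12))      # lags speech by 120 ms

max_lag = int(0.5 * fs)                             # search +/- 500 ms
lags = np.arange(-max_lag, max_lag + 1)
r = [np.corrcoef(speech[max_lag:-max_lag],
                 np.roll(gesture, -k)[max_lag:-max_lag])[0, 1] for k in lags]
best = lags[int(np.argmax(r))] / fs
print(f"peak correlation at lag {best:.2f} s")      # ~0.12 s: gesture trails
```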