Points of departure

Community of learners

As researchers, we often communicate the results of studies and describe our methods, but we do not always tailor that communication to teach readers how to use or reproduce those methods. This platform aims to promote literacy in, and use of, each other's methods, and everyone is welcome to contribute.

Didactic

Each contributed module presents code together with instructions on how to use it in practice. Modules are designed to introduce learners to new concepts and routines, not merely to share code among experts.

Self-ownership

Each module contributor provides the appropriate citation for their module, so that contributors are properly acknowledged when their work has been helpful to others in the community.

Build to grow

We envision this platform expanding in scope to host lectures on general theoretical and methodological frameworks, as well as a well-curated update bulletin featuring new papers and tools of interest to the community. If you have ideas and time to help out, please reach out.

Modules

Using the PyPI package envisionHGdetector for automatic hand gesture annotation (Python)

This module provides a quick demo of our new (and still experimental) PyPI package envisionHGdetector, which lets you automatically annotate hand gesture stroke events using a convolutional neural network trained on the SAGA, TEDM3D, and Zhubo datasets.
By Wim Pouw

Quantifying Interpersonal Synchrony (Python)

This module provides an introduction to calculating interpersonal movement synchrony, including time-lag assessment and pseudo-pair calculation.
By James Trujillo
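
As a taste of the core idea, here is a minimal numpy sketch of lagged cross-correlation between two movement time series; the module itself covers time-lag assessment and pseudo-pair baselines in much more depth, and all signals and parameters below are illustrative.

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Pearson correlation between x and y at every lag in -max_lag..max_lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    rs = []
    for lag in lags:
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        rs.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(rs)

# toy stand-ins for two people's movement series (e.g., body speed per frame)
rng = np.random.default_rng(0)
person_a = rng.standard_normal(500)
person_b = np.roll(person_a, 5) + 0.5 * rng.standard_normal(500)  # trails a

lags, rs = lagged_corr(person_a, person_b, max_lag=25)
print(f"peak r = {rs.max():.2f} at lag {lags[rs.argmax()]}")
# a pseudo-pair baseline repeats this for people who never actually interacted
```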

Multi-person tracking with YOLO and computing social proximity (Python)

This module uses the reliable Ultralytics YOLO pose tracking for multiple persons, from a top view or other perspectives, and shows a simple calculation of the interpersonal distance between two persons.
By Wim Pouw, Arkadiusz Białek, and James Trujillo
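
A minimal sketch of the approach using the Ultralytics API; the video file name and the hip-midpoint choice are illustrative, and the module's own pipeline differs in detail. Note that distances come out in pixels unless you calibrate to real-world units.

```python
from ultralytics import YOLO
import numpy as np

# yolov8n-pose.pt is the small pretrained pose model shipped by Ultralytics
model = YOLO("yolov8n-pose.pt")

distances = []
for r in model("dyad_topview.mp4", stream=True):   # placeholder input video
    kpts = r.keypoints.xy.cpu().numpy()            # (n_persons, 17, 2) COCO keypoints
    if len(kpts) < 2:
        distances.append(np.nan)                   # fewer than two people detected
        continue
    # torso center per person: midpoint of left (11) and right (12) hips
    torsos = kpts[:, [11, 12], :].mean(axis=1)
    distances.append(np.linalg.norm(torsos[0] - torsos[1]))  # pixel distance
print(f"mean interpersonal distance: {np.nanmean(distances):.1f} px")
```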

Visual Communication (ViCOM) tutorial with exercises: A complete kinematic feature analysis pipeline (Python)

This module contains a kinematic feature extraction pipeline with exercises for students of communicative motion analysis.
By Wim Pouw

Behavioral Classification Using Convolutional Neural Networks (Python)

This module takes you through training a model to automatically annotate bodily gestures.
By Wim Pouw
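
To illustrate the general recipe (not the module's actual architecture or data), a small 1D convolutional network over windowed kinematic features might look like this in Keras; the shapes and the random placeholder data are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# placeholder data: 200 windows of 60 frames x 6 kinematic channels,
# each labeled with one of 3 gesture classes (swap in real features/labels)
X = np.random.rand(200, 60, 6).astype("float32")
y = np.random.randint(0, 3, size=200)

model = keras.Sequential([
    keras.Input(shape=(60, 6)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2)
```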

Decision Tree-Based Classification Algorithms (R)

This module takes you through using decision trees to make sense of high-dimensional data.
By Alexander Kilpatrick
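
The module itself is written in R; purely as a language-neutral illustration of the idea, here is a scikit-learn sketch that fits a shallow decision tree on a built-in toy dataset and prints its learned split rules.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# a built-in toy dataset stands in for high-dimensional behavioral features
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")
print(export_text(tree))  # the learned splits, readable as nested if/else rules
```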

Multimodal annotation distances (Python and R)

This module takes in ELAN annotations and lets you compare the overlap between them using the multimodal-annotation-distance tool.
By Camila Antônio Barros
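
The basic quantity behind such comparisons is interval overlap. A toy sketch follows; the tier contents are made up, and the actual tool computes more refined distance measures.

```python
def overlap(a, b):
    """Overlap in seconds between two (start, end) annotation intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

# two made-up ELAN tiers, exported as (start, end) times in seconds
gesture_tier = [(0.40, 1.10), (2.00, 2.80)]
speech_tier = [(0.55, 1.00), (2.90, 3.40)]

for g in gesture_tier:
    for s in speech_tier:
        if overlap(g, s) > 0:
            print(f"gesture {g} overlaps speech {s} by {overlap(g, s):.2f} s")
```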

Creating video-embedded time series animations (Python)

This module takes in a video, and then creates movement-sound time series animations embedded in the video.
By Wim Pouw

Turn-Taking Dynamics and Entropy (Python)

This module introduces the calculation of turn-taking measures, such as gaps and overlaps, as well as entropy, from conversations with two or more speakers.
By James Trujillo
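
A minimal sketch of the gap/overlap part via the floor transfer offset (next turn's start minus the previous turn's end); the turns below are made up, and the simple speaker-distribution entropy shown here may be defined differently in the module.

```python
import math

# hypothetical turns: (speaker, start_s, end_s), sorted by start time
turns = [("A", 0.0, 2.1), ("B", 2.4, 4.0), ("A", 3.8, 5.5), ("B", 5.9, 7.0)]

# floor transfer offset per transition: next start minus previous end
# (positive values are gaps/silences, negative values are overlaps)
ftos = [nxt[1] - cur[2] for cur, nxt in zip(turns, turns[1:])]
print("FTOs (s):", [round(f, 2) for f in ftos])

# Shannon entropy of how the floor is distributed over speakers
n = len(turns)
speakers = {t[0] for t in turns}
probs = [sum(1 for t in turns if t[0] == s) / n for s in speakers]
print(f"speaker entropy: {-sum(p * math.log2(p) for p in probs):.2f} bits")
```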

Gesture networks and DTW (Python)

This module demonstrates how to implement gesture networks and gesture spaces using dynamic time warping.
By Wim Pouw
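
A bare-bones sketch of the idea: compute pairwise DTW distances between gesture trajectories (a plain textbook DTW, not the module's implementation) and store them as a weighted networkx graph; the toy trajectories are random stand-ins.

```python
import numpy as np
import networkx as nx

def dtw(a, b):
    """Plain dynamic time warping distance between two 1-D trajectories."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

# toy gesture trajectories (e.g., wrist height over time)
rng = np.random.default_rng(1)
gestures = [np.cumsum(rng.standard_normal(50)) for _ in range(5)]

# network: nodes are gestures, edges weighted by DTW distance
G = nx.Graph()
for i in range(len(gestures)):
    for j in range(i + 1, len(gestures)):
        G.add_edge(i, j, weight=dtw(gestures[i], gestures[j]))
print(nx.to_numpy_array(G, weight="weight"))
```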

Demo for OpenPose with 3D tracking with Pose2Sim (Python)

This module provides a Python pipeline for OpenPose tracking and 3D triangulation with Pose2Sim.
By Šárka Kadavá & Wim Pouw

Dynamic visualization dashboard (Python)

This module provides an example of a dynamic dashboard that displays audio-visual and static data.
By Wim Pouw
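
For a flavor of what such a dashboard involves, a minimal Plotly Dash app displaying one time series might look as follows; the data and layout are placeholders, and the module's dashboard is richer and adds audio-visual components.

```python
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# placeholder movement time series to display alongside other media
df = pd.DataFrame({"time_s": range(100),
                   "speed": [abs((t % 20) - 10) for t in range(100)]})

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Movement speed"),
    dcc.Graph(figure=px.line(df, x="time_s", y="speed")),
    # a full dashboard would add video/audio players and interactive callbacks
])

if __name__ == "__main__":
    app.run(debug=True)
```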

Full-body tracking (+masking) (Python)

This module shows how to track the face, hands, and body using MediaPipe, with the option of masking the individual in the video.
By Wim Pouw
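
A compact sketch of this combination using MediaPipe's (legacy) solutions API: Holistic for landmarks plus SelfieSegmentation for person masking. The file name is a placeholder, and the module's masking approach may differ.

```python
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic()
segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation()

cap = cv2.VideoCapture("participant.mp4")  # placeholder input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = holistic.process(rgb)            # face, hand, and pose landmarks
    if results.pose_landmarks:
        nose = results.pose_landmarks.landmark[0]  # normalized coordinates
        print(f"nose at ({nose.x:.2f}, {nose.y:.2f})")
    mask = segmenter.process(rgb).segmentation_mask > 0.5  # person vs background
    frame[mask] = (0, 0, 0)                    # black out the person
cap.release()
```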

Head rotation tracking by adapting MediaPipe (Python)

This module shows a way to track head rotation, alongside the face, hand, and body tracking provided by MediaPipe.
By Wim Pouw
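
One common way to estimate head rotation from tracked 2D face landmarks is perspective-n-point fitting against a generic 3D face model. This may or may not be how the module adapts MediaPipe, so treat the sketch below (with made-up landmark coordinates and an approximated camera matrix) purely as an illustration of the technique.

```python
import cv2
import numpy as np

# generic 3D face model points in mm: nose tip, chin, eye corners, mouth corners
model_pts = np.array([(0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
                      (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
                      (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)])

# matching 2D pixel coordinates from a face tracker (made-up values; in
# practice, take these from the MediaPipe face landmarks per frame)
image_pts = np.array([(640.0, 360.0), (635.0, 530.0), (510.0, 280.0),
                      (770.0, 280.0), (560.0, 450.0), (720.0, 450.0)])

h, w = 720, 1280  # frame size; approximate the camera matrix from it
cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=float)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, cam, None)
R, _ = cv2.Rodrigues(rvec)              # rotation vector -> rotation matrix
angles, *_ = cv2.RQDecomp3x3(R)         # approximate (pitch, yaw, roll), degrees
print("pitch/yaw/roll:", [round(float(a), 1) for a in angles])
```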

Running OpenPose in batches (Batch script)

This module demonstrates how to use batch scripting to run OpenPose on a set of videos.
By James Trujillo & Wim Pouw
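
The module uses a Windows batch script; the same loop, sketched in Python with subprocess and two standard OpenPose flags (--video, --write_json), would look roughly like this. All paths are placeholders to adjust to your own install.

```python
import subprocess
from pathlib import Path

VIDEO_DIR = Path("videos")                  # placeholder folder of input videos
OUT_DIR = Path("openpose_output")
OPENPOSE = Path("bin/OpenPoseDemo.exe")     # adjust to your OpenPose install

for video in sorted(VIDEO_DIR.glob("*.mp4")):
    out = OUT_DIR / video.stem
    out.mkdir(parents=True, exist_ok=True)
    # --display 0 and --render_pose 0 skip visualization for faster batch runs
    subprocess.run([str(OPENPOSE), "--video", str(video),
                    "--write_json", str(out),
                    "--display", "0", "--render_pose", "0"], check=True)
```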

Recording from multiple cameras synchronously while also streaming to LSL

This module demonstrates how to record from multiple cameras synchronously, which is very helpful for creating your own 3D motion tracking pipeline.
By Šárka Kadavá & Wim Pouw
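
A highly simplified sketch of the LSL side, using pylsl and OpenCV: push a marker per grabbed frame so the video can later be aligned with other LSL streams. Sequential grabbing like this is not frame-accurate synchronization (which is exactly what the module handles more carefully), and the camera indices and frame count are illustrative.

```python
import cv2
from pylsl import StreamInfo, StreamOutlet

# one LSL marker stream carrying a frame counter, so the recorded video can
# later be aligned with other LSL streams (audio, physiology, and so on)
info = StreamInfo(name="FrameClock", type="Markers", channel_count=1,
                  nominal_srate=0, channel_format="string", source_id="cams01")
outlet = StreamOutlet(info)

caps = [cv2.VideoCapture(i) for i in (0, 1)]    # two attached cameras
for frame_idx in range(300):                    # roughly 10 s at 30 fps
    frames = [cap.read()[1] for cap in caps]    # grab both cameras back to back
    outlet.push_sample([f"frame_{frame_idx}"])  # timestamped by LSL on push
    # a real pipeline would write `frames` to disk with cv2.VideoWriter
for cap in caps:
    cap.release()
```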

3D tracking from 2D videos using Anipose and DeepLabCut (Python)

This module shows how to set up a 3D motion tracking system with multiple 2D cameras, using Anipose together with human pose tracking from DeepLabCut.
By Wim Pouw

Aligning and pre-processing multiple data streams (R)

This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (smoothing) to create a single long time series dataset ready for further processing.
By Wim Pouw

Aligning and pre-processing multiple data streams (Python)

This module provides an overview of how to wrangle multiple data streams (motion tracking, acoustics, annotations) and preprocess them (smoothing) to create a single long time series dataset ready for further processing.
By Wim Pouw
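
The gist of the alignment step, sketched with pandas and SciPy on synthetic streams; the sampling rates and smoothing parameters are illustrative, and the module's actual preprocessing is more extensive.

```python
import numpy as np
import pandas as pd
from scipy.signal import savgol_filter

# two synthetic streams on different clocks: 100 Hz motion, 50 Hz audio feature
motion = pd.DataFrame({"time_s": np.arange(0, 5, 0.01)})
motion["speed"] = np.abs(np.sin(motion["time_s"]))
audio = pd.DataFrame({"time_s": np.arange(0, 5, 0.02)})
audio["envelope"] = np.abs(np.cos(audio["time_s"]))

# align on the motion timeline: nearest audio sample for each motion frame
merged = pd.merge_asof(motion, audio, on="time_s", direction="nearest")

# smooth the motion signal (Savitzky-Golay, 15-sample window, order-3 polynomial)
merged["speed_smooth"] = savgol_filter(merged["speed"], 15, 3)
print(merged.head())
```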

Extracting a smoothed amplitude envelope from sound (R)

This module demonstrates how to extract a smoothed amplitude envelope from a sound file.
By Wim Pouw
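
The module is in R, but the standard recipe is easy to sketch in Python for consistency with the other examples here: take the magnitude of the analytic signal (Hilbert transform), then low-pass it. The file name and cutoff are illustrative, and the module's parameter choices may differ.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, hilbert

sr, wav = wavfile.read("speech.wav")   # placeholder mono wav file
wav = wav.astype(float)

# amplitude envelope: magnitude of the analytic signal (Hilbert transform)
env = np.abs(hilbert(wav))

# smooth by low-pass filtering, here below 12 Hz
b, a = butter(2, 12 / (sr / 2), btype="low")
env_smooth = filtfilt(b, a, env)
```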

Motion tracking analysis: Kinematic feature extraction (Python)

This module provides an example of how to analyze motion tracking data using kinematic feature extraction.
By James Trujillo
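
A few representative features, sketched with numpy/SciPy on synthetic tracking data; the module extracts a richer feature set, and the threshold used for counting submovements below is arbitrary.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0
# synthetic wrist positions (x, y) per frame standing in for tracker output
pos = np.cumsum(np.random.default_rng(2).standard_normal((300, 2)), axis=0)

# speed: frame-to-frame displacement scaled by the sampling rate
speed = np.linalg.norm(np.diff(pos, axis=0), axis=1) * fps

features = {
    "peak_speed": speed.max(),
    "mean_speed": speed.mean(),
    # submovements: local speed maxima above a (here arbitrary) threshold
    "n_submovements": len(find_peaks(speed, height=speed.mean())[0]),
}
print(features)
```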

Feature extraction for machine classification & practice dataset SAGA (R)

This module introduces a practice dataset and provides R code for setting up a kinematic and speech acoustic feature dataset that can be used to train a machine classifier for gesture types.
By Wim Pouw

Cross-Wavelet Analysis of Speech-Gesture Synchrony (R)

This module introduces the use of Cross-Wavelet analysis as a way to measure temporal synchrony of speech and gesture (or other visual signals).
By James Trujillo
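
The module is in R; as a language-neutral illustration of the underlying computation, here is a plain-numpy sketch: Morlet wavelet transforms of both signals, multiplied as Wx * conj(Wy), whose magnitude indicates shared rhythmic energy and whose angle indicates phase lag. The signals, scales, and Morlet parameter are illustrative.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet wavelet (plain numpy)."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for k, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)
        out[k] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
    return out

fs = 100                                     # both signals sampled at 100 Hz
t = np.arange(0, 10, 1 / fs)
speech = np.sin(2 * np.pi * 4 * t)           # a 4 Hz rhythm in speech
gesture = np.sin(2 * np.pi * 4 * t + 0.5)    # same rhythm, phase-shifted

freqs = np.arange(1, 11)                     # probe 1-10 Hz
scales = fs * 6.0 / (2 * np.pi * freqs)      # Morlet scale per frequency
Wx, Wy = morlet_cwt(speech, scales), morlet_cwt(gesture, scales)

xwt = Wx * np.conj(Wy)   # cross-wavelet: |xwt| = shared power, angle = phase lag
best = np.abs(xwt).mean(axis=1).argmax()
print(f"strongest shared rhythm at ~{freqs[best]} Hz")
```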