Live Pose Detection with Machine Learning

NB: The demo works best in the Chrome browser.

This pose detection demo app shows how computer vision and machine learning algorithms can detect and track the pose of a person or object in real time, and how pose detection technology can be used in applications such as virtual reality, gaming, and human-computer interaction.

You can interact with the app by moving in front of your camera. The app draws the estimated pose as a graphical overlay on the video to the left, and the grid view to the right shows the estimated keypoints as a 3D model.

The demo gives users a hands-on feel for the power of pose detection and its potential in various fields. Pose detection runs entirely locally in your browser, in JavaScript.

We implemented pose detection in a digital training coach in the EUREKA/Eurostars project CaRe.


What is Pose Detection?

Pose detection is a computer vision technique for estimating the pose (position and orientation) of a person or object from a single image or a video stream. It works by analyzing the visual features of the image or video and matching them against a pre-defined model or template to identify the object and determine its pose. This typically involves detecting keypoints on the object, such as the joints of a human body, and fitting them to a pre-defined model to estimate the pose. The accuracy of pose detection can be improved by using machine learning algorithms, such as deep neural networks, trained to recognize poses from large amounts of training data.
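As a concrete illustration of what detected keypoints enable (for instance in a training coach), the sketch below computes the interior angle at a joint, such as an elbow, from three 2D keypoints. The `{x, y}` keypoint shape and the landmark names are assumptions for illustration, not the demo's actual code.

```javascript
// Sketch: compute the interior angle (in degrees) at a joint
// from three 2D keypoints, e.g. shoulder–elbow–wrist.
// The {x, y} keypoint shape is an assumption for illustration.
function jointAngle(a, b, c) {
  // Vectors from the middle joint (b) to its two neighbours.
  const v1 = { x: a.x - b.x, y: a.y - b.y };
  const v2 = { x: c.x - b.x, y: c.y - b.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const mag = Math.hypot(v1.x, v1.y) * Math.hypot(v2.x, v2.y);
  // Clamp to [-1, 1] to avoid NaN from floating-point rounding.
  const cos = Math.min(1, Math.max(-1, dot / mag));
  return (Math.acos(cos) * 180) / Math.PI;
}

// Example: shoulder, elbow, wrist roughly forming a right angle.
const shoulder = { x: 0, y: 0 };
const elbow = { x: 1, y: 0 };
const wrist = { x: 1, y: 1 };
console.log(jointAngle(shoulder, elbow, wrist)); // ≈ 90
```

A coach application could compare such angles against target ranges for an exercise and give corrective feedback.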

This demo was built with Google’s MediaPipe Pose project.
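MediaPipe Pose reports landmarks with coordinates normalized to the image size, plus a per-landmark visibility score, so a typical first step before drawing the overlay is converting them to pixel coordinates. The sketch below assumes that documented `{x, y, visibility}` shape; the visibility threshold is an arbitrary choice for illustration.

```javascript
// Sketch: convert normalized pose landmarks (x, y in [0, 1], as
// MediaPipe Pose reports them) to pixel coordinates for drawing,
// dropping low-confidence points. The 0.5 threshold is arbitrary.
function toPixels(landmarks, width, height, minVisibility = 0.5) {
  return landmarks
    .filter((lm) => lm.visibility === undefined || lm.visibility >= minVisibility)
    .map((lm) => ({ x: lm.x * width, y: lm.y * height }));
}

// Example with two hypothetical landmarks on a 640x480 frame.
const landmarks = [
  { x: 0.5, y: 0.5, visibility: 0.9 }, // kept: centre of the frame
  { x: 0.1, y: 0.2, visibility: 0.1 }, // dropped: low visibility
];
console.log(toPixels(landmarks, 640, 480)); // [ { x: 320, y: 240 } ]
```

The resulting pixel coordinates can be drawn directly onto a canvas overlaid on the video element.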