Pose-Detection Demos on the Web
Author: Primo
This package contains some demos I collected that use state-of-the-art models to run real-time pose detection
in your browser.
Hint: since most of the models in this package are provided by Google, make sure you are able to bypass the GFW so that the model files can be downloaded.
Before we start, let's take a brief look at how pose detection works in web browsers.
Above all, every ingenious effect in this package is achieved with machine learning. In brief, machine learning is a way of making a computer understand something by teaching it: you tell the computer that 'A' is 'A' and 'B' is not 'A' many, many times by feeding it countless examples, and we call that collection of examples a dataset. The computer then learns from the dataset via some algorithm. After the learning process, if you give the computer new information, it will classify it and can tell you that 'A' is 'A' with a high probability.
These days we have many platforms for building machine learning models; TensorFlow (Google) and PyTorch (Facebook/Meta) are popular ones among them. Both require Python skills to create and apply models. For digital art designers, game developers, and software engineers (especially front-end engineers) who do not have strong mathematics or Python programming skills, however, those platforms have a steep learning curve.
To make machine learning easier for developers, Google released a JavaScript library, TensorFlow.js. It makes it possible for us to create machine learning models in JavaScript, in other words, on the Web (Node.js is also supported). Since the Web is available on almost every device, we can 'code once, run everywhere'.
To simplify the process further, we have ml5.js. It is built on top of TensorFlow.js and pairs closely with p5.js, another JS library (Processing on the Web). So you can consider both ml5.js and p5.js as designed for us digital-art students and for anyone new to applying AI.
01-Posenet in P5.js
TensorFlow.js provides three pose-detection models: MoveNet, BlazePose and PoseNet. They were developed by different research teams at Google.
In ml5.js, we use the PoseNet model.
PoseNet is trained on the COCO dataset (Microsoft Common Objects in Context), a large-scale object detection, segmentation, keypoint detection, and captioning dataset consisting of 328K images.
As a result, PoseNet uses a keypoint layout called the COCO Keypoints, and so does MoveNet.
The keypoint data is stored in an array, and the indices are:
0: nose
1: leftEye
2: rightEye
3: leftEar
4: rightEar
5: leftShoulder
6: rightShoulder
7: leftElbow
8: rightElbow
9: leftWrist
10: rightWrist
11: leftHip
12: rightHip
13: leftKnee
14: rightKnee
15: leftAnkle
16: rightAnkle
In ml5.js, once a 'pose' is detected in the image or video, we get an object. First, it has a score that stands for the overall confidence. Then there is an array called 'keypoints'.
The 17 points are stored in this array in the order above, and every point has a score, a 'part' name and a position coordinate. In the position, x and y are the actual pixel coordinates of the keypoint in the image, which is different from BlazePose. Besides the array, each body part is also stored directly on the object, so if you want the x coordinate of the nose, just use let x = pose.nose.x.
The pose object looks roughly like this (the numbers are just illustrative):
{
  score: 0.92,
  keypoints: [
    { score: 0.99, part: "nose", position: { x: 301.5, y: 123.7 } },
    { score: 0.98, part: "leftEye", position: { x: 314.2, y: 110.3 } },
    // ...15 more keypoints, in the order listed above
  ],
  nose: { x: 301.5, y: 123.7, confidence: 0.99 },
  leftEye: { x: 314.2, y: 110.3, confidence: 0.98 },
  // ...one property per body part
}
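Here is a minimal p5.js sketch of how the demo roughly works (it follows the standard ml5 PoseNet API, so the actual demo code may differ in details; it assumes p5.js and ml5.js are loaded via script tags):

let video;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load the PoseNet model; the callback fires when it is ready
  const poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  // Every time poses are detected, store the results
  poseNet.on('pose', (results) => { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  for (const { pose } of poses) {
    // Draw every keypoint whose confidence is high enough
    for (const kp of pose.keypoints) {
      if (kp.score > 0.2) {
        fill(255, 0, 0);
        noStroke();
        ellipse(kp.position.x, kp.position.y, 10, 10);
      }
    }
  }
}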
02-Handpose in P5.js
Just like PoseNet, we get an object from the model once a hand is detected.
First, the object has an array called 'landmarks' that stores the coordinates of the hand's 21 keypoints, in order.
Pay attention that every coordinate has three values, [x, y, z]: [x, y] is the location of the point in the 2-D image plane, and z is a depth value, which means you can also use the keypoints in 3-D space.
Besides that array, the object contains another object called 'annotations', which stores each finger's keypoints.
So if you want to query the 'thumb', just use let thumb = object.annotations.thumb.
The prediction object looks roughly like this:
{
  handInViewConfidence: 0.98,                       // confidence that a hand is present
  boundingBox: { topLeft: [x, y], bottomRight: [x, y] },
  landmarks: [ [x, y, z], /* ...21 points in total */ ],
  annotations: {
    thumb: [ [x, y, z], /* 4 points from base to tip */ ],
    indexFinger: [ /* 4 points */ ],
    middleFinger: [ /* 4 points */ ],
    ringFinger: [ /* 4 points */ ],
    pinky: [ /* 4 points */ ],
    palmBase: [ [x, y, z] ]
  }
}
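And here is a minimal p5.js sketch that draws the 21 landmarks, again following the standard ml5 Handpose API (the demo's own code may differ slightly):

let video;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load the Handpose model and listen for predictions
  const handpose = ml5.handpose(video, () => console.log('Handpose ready'));
  handpose.on('predict', (results) => { predictions = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  for (const hand of predictions) {
    // landmarks is an array of 21 [x, y, z] points
    for (const [x, y] of hand.landmarks) {
      fill(0, 255, 0);
      noStroke();
      ellipse(x, y, 8, 8);
    }
  }
}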
03-Handpose+Posenet in P5.js
Now we can combine the two models, so we get both the body's pose data and one hand's data.
With this data, we can make some simple classifications.
My demo can recognize which hand is being detected by comparing the distance between the palm (Handpose) and each wrist (PoseNet); see the sketch below.
We can also use each finger's coordinates to judge whether it is bent; if it is, that finger's circle in the image turns red.
By counting which fingers are bent, we can recognize simple gestures like '1, 2, 3, 4, 5, 6' without any machine learning algorithm.
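Here is a rough sketch of the two heuristics (the function names and the choice of joints are mine; the actual demo may compute this a bit differently):

function dist2D(ax, ay, bx, by) {
  return Math.hypot(ax - bx, ay - by);
}

// Decide which hand Handpose is seeing by comparing the palm base
// with PoseNet's two wrist keypoints.
function whichHand(pose, hand) {
  const [px, py] = hand.annotations.palmBase[0];
  const dLeft = dist2D(px, py, pose.leftWrist.x, pose.leftWrist.y);
  const dRight = dist2D(px, py, pose.rightWrist.x, pose.rightWrist.y);
  return dLeft < dRight ? 'left' : 'right';
}

// A finger counts as "bent" when its tip ends up closer to the palm base
// than one of its middle joints.
function isBent(hand, finger) {            // finger: 'thumb', 'indexFinger', ...
  const joints = hand.annotations[finger]; // 4 points from base to tip
  const [px, py] = hand.annotations.palmBase[0];
  const [tx, ty] = joints[3];              // fingertip
  const [mx, my] = joints[1];              // a middle joint
  return dist2D(px, py, tx, ty) < dist2D(px, py, mx, my);
}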
04-train_own_model
Besides judging gestures from the keypoints' positional relationships, you can also use machine learning to classify a gesture.
In ml5.js, it is not very difficult to train your own model.
I’ve written a demo in 04-train-own_model.
The model is set up to learn 4 different gestures. If you want to train more, find the 'options' object in each JavaScript file and change the 'outputs' number to whatever you want (see the sketch below).
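For reference, such an options object in ml5.js typically looks like this (the exact values in the demo may differ):

const options = {
  inputs: 34,             // 17 PoseNet keypoints × (x, y)
  outputs: 4,             // number of gestures to classify; change this to train more
  task: 'classification',
  debug: true
};
const brain = ml5.neuralNetwork(options);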
First, open the HTML page in the 'collect' folder. Press any key on your keyboard except 's', and the 'waiting' text on the screen will turn to 'ready'. Get ready to show your gesture in front of your webcam; then 'ready' will turn to 'collecting'.
The program then starts collecting your body's keypoint data.
After a while the text turns back to 'waiting' and you can press another key. Once you have pressed 4 keys (or however many gestures you chose), press 's' and you will get a JSON file that stores the data.
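Roughly, the collect step works like this with ml5's neuralNetwork API (a sketch; the variable names are mine and the demo's code may differ):

// Each frame while "collecting", flatten the 17 keypoint positions into one
// input array and store it together with the current gesture label.
function addExample(pose, label) {
  const inputs = [];
  for (const kp of pose.keypoints) {
    inputs.push(kp.position.x, kp.position.y);
  }
  brain.addData(inputs, [label]);   // label = the key you pressed
}

function keyPressed() {
  if (key === 's') {
    brain.saveData('gestures');     // downloads a JSON file with all the examples
  }
}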
Rename the JSON file and drag it into the 'train' folder.
Change your working directory to 'train' and open 'index.js'. In line 12, change the JSON file name to the one you just chose, then run the HTML file and the model will start training.
Wait a minute, and you will get three files.
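The train step boils down to something like this (a sketch; the file name and epoch count are placeholders):

brain.loadData('your-renamed-data.json', () => {
  brain.normalizeData();
  brain.train({ epochs: 50 }, () => {
    // Saving produces the three files mentioned above:
    // model.json, model.weights.bin and model_meta.json
    brain.save();
  });
});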
Switch to the 'classify' folder and put those three files into its 'model' folder.
Run the HTML file and make some poses; the computer will classify them with the model you trained!
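The classify step loads those files back and asks the network for a label, roughly like this (the file names assume ml5's default output; classifyPose is my own helper name):

const modelInfo = {
  model: 'model/model.json',
  metadata: 'model/model_meta.json',
  weights: 'model/model.weights.bin'
};
brain.load(modelInfo, () => console.log('model loaded'));

function classifyPose(pose) {
  const inputs = [];
  for (const kp of pose.keypoints) {
    inputs.push(kp.position.x, kp.position.y);
  }
  brain.classify(inputs, (error, results) => {
    if (!error) {
      // results[0].label is the gesture with the highest confidence
      console.log(results[0].label, results[0].confidence);
    }
  });
}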
05-Blazepose_P5.js
BlazePose is another model, released by MediaPipe, a research team at Google. You can use it in either the TensorFlow.js or the MediaPipe runtime.
BlazePose's keypoint set is a superset of COCO, BlazeFace and BlazePalm.
MediaPipe BlazePose can detect 33 keypoints: in addition to the 17 COCO keypoints, it provides extra keypoints for the face, hands and feet.
Each object in the 33-element array contains four values. Unlike PoseNet's data, BlazePose's x and y coordinates are normalized, so they are always in the [0, 1] range; when you want to draw a keypoint on your screen, you have to multiply x and y by the width and height of the canvas. (The model also outputs a separate set of real-world 3-D coordinates in meters, with the origin at the center between the hips.) The fourth value, named 'visibility', stands for the confidence score.
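Drawing a keypoint in p5.js then looks roughly like this (assuming a landmark object with normalized x, y and a visibility score, as described above):

function drawKeypoint(kp) {
  if (kp.visibility > 0.5) {     // skip points the model can barely see
    const x = kp.x * width;      // scale the [0, 1] range up to canvas pixels
    const y = kp.y * height;
    fill(255, 0, 0);
    noStroke();
    ellipse(x, y, 10, 10);
  }
}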
Though BlazePose can only detect one person, in my opinion it is much more stable than PoseNet.
06-Blazepose
This demo shows how it works in 3-D space. It uses the HTML5 canvas without p5.js, and you can attach the data to more sophisticated frameworks like Three.js and Babylon.js.