
Face Detection on the Web with Face-api.js

Christopher Nolan
Release: 2025-02-09 09:46:12


Web browsers are becoming increasingly powerful, and the complexity of websites and web applications is growing along with them. Tasks that required a supercomputer a few decades ago now run on a smartphone. Face detection is one of them.

Face detection and analysis capabilities are useful because they let us add clever features. Think of automatically blurring faces (like Google Maps does), panning and zooming webcam footage to focus on people (like Microsoft Teams), verifying passports, adding silly filters (like Instagram and Snapchat), and much more. But before we can do any of that, we first need to find the face!

face-api.js is a library that enables developers to use face detection in their applications without requiring a background in machine learning.

The code for this tutorial can be found on GitHub.

Key Points

  • face-api.js is a library that enables developers to use face detection in their web applications without requiring a background in machine learning. It can detect faces, estimate various features of a face, and even identify who the person in a photo is.
  • This library uses TensorFlow.js, a popular JavaScript machine learning library for creating, training, and using neural networks in a browser. However, it encapsulates all of this in an intuitive API, making it easy to use.
  • face-api.js is perfect for hobby projects, experiments, and even MVPs. However, running detection in the browser can affect performance, and developers may need to trade bandwidth and performance against accuracy.
  • Although face-api.js is a powerful tool, it should be noted that artificial intelligence is good at amplifying bias. Therefore, developers should use this technology with caution and use a diverse test group for thorough testing.

Face detection using machine learning

Detecting objects (such as human faces) is quite complicated. Think about it: maybe we could write a program that scans pixels to find eyes, noses, and mouths. It can be done, but making it completely reliable is practically impossible, because there are so many factors to consider: lighting conditions, facial hair, the huge variety of shapes and colors, makeup, angles, masks, and much more.

Neural networks, however, excel at this kind of problem and can generalize to most, if not all, of these conditions. We can create, train, and use neural networks in the browser with TensorFlow.js, a popular JavaScript machine learning library. However, even with an off-the-shelf, pretrained model, we would still need to dig into how to feed data to TensorFlow and how to interpret its output. If you're interested in the technical details of machine learning, check out an introductory guide to machine learning with Python.

This is where face-api.js comes in. It wraps all of this in an intuitive API. We can pass an img, canvas, or video DOM element, and the library will return one result or a set of results. face-api.js can detect faces, but also estimate various things about them, as listed below.

  • Face detection: get the boundaries of one or multiple faces. This is useful for determining where and how big a face is in a picture.
  • Face landmark detection: get the position and shape of the eyebrows, the eyes, the nose, the mouth and lips, and the chin. This can be used to determine the orientation of a face, or to project graphics onto specific areas, such as a mustache between the nose and lips.
  • Face recognition: determine who is in the photo.
  • Facial expression detection: get the expression from a face. Note that the results may vary across cultures.
  • Age and gender detection: get the age and gender from a face. Note that the "gender" classification classifies a face as feminine or masculine, which doesn't necessarily reveal the person's gender.

Before you use any of this outside of experiments, note that artificial intelligence excels at amplifying biases. Gender classification works well for cisgender people, but it can't detect the gender of my nonbinary friends. It recognizes white people most of the time, but frequently fails to detect people of color.

Be very thoughtful about using this technology and test thoroughly with a diverse test group.

Installation

We can install face-api.js through npm:

<code>npm install face-api.js
</code>

However, to skip setting up build tools, I'll include the UMD build via unpkg.org:

<code>/* globals faceapi */
import 'https://unpkg.com/face-api.js@0.22.2/dist/face-api.min.js';
</code>
After that, we need to download the correct pretrained models from the library's repository. Determine what we want to know about the faces and check the Available Models section to see which models are required. Some features work with multiple models; in that case, we have to choose between bandwidth/performance and accuracy. Compare the file sizes of the available models and pick the one you think fits your project best.

Unsure which models you need? You can return to this step later. Using the API without loading the required models throws an error stating which model the library expects.


We can now use the face-api.js API.

Example

Let's build something!

For the following example, I will use this function to load a random image from Unsplash Source:

<code>function loadRandomImage() {
  const image = new Image();

  image.crossOrigin = true;

  return new Promise((resolve, reject) => {
    image.addEventListener('error', (error) => reject(error));
    image.addEventListener('load', () => resolve(image));
    image.src = 'https://source.unsplash.com/512x512/?face,friends';
  });
}
</code>
Cropping pictures

You can find the code for this demo in the included GitHub repository.

First, we have to select and load the model. To crop an image, we only need to know the bounding box of a face, so face detection is enough. We can use one of two models: the SSD Mobilenet v1 model (just under 6MB) or the Tiny Face Detector model (under 200KB). Let's say accuracy isn't critical here, since users can still crop manually, and let's assume visitors use this feature on a slow internet connection. Because our focus is on bandwidth and performance, we'll choose the smaller Tiny Face Detector model.

After downloading the model, we can load it:

<code>await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
</code>

We can now load the image and pass it to face-api.js. faceapi.detectAllFaces uses the SSD Mobilenet v1 model by default, so we have to explicitly pass new faceapi.TinyFaceDetectorOptions() to force it to use the Tiny Face Detector model.

<code>const image = await loadRandomImage();
const faces = await faceapi.detectAllFaces(image, new faceapi.TinyFaceDetectorOptions());
</code>

The faces variable now contains an array of results. Each result has a box and a score property. The score indicates how confident the neural network is that the result is indeed a face. The box property is an object containing the coordinates of the face. We could select the first result (or use faceapi.detectSingleFace()), but if a user submits a group photo, we want to see all the faces in the cropped image. To do that, we can calculate a custom bounding box:

<code>const box = {
  // Set the boundaries to their respective inverse infinities,
  // so any number is guaranteed to be larger/smaller
  bottom: -Infinity,
  left: Infinity,
  right: -Infinity,
  top: Infinity,

  // Given the boundaries, we can compute width and height
  get height() {
    return this.bottom - this.top;
  },

  get width() {
    return this.right - this.left;
  },
};

// Update the bounding box
for (const face of faces) {
  box.bottom = Math.max(box.bottom, face.box.bottom);
  box.left = Math.min(box.left, face.box.left);
  box.right = Math.max(box.right, face.box.right);
  box.top = Math.min(box.top, face.box.top);
}
</code>

Finally, we can create a canvas and display the result:

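The exact drawing code is in the linked repository; a minimal sketch of this step, using the combined box computed above, might look like the following (appending the canvas to document.body is an assumption about where the result should go):

<code>const canvas = document.createElement('canvas');
canvas.width = box.width;
canvas.height = box.height;

const context = canvas.getContext('2d');

// Draw only the region inside the combined bounding box onto the new canvas
context.drawImage(
  image,
  box.left, box.top, box.width, box.height,
  0, 0, box.width, box.height,
);

document.body.appendChild(canvas);
</code>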

Placing emoji

You can find the code for this demo in the included GitHub repository.

Why not have a little fun? We can make a filter that places a mouth emoji (👄) over all eyes. To find the eye landmarks, we need another model. This time we do care about accuracy, so we use the SSD Mobilenet v1 model together with the 68-point face landmark detection model.

Like before, we first need to load the models and the image:

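The original snippet isn't reproduced here, but based on the description it boils down to loading the two models and fetching an image (the /models path mirrors the cropping demo):

<code>// Load the accurate detector and the 68-point landmark model
await faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
await faceapi.nets.faceLandmark68Net.loadFromUri('/models');

const image = await loadRandomImage();
</code>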

To get the landmarks, we append the withFaceLandmarks() call to detectAllFaces():

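Using the default SSD Mobilenet v1 detector, that call looks roughly like this:

<code>const faces = await faceapi.detectAllFaces(image).withFaceLandmarks();
</code>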

Like last time, faces contains a list of results. Besides the position of the face, each result also contains a raw list of landmark points. To get the right points for each facial feature, we need to slice that list. Because the number of points is always the same, I chose hardcoded indices:

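The original slicing code isn't shown here; the sketch below follows the standard 68-point landmark layout, and the helper name and feature names are only illustrative:

<code>// Slice the raw 68 landmark points into named facial features
function getFeatures(face) {
  const points = face.landmarks.positions;

  return {
    jaw: points.slice(0, 17),
    eyebrowLeft: points.slice(17, 22),
    eyebrowRight: points.slice(22, 27),
    nose: points.slice(27, 36),
    eyeLeft: points.slice(36, 42),
    eyeRight: points.slice(42, 48),
    mouth: points.slice(48, 68),
  };
}
</code>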

Now we can start drawing emoji on the picture. Since we have to do this for both eyes, we can put features.eyeLeft and features.eyeRight in an array and iterate over them, running the same code for each eye. All that's left is to draw the emoji onto the canvas!

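Again, the exact code lives in the repository; a sketch of the drawing step, reusing the getFeatures() helper from above, could look like this (the 1.5 scale factor stands in for the magic numbers mentioned below):

<code>const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;

const context = canvas.getContext('2d');
context.drawImage(image, 0, 0);
context.textAlign = 'center';
context.textBaseline = 'middle';

for (const face of faces) {
  const features = getFeatures(face);

  for (const eye of [features.eyeLeft, features.eyeRight]) {
    // Compute a bounding box around the eye's landmark points
    const xs = eye.map((point) => point.x);
    const ys = eye.map((point) => point.y);
    const width = Math.max(...xs) - Math.min(...xs);
    const center = {
      x: (Math.min(...xs) + Math.max(...xs)) / 2,
      y: (Math.min(...ys) + Math.max(...ys)) / 2,
    };

    // Scale the emoji roughly to the size of the eye and draw it
    context.font = `${Math.round(width * 1.5)}px sans-serif`;
    context.fillText('👄', center.x, center.y);
  }
}

document.body.appendChild(canvas);
</code>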

Note that I used some magic numbers to tweak the font size and the exact text position. Because emoji are unicode and typography on the web is weird (to me, at least), I just adjusted the numbers until they looked about right. A more robust alternative would be to use an image as an overlay.
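For example, a hypothetical overlay version would swap fillText() for drawImage() with a transparent PNG (the file name and sizing below are placeholders):

<code>const overlay = new Image();
overlay.src = '/images/mouth.png'; // hypothetical asset

await new Promise((resolve, reject) => {
  overlay.addEventListener('load', resolve);
  overlay.addEventListener('error', reject);
});

// Inside the per-eye loop, instead of fillText():
context.drawImage(overlay, center.x - width, center.y - width / 2, width * 2, width);
</code>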


Summary

face-api.js is a great library that makes face detection and recognition truly accessible. No familiarity with machine learning or neural networks is required. I love tools that put capabilities like this within reach, and this one definitely does.

In my experience, face detection on the web can hurt performance. We have to choose between bandwidth and performance on one hand and accuracy on the other. Smaller models are less accurate and will miss faces under some of the conditions mentioned earlier, such as poor lighting or a partially covered face.

Microsoft Azure, Google Cloud, and probably other vendors offer face detection in the cloud. Because we avoid downloading large models, cloud-based detection keeps page loads light; it also tends to be more accurate, since the models are continually improved, and it may even be faster because the hardware is optimized. If you need high accuracy, you may want to look into a plan you're comfortable with.

I definitely recommend face-api.js for hobby projects, experiments, and maybe an MVP.

FAQs about face-api.js

What is face-api.js and how does it work?

face-api.js is a JavaScript library that uses TensorFlow.js to perform face detection, face recognition, and face landmark detection in the browser. It detects and recognizes faces in images or live video streams using machine learning models, and it provides APIs for tasks such as detecting all faces in an image, recognizing a specific person's face, and locating facial features such as the eyes, nose, and mouth.

How do I install and use face-api.js in my project?

To install face-api.js, you can use npm or yarn. Once installed, import the library into your project and start using its API. The library comes with a comprehensive set of examples and tutorials to get you started.

Can face-api.js be used for real-time face detection and recognition?

Yes. face-api.js can process video streams and detect or recognize faces in real time, which makes it suitable for applications such as monitoring, video conferencing, and interactive installations.
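As a rough illustration, a real-time loop over a webcam stream might look like the sketch below (the video element, model path, and choice of the Tiny Face Detector are assumptions):

<code>const video = document.querySelector('video');
video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
await video.play();

await faceapi.nets.tinyFaceDetector.loadFromUri('/models');

async function detectLoop() {
  // Detect faces in the current video frame, then schedule the next pass
  const faces = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions());
  console.log(`Detected ${faces.length} face(s) in this frame`);
  requestAnimationFrame(detectLoop);
}

detectLoop();
</code>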

What are the system requirements for using face-api.js?

face-api.js requires a modern browser with WebGL and WebAssembly support. Face detection and recognition are compute-intensive tasks, so a reasonably powerful CPU and GPU help, but the exact requirements depend on the specific use case and the number of faces being processed.

How accurate is face-api.js at detecting and recognizing faces?

The accuracy of face-api.js depends on several factors, including the quality of the input image or video, lighting conditions, and the pose of the face. That said, the library uses machine learning models trained on large datasets, so it can achieve high accuracy under most conditions.

Can face-api.js detect and recognize faces under different lighting conditions and poses?

Yes, face-api.js can detect and recognize faces under a wide range of lighting conditions and poses. However, like all machine learning models, its performance may degrade under extreme lighting or unusual poses.

Can face-api.js be used for commercial projects?

Yes, face-api.js is open source and can be used in personal and commercial projects. As always, check the license terms before using any open-source library in a commercial project.

How can I improve the performance of face-api.js in my application?

There are several ways to improve the performance of face-api.js. One is to optimize the input image or video, for example by reducing its resolution or converting it to grayscale. Another is to tune the parameters of the face detection and recognition options.

Can face-api.js recognize faces in images or videos containing multiple people?

Yes, face-api.js can detect and recognize multiple faces in the same image or video. Its API returns an array of detected faces, each with its own bounding box and recognition result.

Can face-api.js be used with other JavaScript libraries or frameworks?

Yes, face-api.js can be used with any JavaScript library or framework. It is designed to be flexible and easy to integrate into existing projects.

