How To Compare Two Faces For Similarity Using face-api.js In React
Face recognition with face-api.js in a React application
In one of my recent projects, I had to build KYC-related functionality: something that compares the face on the customer’s uploaded identification card with the face in a selfie shot through the app. There are various software tools and libraries for facial recognition, but after researching them, I chose to utilise the face-api.js library because it is open-source and easy to use in the browser.
In this article, I will demonstrate how you can use the face-api.js JavaScript library to detect and compare faces for similarities.
Prerequisites
- This article assumes the reader has a basic understanding of React and React hooks (useEffect, useRef, etc.).
- Any Node package manager (npm, yarn, etc.).
- A basic understanding of what a machine learning model is.
What is face-api.js?
Face-api.js is a JavaScript API built on top of TensorFlow.js for face detection and face recognition in the browser and in Node.js. It implements convolutional neural networks (CNNs) and ships with pre-trained models, so it does not require you to train anything yourself.
Getting Started
To prevent the article from getting too long, I won’t delve into all the specifics of taking a picture or uploading an image with React. Our programme will have just two basic image elements: one for the selfie image, and the other for the ID card image.
We’ll start by creating a new React application called regtech using the npx command below.
$ npx create-react-app regtech
The next step is to install face-api.js inside the project directory using any Node package manager.
$ npm install face-api.js
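If you prefer yarn, the equivalent command is:
$ yarn add face-api.js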
We’re going to remove all the boilerplate code and replace the contents of App.js with just two image elements, and we’ll create two variables for accessing those elements directly using the useRef hook.
import { useRef } from 'react';
import './App.css';

function App() {
  // Refs for accessing the two image elements directly
  const idCardRef = useRef();
  const selfieRef = useRef();

  return (
    <>
      <div className="gallery">
        <img ref={idCardRef} src={require('./images/id-card.png')} alt="ID card" height="auto" />
      </div>
      <div className="gallery">
        <img ref={selfieRef} src={require('./images/selfie.webp')} alt="Selfie" height="auto" />
      </div>
    </>
  );
}

export default App;
Loading the Models
Kindly visit the face-api.js GitHub repository and download all the models from its weights directory. We’ll create a models directory inside the public directory of our project and copy all the downloaded files into it.
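After copying, public/models should contain a weights manifest plus one or more shard files for each model we load below. As a rough sketch of what to expect (file names taken from the repository’s weights folder at the time of writing; verify against the repo, as they may change):

public/models
├── ssd_mobilenetv1_model-weights_manifest.json
├── ssd_mobilenetv1_model-shard1
├── ssd_mobilenetv1_model-shard2
├── tiny_face_detector_model-weights_manifest.json
├── tiny_face_detector_model-shard1
├── face_landmark_68_model-weights_manifest.json
├── face_landmark_68_model-shard1
├── face_recognition_model-weights_manifest.json
├── face_recognition_model-shard1
├── face_recognition_model-shard2
├── face_expression_model-weights_manifest.json
└── face_expression_model-shard1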
The next step is to import face-api.js into our code and then load the models asynchronously.
import { useEffect, useRef } from 'react';
import * as faceapi from 'face-api.js';
import './App.css';

function App() {
  ...

  useEffect(() => {
    (async () => {
      // Load the models from the public/models directory
      await faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
      await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
      await faceapi.nets.faceLandmark68Net.loadFromUri('/models');
      await faceapi.nets.faceRecognitionNet.loadFromUri('/models');
      await faceapi.nets.faceExpressionNet.loadFromUri('/models');
    })();
  }, []);

  return (
    ...
  );
}

export default App;
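The calls above load the models one after another. Since each load is independent of the others, you could also start them all at once and wait for them together; a minimal sketch with Promise.all, using the same URIs:

// Load all five models in parallel; resolves once every model is ready
await Promise.all([
  faceapi.nets.ssdMobilenetv1.loadFromUri('/models'),
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models'),
]);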
Face detection
Before we compare the faces from the two images for similarities, we’ll first have to check that the images actually contain human faces, using the faceapi.detectSingleFace() method. In our code sample below, faceapi.detectSingleFace() takes one required argument for the input image and a second optional argument for specifying a face detector other than the SSD Mobilenet V1 detector it uses by default. You can check the face-api.js documentation for more details.
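To make the default explicit, here is a quick sketch of both variants (illustrative only; our final code below uses the Tiny Face Detector):

// Default detector (SSD Mobilenet V1): no options argument needed
const detection = await faceapi.detectSingleFace(selfieRef.current);

// Explicitly opt into the Tiny Face Detector (faster, but less accurate)
const tinyDetection = await faceapi.detectSingleFace(
  selfieRef.current,
  new faceapi.TinyFaceDetectorOptions()
);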
(Optional) If faces are detected by the model, we’ll replace the input images in the image elements with just the detected faces. I wrote a renderFace() method for doing exactly that.
import { useEffect, useRef } from 'react';
import * as faceapi from 'face-api.js';
import './App.css';

function App() {
  const idCardRef = useRef();
  const selfieRef = useRef();
  const isFirstRender = useRef(true);

  // Crop the detected face out of the image element and
  // replace the element's source with just the cropped face
  const renderFace = async (image, x, y, width, height) => {
    const canvas = document.createElement('canvas');
    canvas.width = width;
    canvas.height = height;
    const context = canvas.getContext('2d');
    context?.drawImage(image, x, y, width, height, 0, 0, width, height);
    canvas.toBlob((blob) => {
      if (blob) {
        image.src = URL.createObjectURL(blob);
      }
    }, 'image/jpeg');
  };

  useEffect(() => {
    // Skip the first effect invocation; with React 18's StrictMode in
    // development, effects run twice on mount, so the work happens on
    // the second run
    if (isFirstRender.current) {
      isFirstRender.current = false; // toggle flag after the first run
      return;
    }
    (async () => {
      // Loading the models
      await faceapi.nets.ssdMobilenetv1.loadFromUri('/models');
      await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
      await faceapi.nets.faceLandmark68Net.loadFromUri('/models');
      await faceapi.nets.faceRecognitionNet.loadFromUri('/models');
      await faceapi.nets.faceExpressionNet.loadFromUri('/models');

      // Detect a single face from the ID card image
      const idCardFacedetection = await faceapi
        .detectSingleFace(idCardRef.current, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks()
        .withFaceDescriptor();

      // Detect a single face from the selfie image
      const selfieFacedetection = await faceapi
        .detectSingleFace(selfieRef.current, new faceapi.TinyFaceDetectorOptions())
        .withFaceLandmarks()
        .withFaceDescriptor();

      // (OPTIONAL) If a face was detected from the ID card image,
      // call our renderFace() method to display the detected face
      if (idCardFacedetection) {
        const { x, y, width, height } = idCardFacedetection.detection.box;
        renderFace(idCardRef.current, x, y, width, height);
      }

      // (OPTIONAL) If a face was detected from the selfie image,
      // call our renderFace() method to display the detected face
      if (selfieFacedetection) {
        const { x, y, width, height } = selfieFacedetection.detection.box;
        renderFace(selfieRef.current, x, y, width, height);
      }
    })();
  }, []);

  return (
    <>
      ...
    </>
  );
}

export default App;
If we run our code, the two image elements will now display just the cropped faces detected in each image.
Face Comparison/Similarity
Face-api.js uses Euclidean distance to determine the similarity between face descriptors. When you detect a face with the faceapi.detectSingleFace() method, it returns an object containing a descriptor of the detected face. The descriptors of the two faces you’re trying to compare are passed as parameters to the faceapi.euclideanDistance() method, which returns a distance that helps you determine whether the faces are similar to each other or not.
The lower the distance, the more likely the faces are to be similar. In our case, it returned 0.5029092815091631. As a rule of thumb, face-api.js treats a distance below 0.6 as a match; that is the default threshold of its FaceMatcher. You can log the distance in the console or use it anywhere you want in the app.
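Under the hood, the computation itself is simple. Here is a minimal sketch of what faceapi.euclideanDistance() computes, assuming its two arguments are the 128-dimensional Float32Array descriptors the recognition model produces:

// The square root of the summed squared differences between descriptors
function euclideanDistance(d1, d2) {
  let sum = 0;
  for (let i = 0; i < d1.length; i++) {
    const diff = d1[i] - d2[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}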
...
/**
 * Do the face comparison only when faces were detected in both images
 */
if (idCardFacedetection && selfieFacedetection) {
  // Use Euclidean distance to compare the face descriptors
  const distance = faceapi.euclideanDistance(idCardFacedetection.descriptor, selfieFacedetection.descriptor);
  console.log(distance);
}
...
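To turn the raw distance into an accept/reject decision, compare it against a threshold. A minimal sketch, using 0.6 (the FaceMatcher default mentioned above) as a hypothetical starting point; tune it for your own false-accept/false-reject trade-off:

// Hypothetical threshold; 0.6 is the default used by faceapi.FaceMatcher
const THRESHOLD = 0.6;
const isMatch = distance < THRESHOLD;
console.log(isMatch ? 'Faces match' : 'Faces do not match');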
The entire code can be found here on GitHub.
Conclusion
In this article, we used the face-api.js JavaScript library to detect faces in images and compare them for similarities. I hope you found this useful.
Credits
- face-api.js documentation: https://justadudewhohacks.github.io/face-api.js/docs/index.html
- face-api.js GitHub repository: https://github.com/justadudewhohacks/face-api.js/
- How to face-api in React (Medium): https://arnav25.medium.com/how-to-face-api-in-react-953cfc70d6d