Learn how to deploy a face blurring algorithm at scale with the Syndicai platform, without any configuration or infrastructure setup.
Our face is the most fundamental and highly visible element of our identity. People recognize us when they see our face or a photo of it. Under the GDPR, a European Union regulation, face images are categorized as sensitive data and must be protected.
However, protecting visual data is not trivial, and we are often not aware of how important it is. When we talk about private data, we mostly think of GPS locations and cookies, while images seem less relevant. For instance, when playing with amazing Facebook facial filters, hardly anyone cares that those videos are stored somewhere; at that moment we only think about how nice we look ;)
As the amount of data being processed grows, we must think about our privacy. From a technological point of view, there are tools and algorithms that help keep data private when it is used by AI or consumed by marketing platforms. One of them is the face blurring algorithm that we will explore in the following tutorial.
After going through the development, deployment, and integration phases in this article, you will have a basic understanding of how to easily deploy a face blurring model to production.
💡 Explore: If you are interested in traditional AI models, you can also explore the tutorials on how to deploy a YOLOv5 model or how to deploy a DeOldify model.
Step 1: Develop a face blurring model
The main goal of this step is to build and train a model, in our case a face blurring algorithm, and then upload the code to GitHub.
The idea of the algorithm is to anonymize a face by blurring it, making the person impossible to identify. Such an algorithm could be applied to privacy and identity protection in public and private areas, protecting children online, photojournalism and news reporting, and many more. The model takes an image or video with people as input, recognizes the faces, and draws a blurred rectangle over each face so that the person is hard to recognize.
In this tutorial we will use an implementation written in OpenCV by Adrian Rosebrock that uses Gaussian blur. The whole pipeline is pretty straightforward: first we perform face detection, then crop the region containing the face, apply the blur, and finally store the blurred face back in the original image.
Since we don't have to train anything, our model is ready to go. We just need to upload the code to a Git repository before moving on to the next step.
Step 2: Deploy a face blurring model
The model is ready, so in this step we will prepare it and connect the repo to the platform.
AI model deployment is highly dependent on the use case. In this tutorial we will deploy the face blurring model using the Syndicai platform, which allows us to easily deliver our model to production in a secure and scalable way.
💡 Explore: Check out the article about AI model deployment if you want to learn about the different ways of delivering AI models to production.
Prepare a model
Our model is already trained and uploaded to the Git repository. Now we need to define how the model will interact with input and output data when deployed as a web service.
However, we will not create the web service ourselves, because Syndicai will do it for us. The only thing we need to do is create two additional files, syndicai.py and requirements.txt, and place them in the main directory.
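For reference, a minimal requirements.txt for this project might look like the following. The exact package list and versions are an assumption here; match them to whatever your code actually imports:

```text
opencv-python
numpy
Pillow
imageio
```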
The first file, syndicai.py, is a Python script containing the PythonPredictor class. It is responsible for taking the input, passing it through the model, and sending the response.
In this case, both input and output are in the base64 format, and the content of the file looks as follows.
    import os
    import io
    import base64

    import cv2
    import numpy as np
    from PIL import Image
    from imageio import imread

    from pyimagesearch.face_blurring import anonymize_face_pixelate
    from pyimagesearch.face_blurring import anonymize_face_simple


    class PythonPredictor:

        def __init__(self, config):
            # keep the config so predict() can read the method/confidence settings
            self.config = config

            # load our serialized face detector model from disk
            print("[INFO] loading face detector model...")
            prototxtPath = os.path.sep.join([config["face"], "deploy.prototxt"])
            weightsPath = os.path.sep.join(
                [config["face"], "res10_300x300_ssd_iter_140000.caffemodel"])
            self.net = cv2.dnn.readNet(prototxtPath, weightsPath)

        def predict(self, payload):
            # decode the base64 input into an image, clone it, and grab
            # the image spatial dimensions
            img = imread(io.BytesIO(base64.b64decode(payload["base64"])))  # numpy array (height, width, 3)
            image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
            orig = image.copy()
            (h, w) = image.shape[:2]

            # construct a blob from the image
            blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300),
                                         (104.0, 177.0, 123.0))

            # pass the blob through the network and obtain the face detections
            print("[INFO] computing face detections...")
            self.net.setInput(blob)
            detections = self.net.forward()

            # loop over the detections
            for i in range(0, detections.shape[2]):
                # extract the confidence (i.e., probability) associated
                # with the detection
                confidence = detections[0, 0, i, 2]

                # filter out weak detections by ensuring the confidence is
                # greater than the minimum confidence
                if confidence > self.config["confidence"]:
                    # compute the (x, y)-coordinates of the bounding box
                    # for the object
                    box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                    (startX, startY, endX, endY) = box.astype("int")

                    # extract the face ROI
                    face = image[startY:endY, startX:endX]

                    # check to see if we are applying the "simple" face
                    # blurring method
                    if self.config["method"] == "simple":
                        face = anonymize_face_simple(face, factor=3.0)

                    # otherwise, we must be applying the "pixelated" face
                    # anonymization method
                    else:
                        face = anonymize_face_pixelate(
                            face, blocks=self.config["blocks"])

                    # store the blurred face in the output image
                    image[startY:endY, startX:endX] = face

            # encode the anonymized image back to base64 for the response
            _, buffer = cv2.imencode(".jpg", image)
            return base64.b64encode(buffer).decode("utf-8")
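For completeness, the "pixelated" variant mentioned above could be sketched roughly like this. It is a simplified illustration of the idea, not the exact anonymize_face_pixelate implementation: the ROI is divided into a grid and each cell is filled with its mean color.

```python
import numpy as np

def pixelate_face(face, blocks=20):
    """Pixelate a face ROI by averaging each cell of a blocks x blocks grid."""
    (h, w) = face.shape[:2]

    # cell boundaries along each axis
    xs = np.linspace(0, w, blocks + 1, dtype=int)
    ys = np.linspace(0, h, blocks + 1, dtype=int)

    out = face.copy()
    for i in range(blocks):
        for j in range(blocks):
            y0, y1 = ys[i], ys[i + 1]
            x0, x1 = xs[j], xs[j + 1]
            if y1 > y0 and x1 > x0:
                # replace the cell with its mean color
                out[y0:y1, x0:x1] = face[y0:y1, x0:x1].mean(
                    axis=(0, 1)).astype(face.dtype)
    return out
```

Pixelation tends to look more "censored" than a Gaussian blur, which is why many anonymization pipelines offer both modes.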
Remember that your model has to be Active in order to work! As output, you should get base64 with the blurred images.
If you get a correct response, you are ready to move on to model integration. Go to the model's Integrate page and use the code snippet to implement the REST API in your website, mobile app, or other platform.
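As a rough illustration of what such an integration could look like in Python: the helpers below build the base64 payload the model expects and decode the base64 response back into an image file. The endpoint URL is a placeholder; your model's actual URL and snippet come from the Integrate page.

```python
import base64

# placeholder only -- copy the real endpoint from your model's Integrate page
MODEL_URL = "https://<your-model-endpoint>/predict"

def encode_image(path):
    """Read an image file and return the JSON payload the model expects."""
    with open(path, "rb") as f:
        return {"base64": base64.b64encode(f.read()).decode("utf-8")}

def save_response(b64_string, out_path):
    """Decode the base64 response and write the blurred image to disk."""
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(b64_string))

# Example call (requires the `requests` package and a deployed model):
# import requests
# response = requests.post(MODEL_URL, json=encode_image("people.jpg"))
# save_response(response.json(), "people_blurred.jpg")
```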
For the purpose of this tutorial we have already created a template with a sample React app that lets you easily interact with your deployed model. You can try it on the Syndicai Showcase page and get a feel for the experience.
In addition, you can fork the repository with the showcase page, since the whole code is open source.
In summary, in this tutorial you had a chance to deploy a face blurring algorithm on the Syndicai platform without any infrastructure setup or web service configuration.
The main goal was to show you a faster and simpler way of delivering AI models to production in a scalable way.
* * *
If you found this material helpful, have comments, or want to share ideas for the next one, don't hesitate to drop us a line via Slack or mail. We would love to hear your feedback!