AI is the driving force behind many of the applications we use today. However, before a model can power an application, it has to go through an Exploratory Data Analysis (EDA) phase and training, and once the model works, it has to be deployed and integrated with the app. That last step (deployment and integration) is not an easy task if you want to make it production-ready.

What if there were an easy and fast way to go from a trained model to an actual use case with the click of a button? What if that process were production-ready?

In this article, we will show you how to deploy a TensorFlow model in three quick steps.

Step 1: Develop a TensorFlow model

For the purpose of this tutorial, we will deploy the Style Transfer (also called Neural Style Transfer) implementation by Golnaz Ghiasi, Honglak Lee, and colleagues.

Neural style transfer is an optimization technique that takes two images as input (a content image and a style reference image) and combines them so that the output image looks like it's "painted" in the style of the reference image.

The approach presented in the paper combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks, so the algorithm is much faster and can run in real time.

The input of the model is a content image and a style reference; the output is a combination of both images.

We have already prepared the Style Transfer model in our GitHub repository. You don't have to do anything with the repository for now.

Step 2: Deploy a TensorFlow model

In the traditional Machine Learning deployment workflow, you need to go through several steps: create a web service, build a Docker image, and serve it on a Kubernetes cluster.
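To make the manual path concrete, here is a minimal sketch of that first step (wrapping a model behind a web service by hand) using only the Python standard library; `fake_predict` is a hypothetical stand-in for real model inference, not code from the repository:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def fake_predict(payload):
    # Hypothetical placeholder for real model inference.
    return {"stylized": payload.get("url", "") + "?styled=true"}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run "inference" on it.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(fake_predict(payload)).encode("utf-8")
        # Return the prediction as a JSON response.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet


# To serve: HTTPServer(("", 8080), PredictHandler).serve_forever()
```

And this is only the web service: you would still have to write a Dockerfile, push the image, and configure Kubernetes manifests around it.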

Syndicai takes care of all those steps. You just need to connect the Git repository containing your model, and a REST API will be created automatically with one click. Moreover, Syndicai takes care of scaling the resources. The resulting API offers great flexibility because you can connect it to any device.

AI model deployment the traditional way (top) vs the Syndicai way (bottom)

You can also try deploying a Keras model.

Prepare a repository

Apart from putting your model in the GitHub repository, you have to upload two additional files there:

requirements.txt – a file listing all the libraries and frameworks needed to recreate the model's environment (e.g. tensorflow-hub==0.10.0)

a main Python file – the file with the PythonPredictor class responsible for model prediction:
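For the Style Transfer model above, the requirements file could look roughly like this (apart from tensorflow-hub==0.10.0, which is taken from the text, the entries and versions are illustrative):

```
tensorflow
tensorflow-hub==0.10.0
numpy
Pillow
```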

import os
import io
import base64
import functools

from PIL import Image
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

from helpers import *


class PythonPredictor:

    def __init__(self, config):
        # Define style image
        self.style_image_url = ''
        # Import TF-Hub module
        hub_handle = ''
        self.hub_module = hub.load(hub_handle)

    def predict(self, payload):
        # Define content image
        content_image_url = payload["url"]

        # Load images
        content_img_size = (500, 500)
        style_img_size = (300, 300)

        style_image = load_image(self.style_image_url, style_img_size)
        content_image = load_image(content_image_url, content_img_size)
        style_image = tf.nn.avg_pool(
            style_image, ksize=[3, 3], strides=[1, 1], padding='SAME')

        # Stylize content image with given style image.
        outputs = self.hub_module(tf.constant(content_image),
                                  tf.constant(style_image))
        stylized_image = outputs[0]

        # Get PIL image and convert to base64
        img = Image.fromarray(np.uint8(stylized_image.numpy()[0] * 255))
        im_file = io.BytesIO()
        img.save(im_file, format="PNG")
        im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")

        return im_bytes
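On the consumer side, the base64 string returned by predict can be turned back into an image file with a few lines of standard-library Python (decode_prediction is an illustrative helper, not part of the repository):

```python
import base64


def decode_prediction(b64_string, out_path="stylized.png"):
    """Decode the base64 payload returned by predict() and save it as a PNG."""
    img_bytes = base64.b64decode(b64_string)
    with open(out_path, "wb") as f:
        f.write(img_bytes)
    return img_bytes
```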

These two files are necessary for Syndicai to be able to recreate the environment and know which function to use for prediction.

Connect the repository to Syndicai

When the GitHub repository with requirements.txt and the main file is ready, we can connect it to the Syndicai platform. To do that, go to the platform, log in, and click New Model on the Overview page. You will be redirected to a quick form. Follow the steps, and as soon as you finish, the infrastructure will start building. You will need to wait a couple of minutes for the model to become Active.

The deployed Style Transfer model should have the build status "success" and the badge "Active" next to its name

For more information about the model preparation or deployment process go to Syndicai Docs.

Step 3: Integrate a TensorFlow model

You've done it!

Your model is deployed, and your REST API is ready. To perform a quick test, just copy and paste a sample input script in the model's Run section.

"url": ""

Remember that the model needs to be Active for this to work!

If everything works fine, you can now connect the API to any device or service. As an example, you can go to the Showcase page and explore the sample implementation of the model.

You have now seen how to deploy a TensorFlow model in minutes. Syndicai allows you to deploy and integrate AI models at scale in a simple and fast way. You don't need to set up the infrastructure or take care of scalability; the Syndicai Platform will do it for you.

* * *

If you found this material helpful, have some comments, or want to share some ideas for the next one, don't hesitate to drop us a line via Slack or email. We would love to hear your feedback!
