AI is the major driving force behind many of the applications we see today. Before a model reaches an application, however, it has to go through an Exploratory Data Analysis (EDA) phase and training, and once it works, it must be deployed and integrated with the app. That last step (deployment & integration) is far from easy if you want it to be production-ready.
What if there were an easy and fast way to go from a trained model to an actual use case with the click of a button? What if that process were production-ready?
In this article, we will show you how to deploy a TensorFlow model in three quick steps.
Neural style transfer is an optimization technique that takes two input images (a content image and a style reference image) and combines them so that the output image looks as if it were "painted" in the style of the reference image.
The newest approach, presented in the paper, combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks, making the algorithm fast enough to run in real time.
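To make the input format concrete, here is a minimal sketch of how the two images are typically prepared before being passed to a fast style transfer network: converted to float32 in the [0, 1] range with a leading batch axis. The arrays below are synthetic stand-ins for real photos, and the commented-out TF-Hub call is only one possible way to run the stylization itself.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale a uint8 HxWx3 image to float32 in [0, 1] and add a batch axis,
    the input format expected by typical TF-Hub style transfer modules."""
    img = image.astype(np.float32) / 255.0
    return img[np.newaxis, ...]  # shape: (1, H, W, 3)

# Synthetic stand-ins for a real content photo and style reference (hypothetical data).
content_image = preprocess(np.random.randint(0, 256, (384, 384, 3), dtype=np.uint8))
style_image = preprocess(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))

# With TensorFlow and tensorflow_hub installed, the stylization itself
# is a single call to the fast style transfer module, e.g.:
# hub_module = tensorflow_hub.load(
#     "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
# stylized = hub_module(tf.constant(content_image), tf.constant(style_image))[0]
```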
We have already prepared the Style Transfer model on our GitHub repository. You don’t have to do anything with the repository for now.
Step 2: Deploy a TensorFlow model
In the traditional workflow of Machine Learning model deployment, you need to go through several steps (create a web service, build a Docker image, and serve it on a Kubernetes cluster).
Syndicai takes care of all those steps. You just need to connect the Git repository containing your model, and the REST API is created automatically with one click. Moreover, Syndicai takes care of scaling the underlying resources. The resulting API offers great flexibility because you can connect it to any device.
```python
# Stylize the content image with the given style image.
outputs = self.hub_module(tf.constant(content_image), tf.constant(style_image))
stylized_image = outputs[0]
```
```python
# Get a PIL image and convert it to base64.
img = Image.fromarray(np.uint8(stylized_image[0].numpy() * 255))
im_file = io.BytesIO()
img.save(im_file, format="PNG")
im_bytes = base64.b64encode(im_file.getvalue()).decode("utf-8")
```
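On the client side, that base64 string can be decoded back into a PNG file with the standard library alone. The string below is a hypothetical stand-in for a real API response (a tiny placeholder PNG):

```python
import base64

# Hypothetical API response: a base64-encoded PNG (here, a 1x1 placeholder image).
response_b64 = (
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)

png_bytes = base64.b64decode(response_b64)

# Every PNG file begins with the 8-byte signature \x89PNG\r\n\x1a\n.
assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n"

# with open("stylized.png", "wb") as f:
#     f.write(png_bytes)
```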
These two files are necessary for Syndicai to recreate the environment and to know which function to use for prediction.
Connect the repository to Syndicai
When we have the GitHub repository with requirements.txt and syndicai.py ready, we can proceed to connect it to the Syndicai platform. In order to do that, go to https://syndicai.co/, log in, and click New Model on the Overview page. You will be redirected to a quick form. Follow the steps, and as soon as you finish, the infrastructure will start building. You will need to wait a couple of minutes for the model to become Active.
For more information about the model preparation or deployment process go to Syndicai Docs.
Step 3: Integrate a TensorFlow model
You've done it!
Your model is deployed, and your REST API is ready. To run a quick test, just copy & paste a sample input script in the model Run section.
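If you prefer to test from code instead of the UI, a request like the one below can be built with the standard library. The endpoint URL and the JSON field names here are assumptions for illustration; the real schema is whatever your syndicai.py expects:

```python
import base64
import json
import urllib.request

def build_request(url: str, content_png: bytes, style_png: bytes) -> urllib.request.Request:
    """Build a JSON POST request carrying both images as base64 strings.
    Field names are hypothetical -- match them to your syndicai.py."""
    payload = {
        "content_image": base64.b64encode(content_png).decode("utf-8"),
        "style_image": base64.b64encode(style_png).decode("utf-8"),
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "https://example.com/predict" is a placeholder for your model's real endpoint.
req = build_request("https://example.com/predict", b"<content png bytes>", b"<style png bytes>")
# with urllib.request.urlopen(req) as resp:  # requires a deployed, Active model
#     stylized_b64 = json.loads(resp.read())["stylized_image"]
```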
Remember that the model needs to be Active for this to work!
If everything works fine, you can now connect the API with any device or service. As an example, you can go to the Showcase page and explore the sample implementation of the model.
You had a chance to see how to deploy a TensorFlow model in minutes. Syndicai allows you to deploy and integrate AI models at scale in a simple and fast way. You don't need to set up the infrastructure or take care of scalability; the Syndicai Platform will do it for you.
* * *
If you found this material helpful, have some comments, or want to share ideas for the next one, don't hesitate to drop us a line via Slack or email. We would love to hear your feedback!