Deploy a Dockerized Service

👍

What you'll learn

  • Creating a Gradio application to serve your model
  • Dockerizing the Gradio application
  • Deploying our dockerized application via servicefoundry

This is a guide to deploying a scikit-learn model via a Dockerfile and servicefoundry.
To do this, we are going to create a Gradio application and dockerize it.

After you complete the guide, you will have a successfully deployed model served through a Gradio web interface.

Project structure

To complete this guide, you are going to create the following files:

  • app.py: contains our inference and Gradio app code
  • Dockerfile: contains our docker image build instructions
  • iris_classifier.joblib: the pre-trained model file
  • deploy.py / deploy.yaml: contains our deployment code (if you use the Python SDK) or deployment configuration (if you create a YAML file)
  • requirements.txt: contains the dependencies.

Your final file structure is going to look like this:

.
├── app.py
├── iris_classifier.joblib
├── Dockerfile
├── deploy.py / deploy.yaml
└── requirements.txt

All of these files are created in the same directory.

Model details

For this guide, we have already trained a model on the Iris dataset and stored it as a joblib file on Google Drive.

Attributes:
sepal length in cm, sepal width in cm, petal length in cm, petal width in cm

Predicted attribute:
class of iris plant (one of Iris Setosa, Iris Versicolour, Iris Virginica)

Step 1: Fetching the model

We will use gdown to fetch the model from Google Drive. You can install it with pip install gdown.

First, cd into your project directory, then run the following command in your terminal:

gdown --fuzzy https://drive.google.com/file/d/1-9nwjs6F7cp_AhAlBAWZHMXG8yb2q_LR/view

(The --fuzzy flag tells gdown to extract the file ID from a full Drive link.)

Afterwards, your directory should look like this:

.
└── iris_classifier.joblib
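
Optionally, you can sanity-check the downloaded model locally. This is a minimal sketch; it assumes the model follows the standard scikit-learn estimator API and was trained on the four features described above.

import joblib

# Load the downloaded model (assumed to be a standard scikit-learn estimator)
model = joblib.load("iris_classifier.joblib")

# One sample: [sepal length, sepal width, petal length, petal width] in cm
sample = [[5.1, 3.5, 1.4, 0.2]]
print(model.predict(sample))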

Step 2: Implementing the inference service code

The next step is to create a web interface and deploy the model.
For this, we are going to use Gradio, a Python library that lets us quickly build web interfaces on top of our model inference functions.

Create the app.py and requirements.txt files in the same directory where the model is stored.

.
├── iris_classifier.joblib
├── app.py
└── requirements.txt

app.py

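Below is a minimal sketch of what app.py can look like. It assumes the model returns a class index that maps onto the three Iris classes (adjust if your model returns string labels directly), and that the app listens on port 8080, which must match the port exposed in your Dockerfile and deployment configuration.

import gradio as gr
import joblib

# Load the pre-trained model downloaded in Step 1
model = joblib.load("iris_classifier.joblib")

# Assumed mapping from predicted class index to class name
CLASS_NAMES = ["Iris Setosa", "Iris Versicolour", "Iris Virginica"]

def predict(sepal_length, sepal_width, petal_length, petal_width):
    # The model expects a 2D array: one row with the four features
    prediction = model.predict(
        [[sepal_length, sepal_width, petal_length, petal_width]]
    )[0]
    return CLASS_NAMES[int(prediction)]

app = gr.Interface(
    fn=predict,
    inputs=[
        gr.Number(label="sepal length (cm)"),
        gr.Number(label="sepal width (cm)"),
        gr.Number(label="petal length (cm)"),
        gr.Number(label="petal width (cm)"),
    ],
    outputs=gr.Textbox(label="class of iris plant"),
)

# Bind to 0.0.0.0 so the server is reachable from outside the container
app.launch(server_name="0.0.0.0", server_port=8080)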

requirements.txt

gradio==3.2
scikit-learn==1.1.2
joblib

Step 3: Dockerizing the Gradio application

Now we will create the Dockerfile for the Gradio application.

.
├── iris_classifier.joblib
├── app.py
├── Dockerfile
└── requirements.txt

Dockerfile

The Dockerfile contains instructions to build the image.
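
A minimal sketch of the Dockerfile is shown below. The base image version and port are assumptions; keep the port in sync with app.py and your deployment configuration.

FROM python:3.9
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install -U pip && pip install -r requirements.txt

# Copy the application code and the model file
COPY . .

EXPOSE 8080
ENTRYPOINT ["python", "app.py"]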

Step 4: Deploying the inference API

You can deploy services on TrueFoundry programmatically using our Python SDK, or via a YAML file.

You can therefore choose between creating a deploy.py file that uses our Python SDK, or creating a deploy.yaml configuration file and deploying it with the servicefoundry deploy command.

Via Python SDK

File Structure

.
├── iris_classifier.joblib
├── app.py
├── deploy.py
├── Dockerfile
└── requirements.txt

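Below is a sketch of what deploy.py can look like using the servicefoundry Python SDK. Treat the class and parameter names (Service, Build, DockerFileBuild, Port) as version-dependent assumptions and check them against the SDK you have installed; the service name and port are placeholders.

import logging

from servicefoundry import Build, DockerFileBuild, Port, Service

logging.basicConfig(level=logging.INFO)

service = Service(
    name="iris-classifier-svc",
    # Build the image from the Dockerfile in the current directory
    image=Build(build_spec=DockerFileBuild()),
    # Must match the port the Gradio app listens on
    ports=[Port(port=8080)],
)

# Replace with the FQN of the workspace you want to deploy to
service.deploy(workspace_fqn="YOUR_WORKSPACE_FQN")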

To deploy using the Python SDK, run:

python deploy.py

Via YAML file

File Structure

.
├── iris_classifier.joblib
├── app.py
├── deploy.yaml
├── Dockerfile
└── requirements.txt

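Below is a sketch of what deploy.yaml can look like. The exact schema depends on your servicefoundry version, so treat the field names as assumptions and verify them against the service spec; the service name and port are placeholders.

name: iris-classifier-svc
type: service
image:
  type: build
  build_source:
    type: local
  build_spec:
    type: dockerfile
ports:
  # Must match the port the Gradio app listens on
  - port: 8080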

With YAML you can deploy the inference API service using the command below:

servicefoundry deploy --workspace-fqn YOUR_WORKSPACE_FQN --file deploy.yaml

Run the above command from the directory containing your app.py, Dockerfile, and requirements.txt files.

📘

.tfyignore files

If there are files you don't want copied to the workspace, such as data files or other redundant artifacts, you can list them in a .tfyignore file.
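
For example, assuming .tfyignore follows .gitignore-style pattern syntax, a minimal file could look like this (the paths are hypothetical):

# Exclude local data and caches from the upload
data/
*.csv
__pycache__/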

End result

You can go to your deployments dashboard, where you will find a new deployment with the name you provided.

Afterwards, click on the service you just deployed. In the top-right corner you will see the endpoint of the deployed application.

Click on the endpoint link, and you will be redirected to your deployed application.

More Details

  • Learn more about the build process here
  • Learn more about how to inject environment variables to your deployments here
  • Learn more about how to use secrets here

Examples

See the following projects that use TrueFoundry for deployment.