Deploy a model via Gradio


What you'll learn

  • Creating a Gradio application to serve your model
  • Deploying the service via `servicefoundry`

This is a guide to deploying a scikit-learn model via Gradio and servicefoundry.

After you complete the guide, you will have a successfully deployed model. Your deployed Gradio application will look like this:

Project structure

To complete this guide, you are going to create the following files:

  • app.py: contains our inference and Gradio code
  • iris_classifier.joblib: the model file
  • deploy.py / deploy.yaml: contains our deployment code or deployment configuration, depending on whether you use our Python SDK or a YAML file
  • requirements.txt: contains our dependencies

Your final file structure is going to look like this:

.
├── app.py
├── iris_classifier.joblib
├── deploy.py / deploy.yaml
└── requirements.txt

As you can see, all of these files are created in the same folder/directory.

Model details

For this guide, we have already trained a model on the Iris dataset and stored it as a joblib file on Google Drive.

Attributes:
sepal length in cm, sepal width in cm, petal length in cm, petal width in cm

Predicted attribute:
class of iris plant (one of Iris Setosa, Iris Versicolour, Iris Virginica)

Step 1: Fetching the model

We will use gdown to fetch the model from Google Drive. You can install it with the `pip install gdown` command.

After this, enter the following command in your terminal (the --fuzzy flag lets gdown parse a Drive share link):

gdown --fuzzy https://drive.google.com/file/d/1-9nwjs6F7cp_AhAlBAWZHMXG8yb2q_LR/view -O iris_classifier.joblib

Afterwards, your directory should look like this:

.
└── iris_classifier.joblib
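
Optionally, you can sanity-check the download by loading the model and running a prediction. A minimal check, assuming the standard scikit-learn Iris class encoding:

import joblib

# Load the model fetched above and classify one sample.
# Features: sepal length, sepal width, petal length, petal width (in cm).
model = joblib.load("iris_classifier.joblib")
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))  # expected: the Setosa class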

Step 2: Implementing the inference service code

The next step is to create a web interface and deploy the model.
For this we are going to use Gradio, a Python library that lets us quickly create a web interface on top of our model inference functions.

Create the app.py and requirements.txt files in the same directory where the model is stored.

.
├── iris_classifier.joblib
├── app.py
└── requirements.txt

app.py

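A minimal sketch of what app.py can look like: it loads the joblib model, wraps prediction in a function, and serves it through a gr.Interface. The class-name mapping and the port 8080 are assumptions; adjust both to match your model and deployment configuration.

import gradio as gr
import joblib

# Load the trained scikit-learn model fetched in Step 1.
model = joblib.load("iris_classifier.joblib")

# Assumed mapping from integer predictions to class names, following
# the standard scikit-learn Iris dataset encoding.
CLASS_NAMES = {0: "Iris Setosa", 1: "Iris Versicolour", 2: "Iris Virginica"}

def predict(sepal_length, sepal_width, petal_length, petal_width):
    # The model expects a 2D array of shape (n_samples, 4).
    prediction = model.predict(
        [[sepal_length, sepal_width, petal_length, petal_width]]
    )[0]
    return CLASS_NAMES[int(prediction)]

# One numeric input per feature; the predicted class is returned as text.
app = gr.Interface(
    fn=predict,
    inputs=[
        gr.Number(label="Sepal length (cm)"),
        gr.Number(label="Sepal width (cm)"),
        gr.Number(label="Petal length (cm)"),
        gr.Number(label="Petal width (cm)"),
    ],
    outputs="text",
)

# Bind to 0.0.0.0 so the server is reachable inside the container; the
# port must match the one exposed in your deployment configuration.
app.launch(server_name="0.0.0.0", server_port=8080)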

requirements.txt

gradio==3.2
scikit-learn==1.0.2
joblib

Step 3: Deploying the inference API

You can deploy services on TrueFoundry programmatically either using our Python SDK, or via a YAML file.

You can either create a deploy.py file that uses our Python SDK, or create a deploy.yaml configuration file and then use the `servicefoundry deploy` command.

Via Python SDK

File Structure

.
├── iris_classifier.joblib
├── app.py
├── deploy.py
└── requirements.txt

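A minimal sketch of deploy.py, based on the servicefoundry Python SDK. The exact class and field names can vary between SDK versions, and the service name below is a placeholder:

import logging

from servicefoundry import Build, PythonBuild, Service

logging.basicConfig(level=logging.INFO)

# Build the image from the local source using the Python buildpack;
# the command starts the Gradio server defined in app.py.
service = Service(
    name="gradio-iris",  # placeholder name; choose your own
    image=Build(
        build_spec=PythonBuild(
            command="python app.py",
            requirements_path="requirements.txt",
        )
    ),
    ports=[{"port": 8080}],  # must match the port app.py listens on
)

# Deploy to your workspace; replace with your actual workspace FQN.
service.deploy(workspace_fqn="YOUR_WORKSPACE_FQN")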

To deploy using the Python SDK, run:

python deploy.py

Via YAML file

File Structure

.
├── iris_classifier.joblib
├── app.py
├── deploy.yaml
└── requirements.txt

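A minimal sketch of deploy.yaml expressing the same configuration as the Python SDK example above. Field names follow the servicefoundry YAML spec at the time of writing and may differ across versions:

name: gradio-iris  # placeholder name; choose your own
type: service
image:
  type: build
  build_source:
    type: local
  build_spec:
    type: tfy-python-buildpack
    command: python app.py
    requirements_path: requirements.txt
ports:
  - port: 8080  # must match the port app.py listens on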

With YAML you can deploy the inference API service using the command below:

servicefoundry deploy --workspace-fqn YOUR_WORKSPACE_FQN --file deploy.yaml

Run the above command from the same directory containing the app.py and requirements.txt files.


.tfyignore files

If there are any files you don't want copied to the workspace, such as data files or other redundant artifacts, you can list them in a .tfyignore file.
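
An illustrative .tfyignore, assuming .gitignore-style patterns (the entries below are examples, not requirements):

# Exclude local data and notebooks from the build context
data/
*.csv
*.ipynb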

End result

You can go to your deployments dashboard here and you will find a new deployment created with the name you provided.

Afterwards, click on the service you just deployed. In the top-right corner you will see the endpoint of the deployed application.

Click on that endpoint link, and you will be redirected to your deployed application.

More Details

  • Learn more about the build process here
  • Learn more about how to inject environment variables to your deployments here
  • Learn more about how to use secrets here

Examples

See the following projects, which use TrueFoundry for deployment.