Deploying a FastAPI Service via Python SDK
What you'll learn
- Creating a FastAPI service to serve an ML model
- Deploying the service via servicefoundry

This is a guide to deploying a scikit-learn model via FastAPI and servicefoundry. After you complete the guide, you will have a successfully deployed FastAPI Service.
Project structure
To complete this guide, you are going to create the following files:
- app.py: contains our inference and FastAPI code
- iris_classifier.joblib: the model file
- deploy.py: contains our deployment code
- requirements.txt: contains our dependencies
Your final file structure is going to look like this:
```
.
├── app.py
├── iris_classifier.joblib
├── deploy.py
└── requirements.txt
```
As you can see, all of these files are created in the same folder/directory.
Model details
For this guide, we have already trained a model. The model has been trained on the Iris dataset and stored as a joblib file on Google Drive.
- Attributes: sepal length (cm), sepal width (cm), petal length (cm), petal width (cm)
- Predicted attribute: class of iris plant (one of Iris Setosa, Iris Versicolour, Iris Virginica)
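For reference, the snippet below is a minimal sketch of how such a model could have been trained; the exact training code is not part of this guide, and the column renaming is an assumption chosen to match the payload our inference code sends later:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Load the Iris dataset as a DataFrame and rename the columns to the
# snake_case names the inference service will use (an assumption)
iris = load_iris(as_frame=True)
X = iris.data.rename(
    columns={
        "sepal length (cm)": "sepal_length",
        "sepal width (cm)": "sepal_width",
        "petal length (cm)": "petal_length",
        "petal width (cm)": "petal_width",
    }
)
y = iris.target  # 0 = Iris Setosa, 1 = Iris Versicolour, 2 = Iris Virginica

model = LogisticRegression(max_iter=200).fit(X, y)
joblib.dump(model, "iris_classifier.joblib")  # serialize for serving
```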
Step 1: Fetching the model
Download the model from the following link, then move it into your development directory. Afterwards, your directory should look like this:
```
.
└── iris_classifier.joblib
```
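Before writing any service code, you can optionally sanity-check the downloaded file by loading it and running a prediction locally (a quick check, not one of the guide's required files):

```python
import joblib
import pandas as pd

# Load the downloaded model and run a single prediction
model = joblib.load("iris_classifier.joblib")
sample = pd.DataFrame(
    [
        {
            "sepal_length": 7.0,
            "sepal_width": 3.2,
            "petal_length": 4.7,
            "petal_width": 1.4,
        }
    ]
)
print(model.predict(sample))  # prints the predicted class index
```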
Step 2: Implement the inference service code
The next step is to wrap the model in a web API. For this we are going to use FastAPI, a modern, intuitive web framework for building web APIs in Python.
Create the app.py and requirements.txt files in the same directory where the model is stored.
```
.
├── iris_classifier.joblib
├── app.py
└── requirements.txt
```
app.py
```python
import os

import joblib
import pandas as pd
from fastapi import FastAPI

# Load the serialized model once at startup
model = joblib.load("iris_classifier.joblib")

# TFY_SERVICE_ROOT_PATH lets the app work correctly when it is
# served under a sub-path of the host
app = FastAPI(root_path=os.getenv("TFY_SERVICE_ROOT_PATH"))


@app.post("/predict")
def predict(
    sepal_length: float, sepal_width: float, petal_length: float, petal_width: float
):
    # Scalar parameters like these are read from the query string by FastAPI
    data = dict(
        sepal_length=sepal_length,
        sepal_width=sepal_width,
        petal_length=petal_length,
        petal_width=petal_width,
    )
    prediction = int(model.predict(pd.DataFrame([data]))[0])
    return {"prediction": prediction}
```
requirements.txt
```
fastapi
joblib
numpy
pandas
scikit-learn
uvicorn
```
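You can try the service locally before deploying it. Assuming the dependencies are installed, start the server from the project directory:

```
pip install -r requirements.txt
uvicorn app:app --port 8000
```

Then open http://localhost:8000/docs in a browser to exercise the /predict endpoint via FastAPI's built-in interactive documentation.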
Step 3: Deploying the inference API
You can deploy services on Truefoundry programmatically via our Python SDK.
Create a deploy.py file, after which our file structure will look like this:
File Structure
```
.
├── iris_classifier.joblib
├── app.py
├── deploy.py
└── requirements.txt
```
deploy.py
```python
import argparse
import logging

from servicefoundry import Build, PythonBuild, Service, Resources, Port

logging.basicConfig(level=logging.INFO)

# The workspace FQN and host are passed in from the command line
parser = argparse.ArgumentParser()
parser.add_argument("--workspace_fqn", required=True, type=str)
parser.add_argument("--host", required=True, type=str)
args = parser.parse_args()

service = Service(
    name="fastapi",
    # Build the image from source: install requirements, then start uvicorn
    image=Build(
        build_spec=PythonBuild(
            command="uvicorn app:app --port 8000 --host 0.0.0.0",
            requirements_path="requirements.txt",
        )
    ),
    # Expose port 8000 on the given host
    ports=[
        Port(
            port=8000,
            host=args.host,
        )
    ],
    # Resource requests and limits for the service
    resources=Resources(
        cpu_request=0.5,
        cpu_limit=1,
        memory_request=1000,
        memory_limit=1500,
    ),
    env={
        "UVICORN_WEB_CONCURRENCY": "1",
        "ENVIRONMENT": "dev",
    },
)
service.deploy(workspace_fqn=args.workspace_fqn)
```
Picking a value for host

Providing a host value depends on the base domain URLs configured in the cluster settings; you can learn how to find the base domain URLs available to you here.

For example, if your base domain URL is *.truefoundry.your-org.com, then a valid value can be fastapi-your-workspace-8000.truefoundry.your-org.com. Alternatively, if you have a non-wildcard base domain URL, e.g. truefoundry.your-org.com, then a valid value can be truefoundry.your-org.com/fastapi-your-workspace-8000.
To deploy using the Python SDK, run:

```
python deploy.py --workspace_fqn <YOUR WORKSPACE FQN HERE> --host <YOUR HOST>
```
Run the above command from the same directory containing the app.py and requirements.txt files.
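For instance, with a wildcard base domain as in the example above, an invocation might look like this (the workspace FQN and host values below are placeholders, not real ones):

```
python deploy.py \
  --workspace_fqn "your-cluster:your-workspace" \
  --host "fastapi-your-workspace-8000.truefoundry.your-org.com"
```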
.tfyignore files
If there are any files you don't want copied to the workspace, such as data files or other redundant files, you can use a .tfyignore file.
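As an illustration, assuming .gitignore-style patterns (see the .tfyignore documentation for the exact syntax), a file like this at the project root would keep local data and caches out of the build:

```
# .tfyignore - one pattern per line
*.csv
__pycache__/
.venv/
notebooks/
```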
After you run the command given above, your deployment process will start. Wait for it to show a status of SUCCESS.
Congratulations! You have successfully deployed your FastAPI Service.
You will get a link at the end of the output. The link will take you to your application's dashboard.
Interacting with your Service
Once your service has been deployed successfully, you can begin making requests to it.
- Click on your specific service within the dashboard. This will open the dedicated dashboard for your service.
- In the dashboard, you'll find the endpoint URL for your service. This endpoint is where your deployed service can be accessed, allowing you to interact with your deployed machine learning model. Copy this endpoint URL; you'll need it to make requests.

- You can now use this endpoint URL to make predictions. For example, if you want to predict the class for the following data:
| sepal_length | sepal_width | petal_length | petal_width |
| --- | --- | --- | --- |
| 7.0 | 3.2 | 4.7 | 1.4 |
- Here's a Python code snippet to send a request with the above data using the endpoint URL:
```python
from urllib.parse import urljoin

import requests

# Replace this with the value of your endpoint URL
ENDPOINT_URL = "<YOUR_ENDPOINT_URL>"  # e.g., https://your-service-endpoint.com/

# The /predict endpoint reads its scalar inputs from the query string,
# so the values are sent as query parameters rather than a JSON body
response = requests.post(
    urljoin(ENDPOINT_URL, "predict"),
    params={
        "sepal_length": 7.0,
        "sepal_width": 3.2,
        "petal_length": 4.7,
        "petal_width": 1.4,
    },
)
result = response.json()
print("Predicted Classes:", result["prediction"])
```
Running this code will provide you with the predicted classes.
Predicted Classes: 0
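Equivalently, since the endpoint reads query parameters, you can exercise it from the command line; the endpoint URL below is a placeholder:

```
curl -X POST "https://your-service-endpoint.com/predict?sepal_length=7.0&sepal_width=3.2&petal_length=4.7&petal_width=1.4"
```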