TrueFoundry can autogenerate the inference code for SkLearn and XGBoost models. If you have already written the inference code for these models, you can deploy your FastAPI/Flask code as-is to TrueFoundry. This guide covers how to log the models, generate the inference code, and deploy the code to get a model endpoint.

TrueFoundry can generate inference code for two model servers:

  1. FastAPI: This is simple to understand and use. It works well when your traffic is not very high (less than 20 requests/second).
  2. Triton: This is a more performant model server and is suitable for high-traffic use cases. It comes with batching support, which helps provide higher throughput.

It also generates a requirements.txt, a Dockerfile, and a README file to help you get started with the deployment.

This approach gives you the flexibility to change the inference code to add custom business logic and makes it easier to test the code locally. You can also push the code to your git repository.

Log the model in the model registry

You will need to set up the TrueFoundry CLI before executing the following steps.
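
If you have not set it up yet, a typical CLI setup looks like the following. This is a sketch: the exact host URL depends on your TrueFoundry installation, so refer to the CLI setup documentation for the values to use.

pip install -U truefoundry
tfy login --host "https://<your-truefoundry-host>"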

from truefoundry.ml import get_client, SklearnFramework, sklearn_infer_schema
import joblib
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Define training data
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])

# Create and train the model
clf = make_pipeline(StandardScaler(), SVC(gamma="auto"))
model = clf.fit(X, y)

# Save the model
joblib.dump(clf, "sklearn-pipeline.joblib")

# Initialize the TrueFoundry client
client = get_client()

# Infer model schema
model_schema = sklearn_infer_schema(
    model_input=X, model=model, infer_method_name="predict"
)

# Log the model
model_version = client.log_model(
    ml_repo="my-classification-project",
    name="my-sklearn-model",
    model_file_or_folder="sklearn-pipeline.joblib",
    # To make the model deployable and generate the inference script,
    # the model file and schema (with the method name) are required.
    framework=SklearnFramework(
        model_filepath="sklearn-pipeline.joblib",
        model_schema=model_schema,
    ),
    # Auto-captures the current environment details (e.g., python_version, pip_packages) 
    # based on the framework. If you want to override, you can add this block:
    # environment=ModelVersionEnvironment(
    #     python_version="3.10",
    #     pip_packages=[
    #         "joblib==1.4.2",
    #         "numpy==1.26.4",
    #         "pandas==2.2.3",
    #         "scikit-learn==1.6.1",
    #     ],
    # ),
)

# Output the model's Fully Qualified Name (FQN)
print(f"Model version logged successfully: {model_version.fqn}")

Generate the inference code

  • Locate the model you want to deploy in the model registry and click the Deploy button.

Select a workspace for deployment, and copy the command.

  • Execute the command in your terminal to generate the model deployment package.
 tfy deploy-init model --name 'my-sklearn-model-1' --model-version-fqn 'model:truefoundry/my-classification-project/my-sklearn-model-1:1' --workspace-fqn 'tfy-usea1-devtest:deb-ws' --model-server 'fastapi'
...
Generating application code for 'model:truefoundry/my-classification-project/my-sklearn-model-1:1'

Model Server code initialized successfully!

Code Location: /work/model-deployment/my-sklearn-model-1

Next Steps:
- Navigate to the model server directory:
cd /work/model-deployment/my-sklearn-model-1
- Refer to the README file in the directory for further instructions.

 cd /work/model-deployment/my-sklearn-model-1
 ls
README.md               deploy.py               infer.py                requirements.txt        server.py
  • Follow the instructions in the README.md to deploy the code and get an endpoint for the model.
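
Once the model is deployed, you can send requests to its endpoint. The snippet below is only an illustrative sketch: the endpoint URL is a placeholder, and the /predict route and payload shape are assumptions, since the actual route and request/response schema are defined by the generated server.py and documented in the README.

import requests

# Hypothetical endpoint URL; replace with the endpoint shown on the
# TrueFoundry dashboard after deployment.
ENDPOINT_URL = "https://<your-model-endpoint>/predict"

# Assumed payload shape: a list of feature rows matching the schema
# inferred at logging time (two features per row in this example).
response = requests.post(ENDPOINT_URL, json={"inputs": [[-1, -1], [2, 1]]})
print(response.status_code, response.json())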

Common Issues and FAQ