For Scikit-learn models, TrueFoundry provides a seamless deployment experience by generating FastAPI / Triton inference code for a logged Scikit-learn model.

Log the SkLearn model

Below is an example of logging a model trained using Scikit-learn:

from truefoundry.ml import get_client, SklearnFramework, sklearn_infer_schema
import joblib
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Define training data
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])

# Create and train the model
clf = make_pipeline(StandardScaler(), SVC(gamma="auto"))
model = clf.fit(X, y)

# Save the model
joblib.dump(clf, "sklearn-pipeline.joblib")

# Initialize the TrueFoundry client
client = get_client()

# Infer model schema
model_schema = sklearn_infer_schema(
    model_input=X, model=model, infer_method_name="predict"
)

# Log the model
model_version = client.log_model(
    ml_repo="my-classification-project",
    name="my-sklearn-model",
    model_file_or_folder="sklearn-pipeline.joblib",
    # To make the model deployable and generate the inference script,
    # the model file and schema (with the inference method name) are required.
    framework=SklearnFramework(
        model_filepath="sklearn-pipeline.joblib",
        model_schema=model_schema,
    ),
    # Auto-captures the current environment details (e.g., python_version, pip_packages) 
    # based on the framework. If you want to override, you can add this block:
    # environment=ModelVersionEnvironment(
    #     python_version="3.10",
    #     pip_packages=[
    #         "joblib==1.4.2",
    #         "numpy==1.26.4",
    #         "pandas==2.2.3",
    #         "scikit-learn==1.6.1",
    #     ],
    # ),
)

# Output the model's Fully Qualified Name (FQN)
print(f"Model version logged successfully: {model_version.fqn}")
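Before logging, it can be worth sanity-checking that the serialized pipeline round-trips correctly through joblib. A minimal sketch, reusing the training data from the example above (the reload step is illustrative and not part of the TrueFoundry API):

```python
import joblib
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Recreate the pipeline from the example above
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])
clf = make_pipeline(StandardScaler(), SVC(gamma="auto")).fit(X, y)
joblib.dump(clf, "sklearn-pipeline.joblib")

# Load the artifact back and verify it still predicts as expected
loaded = joblib.load("sklearn-pipeline.joblib")
print(loaded.predict(np.array([[-1, -1], [2, 1]])))  # expect [1 2]
```

If the reloaded model fails here, it will also fail inside the generated inference service, so this catches serialization problems early.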

View and manage models in Model Registry

  • Access framework details such as the serialization format, model schema, and inference method.

Deploy the model

Once the model is deployable, you can start the deployment flow directly using the CLI.

Navigate to the Model Registry

  • Locate the desired model in the list and click on the Deploy button
  • Select the workspace for deployment, then click the copy icon to use the generated CLI command and initialize the model deployment package.

This generates the deployment code, which you can push to your Git repository and then deploy as a service from that repo.


Common Model Deployment Issues and Troubleshooting Guide