Deploying a Job

You can deploy jobs on Truefoundry using our Python SDK, a YAML file, or our UI.

In this guide, we will deploy a training job that trains a classifier on the sklearn iris dataset and logs it using Truefoundry's Model Registry.

You can find the complete code for this example here


Before we start, we will need:

  1. Our Deployments SDK - servicefoundry. You can follow the instructions here to install and set it up.

  2. A Workspace FQN - We can use an existing workspace or create one from the Workspaces page. If you already have a Workspace you can use that. Copy and note down the workspace FQN.

  3. Since we are pushing our model to the Truefoundry Model Registry, we will need to add our Truefoundry API Key as a Secret.

    1. Create and copy an API Key from the Settings page.

    2. Visit Secrets dashboard and Create a new Secret Group.

    3. Create a new Secret in this Secret group and Paste your API Key from Step 1.

    4. Once saved, note down the Secret FQN by clicking the Copy button beside the value. It will look like the following: <username>:<secret-group-name>:<secret-name> (e.g. user:iris-train-job:MLF_API_KEY)
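Note that the Secret FQN itself is not the API key; when the Job runs, Truefoundry resolves the `tfy-secret://` reference and injects the secret's actual value into the environment variable configured on the Job. A minimal sketch of how code running inside the Job would see it (the dummy fallback value here is purely for local illustration):

```python
import os

# On Truefoundry, the Job's `env` config maps MLF_API_KEY to the secret
# reference (tfy-secret://<YOUR_SECRET_FQN>); inside the container the
# resolved secret value appears as a plain environment variable.
# For a local dry run we fall back to a dummy value.
os.environ.setdefault("MLF_API_KEY", "dummy-key-for-local-testing")

api_key = os.environ["MLF_API_KEY"]
```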

Code and Dependencies

We will continue working with the example we introduced in Job Introduction. We start with a requirements.txt file listing our dependencies and a Python script containing our training code (assumed here to be named train.py):

├── requirements.txt
└── train.py



mlfoundry       # for experiment tracking and model registry
servicefoundry  # for deploying our job

This file fetches the data, trains the model, and pushes it to the model registry.

import mlfoundry
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

X, y = load_iris(as_frame=True, return_X_y=True)
X = X.rename(columns={
        "sepal length (cm)": "sepal_length",
        "sepal width (cm)": "sepal_width",
        "petal length (cm)": "petal_length",
        "petal width (cm)": "petal_width",
})

# NOTE: You can pass these configurations via command line
# arguments, config file, or environment variables.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
pipe = Pipeline([("scaler", StandardScaler()), ("svc", SVC())])
pipe.fit(X_train, y_train)
print(classification_report(y_true=y_test, y_pred=pipe.predict(X_test)))

# Here we are using Truefoundry's Model Registry; you can push the model to any storage
run = mlfoundry.get_client().create_run(project_name="iris-classification")
model_version = run.log_model(
    name="iris-classifier",
    model=pipe,
    framework="sklearn",
    description="SVC model trained on initial data",
)
print(f"Logged model: {model_version.fqn}")
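A side note on the `stratify=y` argument used in the split above: stratified splitting preserves the class proportions of the full dataset in both splits, which matters for a small balanced dataset like iris (150 rows, 50 per class). A quick self-contained check:

```python
from collections import Counter

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(as_frame=True, return_X_y=True)
_, _, _, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The 30-row test split keeps the 1:1:1 class balance of the full dataset:
# each of the 3 iris classes appears exactly 10 times.
counts = Counter(y_test)
print(counts)
```

Without `stratify=y`, a random 20% split could over- or under-represent a class, skewing the classification report.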

Deploying as a Job

We can deploy either using the Python API or using a YAML file and the servicefoundry deploy command.

Create a deployment script (assumed here to be named deploy.py) in the same directory containing the training code and requirements.txt. Replace <YOUR_SECRET_FQN> and <YOUR_WORKSPACE_FQN> with the actual values.

├── deploy.py
└── requirements.txt

# Replace `<YOUR_SECRET_FQN>` with the actual value.
import logging
import argparse
from servicefoundry import Build, Job, PythonBuild

logging.basicConfig(level=logging.INFO)

parser = argparse.ArgumentParser()
parser.add_argument("--workspace_fqn", type=str, required=True, help="fqn of the workspace to deploy to")
args = parser.parse_args()

# First we define how to build our code into a Docker image
image = Build(
    build_spec=PythonBuild(
        command="python train.py",  # assuming the training script is named train.py
        requirements_path="requirements.txt",
    )
)
job = Job(
    name="iris-train-job",
    image=image,
    env={"MLF_API_KEY": "tfy-secret://<YOUR_SECRET_FQN>"},
)
job.deploy(workspace_fqn=args.workspace_fqn)
Create a `servicefoundry.yaml` file in the same directory containing the training code and `requirements.txt`. Replace `<YOUR_SECRET_FQN>` with the actual value.

├── requirements.txt
└── servicefoundry.yaml

name: iris-train-job
components:
- name: iris-train-job
  type: job
  image:
    type: build
    build_source:
      type: local
    build_spec:
      type: tfy-python-buildpack
      command: python train.py  # assuming the training script is named train.py
      requirements_path: requirements.txt
  env:
    MLF_API_KEY: "tfy-secret://<YOUR_SECRET_FQN>"

To deploy the job with the Python API, run the deployment script (assumed here to be saved as deploy.py) and provide the workspace FQN:

python deploy.py --workspace_fqn <YOUR_WORKSPACE_FQN>

To deploy the training job with YAML, use the command below:

servicefoundry deploy --workspace-fqn <YOUR_WORKSPACE_FQN>


Run the above command from the same directory that contains your code and the requirements.txt file.

On successful deployment, the Job will be created and run immediately.

We can now visit our Applications page to check the Build status, Build Logs, and Runs History, and to monitor the progress of runs. See the Monitoring and Debugging guide for more details.

Configuring a Job to not run immediately

By default, a Job runs immediately after deployment. We can also configure a Job so that it does not run immediately once it is deployed.

# We set `Manual(run=False)` as the trigger for our Job

from servicefoundry import Job, Manual

job = Job(
    name="iris-train-job",
    # ... other fields (image, env) same as before ...
    trigger=Manual(run=False),
)
# We set `type: manual` and `run: false` as the trigger for our Job
name: iris-train-job
components:
- name: iris-train-job
  type: job
  # ... other fields (image, env) same as before ...
  trigger:
    type: manual
    run: false

Re-Running a Job manually

We can find the link to the Job Details page on the Deployments page of the Truefoundry dashboard.


Jobs list

We can re-trigger a job manually by clicking the Trigger Job button on the Job Details page.


Job details

See Also