Deploy a Job with Parameters


What you'll learn

  • Creating training code with the option to pass hyperparameters as arguments
  • Deploying the training code as a job and adding the arguments as Params

This guide shows you how to deploy training code as a job via servicefoundry and pass different hyperparameters as arguments after deployment.

After you complete the guide, you will have a successfully deployed job. Your job's deployment dashboard will look like this:

You will also be able to change the hyperparameters by giving values for the different parameters in the UI.

Project structure

To complete this guide, you are going to create the following files:

  • train.py : contains our training code
  • requirements.txt : contains our dependencies
  • deploy.py / deploy.yaml : contains our deployment code / deployment configuration (depending on whether you choose to use our Python SDK or create a YAML file)

Your final file structure is going to look like this:

β”œβ”€β”€ / deploy.yaml
└── requirements.txt

As you can see, all of these files are created in the same folder/directory.

Step 1: Implement the training code

The first step is to create a job that trains a scikit-learn model on the Iris dataset.

We start with a train.py containing our training code and a requirements.txt with our dependencies.

├── train.py
└── requirements.txt

import argparse
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# NOTE: You can pass these configurations via command line
# arguments, a config file, or environment variables.
parser = argparse.ArgumentParser()
parser.add_argument("--kernel", type=str, required=True, help="enter what kernel SVC should use")
parser.add_argument("--C", type=float, required=True, help="enter what C value SVC should use")
args = parser.parse_args()

X, y = load_iris(as_frame=True, return_X_y=True)
X = X.rename(columns={
    "sepal length (cm)": "sepal_length",
    "sepal width (cm)": "sepal_width",
    "petal length (cm)": "petal_length",
    "petal width (cm)": "petal_width",
})

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
clf = SVC(C=args.C, kernel=args.kernel)
clf.fit(X_train, y_train)
print(classification_report(y_true=y_test, y_pred=clf.predict(X_test)))
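Before deploying, you can sanity-check the argument parsing locally. A minimal standalone sketch: the parser mirrors the one in the training code above, and passing an explicit argv list lets you exercise it without touching the real command line.

```python
import argparse

# Mirror of the training script's parser, exercised with an explicit
# argv list instead of the real command line.
parser = argparse.ArgumentParser()
parser.add_argument("--kernel", type=str, required=True)
parser.add_argument("--C", type=float, required=True)

args = parser.parse_args(["--kernel", "rbf", "--C", "0.5"])
print(args.kernel, args.C)  # rbf 0.5
```

Note that `type=float` on `--C` means the string "0.5" arrives in your code already converted to a float.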

Click on the Open Recipe below to understand the train.py code.

Our requirements.txt lists the dependencies:

pandas
scikit-learn
servicefoundry  # for deploying our job

Step 2: Deploying as a job

You can deploy jobs on TrueFoundry programmatically, either using our Python SDK or via a YAML file.

So now you can choose between creating a deploy.py file, which uses our Python SDK, or creating a deploy.yaml configuration file and then using the servicefoundry deploy command.

Via python SDK

File Structure

├── train.py
├── deploy.py
└── requirements.txt


In the code below, the workspace FQN is passed via the --workspace_fqn command-line argument.

import argparse
import logging

from servicefoundry import Build, Job, Param, PythonBuild

logging.basicConfig(level=logging.INFO)

parser = argparse.ArgumentParser()
parser.add_argument("--workspace_fqn", required=True, type=str)
args = parser.parse_args()

# First we define how to build our code into a Docker image
image = Build(
    build_spec=PythonBuild(
        command="python train.py --C {{c_val}} --kernel {{kernel}}",
        requirements_path="requirements.txt",
    )
)

# Then we define the job, declaring c_val and kernel as Params so that
# their values can be set at trigger time
job = Job(
    name="iris-train-args-job",
    image=image,
    params=[
        Param(name="c_val", description="enter what c value SVC should use", default=1),
        Param(name="kernel", description="enter what kernel SVC should use", default="poly"),
    ],
)
job.deploy(workspace_fqn=args.workspace_fqn)

Follow the recipe below to understand the deploy.py file:

To deploy the job using the Python SDK, run:

python deploy.py --workspace_fqn <YOUR WORKSPACE FQN HERE>

Via YAML file

File Structure

β”œβ”€β”€ deploy.yaml
└── requirements.txt


name: iris-train-args-job
type: job
image:
  type: build
  build_source:
    type: local
  build_spec:
    type: tfy-python-buildpack
    command: python train.py --C {{c_val}} --kernel {{kernel}}
    requirements_path: requirements.txt
params:
  - name: c_val
    description: enter what c value SVC should use
    default: 1
  - name: kernel
    description: enter what kernel SVC should use
    default: poly

Follow the recipe below to understand the deploy.yaml file:

To deploy the job using the CLI, run:

servicefoundry deploy --workspace-fqn YOUR_WORKSPACE_FQN --file deploy.yaml

Run the above command from the same directory containing the train.py and requirements.txt files.


.tfyignore files

If there are any files you don't want copied to the workspace, such as data files or other redundant files, you can list them in a .tfyignore file.
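As an illustration (the entries below are hypothetical), a .tfyignore file lists one pattern per line, in the same style as a .gitignore file:

```
# .tfyignore — example entries
data/
*.csv
__pycache__/
```

Place the file in the same directory you deploy from, alongside train.py and requirements.txt.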

Running a Job with different hyperparameters

Via Terminal

To trigger the job via the terminal, you can run the following command:

servicefoundry trigger job --application-fqn YOUR_APPLICATION_FQN --params '{"c_val":"2", "kernel":"poly"}'

Via Python SDK

To trigger the job via the Python SDK, you can run the following code:

from servicefoundry import trigger_job

trigger_job(
    application_fqn="YOUR_APPLICATION_FQN",
    params={"c_val": "2", "kernel": "poly"},
)

Via User Interface

After you run the command given above, you will get a link at the end of the output. The link will take you to your application's dashboard.

Once the build is complete, you can trigger your job by clicking the button highlighted in red:

Job details


Now the following tab will open up.

Here you can set values for the hyperparameters, and they will be reflected in the final parametrized command.

Once you set the values, click Trigger Job. This will take you to the Runs tab.

To see the results of your training, click on the Logs button.

Creating a Parametrized Job from the UI

When creating a parametrized job from the UI, you will see the following setting:

Once you toggle this, you can enter your Parameters:

You also need to set up the Entrypoint Override.

See Also