Deploy a Job with Manual Trigger
What you'll learn
- Deploying our training code as a job via servicefoundry
- Configuring the job so that it does not run immediately once it is deployed
This guide shows how to deploy training code as a job via servicefoundry and configure it so that it does not run immediately once it is deployed.
After you complete the guide, you will have a successfully deployed job, which you will be able to see on your Jobs deployment dashboard.
Project structure
To complete this guide, you are going to create the following files:
- train.py: contains our training code
- requirements.txt: contains our dependencies
- deploy.py / deploy.yaml: contains our deployment code / deployment configuration (depending on whether you choose to use our Python SDK or create a YAML file)
Your final file structure is going to look like this:
.
├── train.py
├── deploy.py / deploy.yaml
└── requirements.txt
As you can see, all of these files are created in the same folder/directory.
Step 1: Implement the training code
The first step is to create a job that trains a scikit-learn model on the Iris dataset.
We start with a train.py containing our training code and a requirements.txt with our dependencies.
.
├── train.py
└── requirements.txt
train.py
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

X, y = load_iris(as_frame=True, return_X_y=True)
X = X.rename(columns={
    "sepal length (cm)": "sepal_length",
    "sepal width (cm)": "sepal_width",
    "petal length (cm)": "petal_length",
    "petal width (cm)": "petal_width",
})

# NOTE: You can pass these configurations via command line
# arguments, a config file, or environment variables.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Initialize the model
clf = LogisticRegression(solver="liblinear")
# Fit the model
clf.fit(X_train, y_train)

preds = clf.predict(X_test)
print(classification_report(y_true=y_test, y_pred=preds))
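As the note in train.py mentions, configuration such as the train/test split can be passed in from outside the script. The sketch below shows one way to do that with command-line arguments; the --test_size and --random_state flags are illustrative additions and not part of this guide's train.py:

import argparse

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Hypothetical flags for the split configuration (defaults match the guide's values)
parser = argparse.ArgumentParser()
parser.add_argument("--test_size", type=float, default=0.2)
parser.add_argument("--random_state", type=int, default=42)
args = parser.parse_args()

X, y = load_iris(as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=args.test_size, random_state=args.random_state, stratify=y
)

clf = LogisticRegression(solver="liblinear")
clf.fit(X_train, y_train)
print(classification_report(y_true=y_test, y_pred=clf.predict(X_test)))

With this variant you could, for example, run python train.py --test_size 0.3 while the defaults keep the plain python train.py command used by the job unchanged.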
requirements.txt
pandas
numpy
scikit-learn
# for deploying our job
servicefoundry
Step 2: Deploying as a job
You can deploy jobs on TrueFoundry programmatically either using our Python SDK or via a YAML file.
You can therefore choose between creating a deploy.py file, which uses our Python SDK, or creating a deploy.yaml configuration file and then using the servicefoundry deploy command.
Via Python SDK
File Structure
.
├── train.py
├── deploy.py
└── requirements.txt
deploy.py
The script below reads your workspace FQN via the --workspace_fqn command-line argument, so make sure to pass your workspace FQN when running the deploy command further below.
import argparse
import logging

from servicefoundry import Build, Job, PythonBuild, Manual

logging.basicConfig(level=logging.INFO)

parser = argparse.ArgumentParser()
parser.add_argument("--workspace_fqn", required=True, type=str)
args = parser.parse_args()

# First we define how to build our code into a Docker image
image = Build(
    build_spec=PythonBuild(
        command="python train.py",
        requirements_path="requirements.txt",
    )
)

# The Manual trigger stops the job from running immediately after deployment
job = Job(
    name="iris-train-job",
    image=image,
    trigger=Manual(),
)
job.deploy(workspace_fqn=args.workspace_fqn)
To deploy the job using the Python SDK, run:
python deploy.py --workspace_fqn <YOUR WORKSPACE FQN HERE>
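The trigger=Manual() argument above is what prevents the job from running right after deployment; leaving the trigger out gives you the default behaviour of running immediately (see the End result section below). For contrast, if you instead wanted the job to run on a cron schedule, the servicefoundry SDK also provides a Schedule trigger. The sketch below is illustrative only, and the exact Schedule arguments and job name are assumptions, so check the SDK reference before using it:

from servicefoundry import Build, Job, PythonBuild, Schedule

# Same build specification as in deploy.py above
image = Build(
    build_spec=PythonBuild(
        command="python train.py",
        requirements_path="requirements.txt",
    )
)

# Assumption: Schedule accepts a cron expression; this one runs daily at 08:00 UTC
job = Job(
    name="iris-train-scheduled-job",  # hypothetical name for the scheduled variant
    image=image,
    trigger=Schedule(schedule="0 8 * * *"),
)
job.deploy(workspace_fqn="<YOUR WORKSPACE FQN HERE>")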
Via YAML file
File Structure
.
├── train.py
├── deploy.yaml
└── requirements.txt
deploy.yaml
name: iris-train-job
type: job
image:
  type: build
  build_source:
    type: local
  build_spec:
    type: tfy-python-buildpack
    command: python train.py
    requirements_path: requirements.txt
trigger:
  type: manual
To deploy the job using the YAML file, run:
servicefoundry deploy --workspace-fqn YOUR_WORKSPACE_FQN --file deploy.yaml
Run the above command from the same directory containing the train.py and requirements.txt files.
.tfyignore files
If there are any files you don't want to be copied to the workspace, such as a data file or other redundant files, you can list them in a .tfyignore file.
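For illustration, a minimal .tfyignore placed next to train.py might look like the sketch below; the entries are hypothetical and the pattern syntax is assumed to follow familiar .gitignore-style rules:

# local artifacts we don't want shipped with the job (hypothetical entries)
data/
*.csv
.venv/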
End result
By default, a job runs immediately after it is deployed. Here we configured the job to not run immediately once it is deployed, so we now need to trigger it manually.
Triggering the Job manually
We can find the link to the Job Details page on the Deployments page of the TrueFoundry dashboard.

Jobs list
We can trigger the job manually by clicking the Trigger Job button in the Job Details UI.

Job details
We can now visit our Applications page to check the build status and build logs, view the runs history, and monitor the progress of runs. See the Monitoring and Debugging guide for more details.