module Runs
class MlFoundryRun
property dashboard_link
Get the Mlfoundry dashboard link for the current run
property fqn
Get fqn for the current run
property ml_repo
Get the name of the ml_repo that the current run is part of
property run_id
Get run_id for the current run
property run_name
Get run_name for the current run
property status
Get status for the current run
function delete
This function permanently deletes the run.
Example:
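A minimal sketch of deleting a run. The client setup via get_client() and create_run() comes from the wider SDK rather than this section, and the ml_repo and run names are hypothetical:

```python
from truefoundry.ml import get_client

client = get_client()  # assumes you are already authenticated with the platform
run = client.create_run(ml_repo="my-ml-repo", run_name="cleanup-demo")  # hypothetical names

run.end()     # finish the run
run.delete()  # permanently delete the run and everything logged under it
```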
function end
End a run.
This function marks the run as FINISHED.
Example:
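A minimal sketch of ending a run, assuming the run handle is created via get_client().create_run(...) (not covered in this section) and that the ml_repo and run names are hypothetical:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="training-run")  # hypothetical names
run.log_params({"learning_rate": 0.01})
run.end()  # marks the run as FINISHED
```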
function get_metrics
Get metrics logged for the current run grouped by metric name.
Args:
metric_names (Optional[Iterable[str]], optional): A list of metric names for which the logged metrics will be fetched. If not passed, all metrics logged under the run are returned.
Returns:
Dict[str, List[Metric]]: A dictionary mapping each metric name to the list of metrics logged under that name.
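A hedged sketch of fetching logged metrics. The client setup and names are hypothetical, and the Metric entity is assumed to expose step and value attributes:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="metrics-demo")  # hypothetical names
run.log_metrics({"accuracy": 0.90}, step=1)
run.log_metrics({"accuracy": 0.93}, step=2)

metrics = run.get_metrics(metric_names=["accuracy"])
for name, metric_list in metrics.items():
    for metric in metric_list:
        print(name, metric.step, metric.value)  # assumed attribute names
run.end()
```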
function get_params
Get all the params logged for the current run.
Returns:
Dict[str, str]: A dictionary containing the parameters. The keys in the dictionary are parameter names and the values are corresponding parameter values.
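A minimal sketch of reading back logged parameters; the client setup and names are hypothetical:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="params-demo")  # hypothetical names
run.log_params({"learning_rate": 0.01, "epochs": 5})

params = run.get_params()
print(params)  # values come back as strings, e.g. {"learning_rate": "0.01", "epochs": "5"}
run.end()
```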
function get_tags
Returns all the tags set for the current run.
Returns:
Dict[str, str]: A dictionary containing tags. The keys in the dictionary are tag names and the values are corresponding tag values.
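A minimal sketch of setting and reading tags; the client setup and names are hypothetical:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="tags-demo")  # hypothetical names
run.set_tags({"owner": "data-team", "stage": "dev"})

print(run.get_tags())  # {"owner": "data-team", "stage": "dev"} plus any system tags
run.end()
```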
function list_artifact_versions
Get all the versions of an artifact from a particular run to download their contents or load them in memory
Args:
artifact_type: Type of the artifact you want
Returns:
Iterator[ArtifactVersion]: An iterator that yields non-deleted artifact versions of an artifact under the given run, sorted in descending order of version number.
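A hedged sketch of iterating over artifact versions. The client setup and names are hypothetical, calling the method without artifact_type is assumed to use the default type, and the fqn attribute on ArtifactVersion is an assumption:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="artifacts-demo")  # hypothetical names

for artifact_version in run.list_artifact_versions():
    print(artifact_version.fqn)  # assumed attribute
run.end()
```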
function list_model_versions
Get all the versions of a model from a particular run to download their contents or load them in memory
Returns:
Iterator[ModelVersion]: An iterator that yields non-deleted model versions under the given run, sorted in descending order of version number.
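A hedged sketch of iterating over model versions logged from a run. The client setup and names are hypothetical, and the name and version attributes on ModelVersion are assumptions:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="models-demo")  # hypothetical names

for model_version in run.list_model_versions():
    print(model_version.name, model_version.version)  # assumed attributes
run.end()
```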
function log_artifact
Logs an artifact for the current ML Repo.
An artifact is a list of local files and directories. This function packs the files and directories given in artifact_paths and uploads them to the remote storage linked to the experiment.
Args:
name (str): Name of the artifact. If an artifact with this name already exists under the current ML Repo, the logged artifact will be added as a new version under that name. If no artifact exists with the given name, the given artifact will be logged as version 1.
artifact_paths (List[truefoundry.ml.ArtifactPath], optional): A list of pairs of (source path, destination path) to add files and folders to the artifact version contents. The first member of the pair should be a file or directory path and the second member should be the path inside the artifact contents to upload to.
description (Optional[str], optional): Arbitrary text up to 1024 characters to store as description. This field can be updated at any time after logging. Defaults to None.
metadata (Optional[Dict[str, Any]], optional): Arbitrary JSON-serializable dictionary to store metadata. For example, you can use this to store metrics, params, notes. This field can be updated at any time after logging. Defaults to None.
step (int): Step/iteration at which the version is being logged. Defaults to 0.
Returns:
truefoundry.ml.ArtifactVersion: An instance of ArtifactVersion that can be used to download the files, or update attributes like description, metadata.
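A hedged sketch of logging an artifact. The client setup, names, and local file paths are hypothetical, and the positional (source, destination) form of the ArtifactPath constructor is an assumption based on the description above:

```python
from truefoundry.ml import get_client, ArtifactPath

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="artifact-demo")  # hypothetical names

# "model.joblib" and "config.json" are hypothetical local files
artifact_version = run.log_artifact(
    name="my-artifact",
    artifact_paths=[
        ArtifactPath("model.joblib", "model/"),   # upload into the "model/" folder
        ArtifactPath("config.json", "config.json"),
    ],
    description="model and config from the demo run",
    metadata={"accuracy": 0.93},
)
run.end()
```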
function log_images
Log images under the current run at the given step.
Use this function to log images for a run. The PIL package is needed to log images. To install the PIL package, run pip install pillow.
Args:
images (Dict[str, "truefoundry.ml.Image"]): A map of string image key to an instance of the truefoundry.ml.Image class. The image key should only contain alphanumeric characters, hyphens (-) or underscores (_). For a single key and step pair, only one image can be logged.
step (int, optional): Training step/iteration for which the images should be logged. Default is 0.
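A hedged sketch of logging an image. The client setup, names, and the local image file are hypothetical, and the data_or_path keyword of the Image constructor is an assumption:

```python
from truefoundry.ml import get_client, Image

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="images-demo")  # hypothetical names

# "sample.png" is a hypothetical local image file; pillow must be installed
run.log_images({"input-sample": Image(data_or_path="sample.png")}, step=1)
run.end()
```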
function log_metrics
Log metrics for the current run.
A metric is defined by a metric name (such as “training-loss”) and a floating point or integral value (such as 1.2). A metric is associated with a step which is the training iteration at which the metric was calculated.
Args:
metric_dict (Dict[str, Union[int, float]]): A metric name to metric value map. The metric value should be either float or int. This should be a non-empty dictionary.
step (int, optional): Training step/iteration at which the metrics present in metric_dict were calculated. If not passed, 0 is set as the step.
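A minimal sketch of logging a metric per training step; the client setup, names, and the toy loss computation are hypothetical:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="metrics-demo")  # hypothetical names

for epoch in range(3):
    train_loss = 1.0 / (epoch + 1)  # placeholder for a real training loop
    run.log_metrics({"training-loss": train_loss}, step=epoch)
run.end()
```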
function log_model
Serialize and log a versioned model under the current ML Repo. Each logged model generates a new version associated with the given name and linked to the current run. Multiple versions of the model can be logged as separate versions under the same name.
Args:
name (str): Name of the model. If a model with this name already exists under the current ML Repo, the logged model will be added as a new version under that name. If no models exist with the given name, the given model will be logged as version 1.
model_file_or_folder (str): Path to either a single file or a folder containing model files. This folder is usually created using serialization methods of libraries or frameworks, e.g. joblib.dump, model.save_pretrained(...), torch.save(...), model.save(...).
framework (Union[enums.ModelFramework, str]): Model framework, e.g. pytorch, sklearn, tensorflow. The full list of supported frameworks can be found in truefoundry.ml.enums.ModelFramework. Can also be None when model is None.
description (Optional[str], optional): Arbitrary text up to 1024 characters to store as description. This field can be updated at any time after logging. Defaults to None.
metadata (Optional[Dict[str, Any]], optional): Arbitrary JSON-serializable dictionary to store metadata. For example, you can use this to store metrics, params, notes. This field can be updated at any time after logging. Defaults to None.
step (int): Step/iteration at which the model is being logged. Defaults to 0.
Returns:
truefoundry.ml.ModelVersion: An instance of ModelVersion that can be used to download the files, load the model, or update attributes like description, metadata, schema.
- Sklearn
- Huggingface Transformers
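A hedged sketch of the sklearn case listed above. The client setup, names, and the joblib-based serialization are assumptions, and the metadata value is illustrative only:

```python
import joblib
from sklearn.linear_model import LogisticRegression
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="sklearn-demo")  # hypothetical names

# train and serialize a small sklearn model
model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])
joblib.dump(model, "model.joblib")

model_version = run.log_model(
    name="my-sklearn-model",
    model_file_or_folder="model.joblib",
    framework="sklearn",
    metadata={"accuracy": 0.93},
)
run.end()
```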
function log_params
Logs parameters for the run.
Parameters or hyperparameters can be thought of as configurations for a run. For example, the type of kernel used in an SVM model is a parameter. A parameter is defined by a name and a string value. Parameters are also immutable; the value logged for a parameter name cannot be overwritten.
Args:
param_dict (ParamsType): A parameter name to parameter value map. Parameter values are converted to str.
flatten_params (bool): Flatten hierarchical dict, e.g. {'a': {'b': 'c'}} -> {'a.b': 'c'}. All the keys will be converted to str. Defaults to False.
- Logging parameters using a dict
- Logging parameters using an argparse Namespace object
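A hedged sketch covering both cases listed above. The client setup and names are hypothetical, and passing an argparse Namespace directly to log_params is assumed based on the example titles above:

```python
import argparse
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="params-demo")  # hypothetical names

# logging parameters from a plain dict; nested keys are flattened to "optimizer.lr"
run.log_params({"epochs": 5, "optimizer": {"lr": 0.01}}, flatten_params=True)

# logging parameters from an argparse Namespace
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
args = parser.parse_args([])
run.log_params(args)
run.end()
```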
function log_plots
Log custom plots under the current run at the given step.
Use this function to log custom matplotlib or plotly plots.
Args:
plots (Dict[str, Union["matplotlib.pyplot", "matplotlib.figure.Figure", "plotly.graph_objects.Figure", Plot]]): A map of string plot key to the plot or figure object. The plot key should only contain alphanumeric characters, hyphens (-) or underscores (_). For a single key and step pair, only one plot can be logged.
step (int, optional): Training step/iteration for which the plots should be logged. Default is 0.
- Logging a plotly figure
- Logging a matplotlib plt or figure
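A hedged sketch of the matplotlib case; the client setup, names, and the toy figure are hypothetical:

```python
import matplotlib.pyplot as plt
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="plots-demo")  # hypothetical names

# build a simple matplotlib figure to log
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [1.0, 0.6, 0.4])
ax.set_title("training-loss")

run.log_plots({"loss-curve": fig}, step=1)
run.end()
```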
function set_tags
Set tags for the current run.
Tags are “labels” for a run. A tag is represented by a tag name and value.
Args:
tags (Dict[str, str]): A tag name to value map. Tag names cannot start with mlf., as the mlf. prefix is reserved for truefoundry. Tag values will be converted to str.
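A minimal sketch of setting tags; the client setup, names, and tag values are hypothetical:

```python
from truefoundry.ml import get_client

client = get_client()
run = client.create_run(ml_repo="my-ml-repo", run_name="tags-demo")  # hypothetical names

run.set_tags({"owner": "data-team", "environment": "dev"})  # tag names must not start with "mlf."
run.end()
```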