Usually, when you deploy an application, you choose the size of the ephemeral storage in the deployment form. This guarantees disk space of the requested size, but since the storage is ephemeral, all data is lost once the pod restarts. In some cases, you might want data to persist across restarts and be shared across the replicas of a service. Depending on the problem, mounting Volumes or DataMounts can be an effective solution.
- Volumes provide persistent storage that can be attached to pods, allowing them to access data that persists across restarts. This is particularly useful for storing large datasets, models, or user data that needs to be maintained across multiple replicas of the service.
- DataMounts, on the other hand, offer a lightweight alternative for storing smaller amounts of data, such as configuration files. They are directly embedded in the pod's spec and do not require the creation or management of separate volumes.
Volumes can be useful for the following use cases:
- Model Caching: We might need to cache a model after downloading it once, so that we don't incur extra download time on every restart. This also prevents each replica of the service from downloading the model separately.
- Storing user data: We might need to store data generated by the service on a disk that persists across restarts. In general, it's recommended to use a managed blob store like S3, GCS, or Azure Blob Storage for such needs, but volumes can also be useful in some scenarios.
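The model-caching pattern above can be sketched as follows. This is a minimal sketch: `download_model` is a hypothetical placeholder for however you actually fetch your weights, and `/model` is assumed to be the volume's mount path.

```python
import os

CACHE_DIR = "/model"  # assumed volume mount path


def download_model(dest_path):
    # Hypothetical placeholder: fetch the weights from your model
    # registry or object store and write them to dest_path.
    with open(dest_path, "wb") as f:
        f.write(b"model-weights")


def load_model_path(cache_dir=CACHE_DIR):
    # Download only if the file is not already in the persistent volume,
    # so restarts and additional replicas reuse the cached copy.
    path = os.path.join(cache_dir, "model.joblib")
    if not os.path.exists(path):
        os.makedirs(cache_dir, exist_ok=True)
        download_model(path)
    return path
```

Because the volume is shared and persistent, only the first caller pays the download cost; every later restart or replica finds the file already present.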
To use a persistent volume, we first need to create one and then attach it to our deployment. You can learn how to create volumes in the Creating a Volume guide.
Here, we'll explore two different methods for attaching volumes to a deployment:
- Through the User Interface (UI)
- Using the Python SDK
You can add the following to your deployment code:
```python
from servicefoundry import Build, Port, Service, DockerFileBuild, VolumeMount

service = Service(
    name="my-service",
    image=Build(build_spec=DockerFileBuild()),
    ports=[
        Port(
            host="your_host",
            port=8501
        )
    ],
+   mounts=[
+       VolumeMount(
+           mount_path="/model",  # or your desired path
+           volume_fqn="your-volume-fqn"
+       )
+   ]
)
service.deploy(workspace_fqn="YOUR_WORKSPACE_FQN")
```
Once you have attached a volume to your deployment, you can use it like any other directory. For example, if you mounted the volume at `/model`, you can access its files through the `/model` directory in your deployment.
The following example shows a simple FastAPI service that uses a mounted volume to load a model file:
```python
from fastapi import FastAPI
import joblib

app = FastAPI()

# Load the model from the mounted volume
# at the path `/model` using joblib
with open("/model/model.joblib", "rb") as f:
    model = joblib.load(f)

...
```
While environment variables offer a convenient way to inject configuration into applications, they have limitations. One limitation arises when an external system requires configuration data as a file; environment variables fall short there. For example, consider a cloud-based service like GCP Cloud Spanner: to connect your application to Cloud Spanner, you need to provide a JSON file containing your connection credentials.
For cases like this, DataMounts are a convenient way to store small amounts of text-based data directly within the pod's spec.
You can do this in two ways:
- StringDataMount: Here you pass the string data directly, and it becomes your file content.
- SecretMount: Here you pass the Secret FQN, and the content of the Secret becomes your file content.
```python
from servicefoundry import Build, Service, DockerFileBuild, Port, StringDataMount

service = Service(
    name="my-service",
    image=Build(build_spec=DockerFileBuild()),
    ports=[
        Port(
            host="your_host",
            port=8501
        )
    ],
+   mounts=[
+       StringDataMount(mount_path="/data/config.json", data='...')
+   ]
)
service.deploy(workspace_fqn="YOUR_WORKSPACE_FQN")
```
Here, instead of passing a string value, we pass the FQN of the secret (of the form `tfy-secret://user:my-secret-group:my-secret`). You can read about how to create secrets here. Truefoundry will automatically fetch the value and inject it into the mount at runtime.
```python
from servicefoundry import Build, Service, DockerFileBuild, Port, SecretMount

service = Service(
    name="my-service",
    image=Build(build_spec=DockerFileBuild()),
    ports=[
        Port(
            host="your_host",
            port=8501
        )
    ],
+   mounts=[
+       SecretMount(mount_path="/data/config.json", data="tfy-secret://user:my-secret-group:my-secret")
+   ]
)
service.deploy(workspace_fqn="YOUR_WORKSPACE_FQN")
```
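At runtime, the mounted file behaves like any other file on disk. As a sketch, assuming the `/data/config.json` mount path from the snippets above, the credentials can be read with the standard library:

```python
import json


def load_credentials(path="/data/config.json"):
    # The DataMount (string- or secret-backed) appears as a regular
    # file inside the container, so plain json.load works.
    with open(path) as f:
        return json.load(f)
```

This is how you would hand the file to a client that expects a credentials path, such as Cloud Spanner's JSON credentials file mentioned earlier.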