πŸ‘

What you'll learn

  • How to deploy your applications as a Service via the Truefoundry User Interface.
  • The basic configuration options available for your Service deployments.

After you complete this guide, you will have a successfully deployed FastAPI Service, which will look like this:

Let's get started with the steps for your first deployment:

Step 1: Logging into Your Truefoundry Account

To begin, log in to your Truefoundry account using your credentials. Upon successful login, you will be directed to the Deployments Dashboard for services.

Step 2: Initiating Deployment via UI

  1. Navigate to the "Test out Service Deployment" section on the Deployments Dashboard. If there are existing deployments, you may need to scroll down slightly to find it.
  2. Click on the "Deploy using UI" button to initiate the deployment process.

Step 3: Workspace Selection

A modal will appear with a "Where would you like to deploy?" option. Click on the "Search Workspace" bar, and select the desired workspace for your deployment.

πŸ“˜

Note:

If you don't have a workspace yet, you can create one by clicking the Create New Workspace link and following this Documentation, or contact your cluster admin if you run into any issues.

Once selected, click the "Next Step" button to continue with the deployment.

Step 4: Configure Deployment

Now, you'll come across a deployment form featuring a range of configuration choices. These choices play a significant role in how your service gets deployed.

To make your first deployment smoother (thanks to the "Deploy using UI" button), we've already taken care of the basic settings. Nevertheless, it's important to know that you might want to fine-tune these options to suit your needs.

The next section gives a brief overview of what these options mean:

Explanation of Deployment Options

  • Name:
    The Name field allows you to assign a unique identifier to the service within your workspace.
  • Build Source (Deploy a Docker Image or Build and Deploy from Source Code):
    The Build Source option lets you specify whether you want to deploy a pre-built Docker image or build and deploy from your source code.
  • Build Specification (Build Using Dockerfile or Using Buildpack):
    The Build Specification helps determine the method you want to use for building your service. You can choose to build using a Dockerfile, providing custom configuration for your environment, or you can opt for a Buildpack, which automates the build process based on your application's requirements.
  • Ports:
    The Ports field enables you to define the communication channels that your service will utilize. Specify the port numbers that your service will listen on to receive incoming requests.
  • Environment Variables:
    Environment Variables allow you to configure runtime settings for your service. You can provide key-value pairs that influence how your service behaves when it's running, such as database connection strings, API keys, or other configuration parameters. A short sketch after this list shows one way a service can read these values at runtime.
  • Resources:
    • CPU:
      The CPU resource allocation determines how much processing power your service can utilize. You can specify the desired amount of CPU capacity that your service can consume to perform its tasks efficiently.
    • Memory:
      Memory allocation dictates the amount of RAM your service can use. Define the memory limit to ensure that your service has enough memory available for its operations without causing performance issues.
    • Storage:
      Storage allocation refers to the amount of disk space your service can access. Specify the storage limit to ensure that your service has sufficient space to store files, logs, and other data it generates during its operation.
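
To make the Ports and Environment Variables settings more concrete, here is a minimal, hypothetical sketch of a FastAPI service that reads its listening port and a configuration value from environment variables. The variable names SERVICE_PORT and MODEL_NAME are illustrative placeholders, not values Truefoundry sets automatically; whatever port you bind to must match the port number you enter in the Ports field.

import os

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    # MODEL_NAME is a hypothetical key you could add under Environment Variables
    # in the deployment form; it falls back to a default if it is not set.
    return {"status": "ok", "model": os.environ.get("MODEL_NAME", "default-model")}

if __name__ == "__main__":
    # SERVICE_PORT is likewise illustrative; the port used here should match
    # the one specified in the Ports section of the deployment form.
    port = int(os.environ.get("SERVICE_PORT", "8000"))
    uvicorn.run(app, host="0.0.0.0", port=port)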

Step 5: Submit the form

Now that all the deployment options are filled in, click the "Create" button. This will start the deployment process. After clicking "Create," your Service Dashboard will resemble the following:

While your deployment is in progress, you can hover over the spinner icon to check its status. After a brief moment, the deployment should become active, and your Service Dashboard will look like this:

Congratulations! You have successfully deployed your FastAPI Service.

Interacting with your Service

Once your service has been deployed successfully, you can begin making requests to it:

  1. Click on your specific service within the dashboard. This will open the dedicated dashboard for your service.
    In the dashboard, you'll find the endpoint URL for your service. This endpoint is where your deployed service can be accessed, allowing you to interact with your deployed machine learning model.
  2. Copy this endpoint URL; you'll need it to make requests.
  3. You can now use this endpoint URL to make predictions. For example, if you want to predict classes based on the following data:

     sepal_length | sepal_width | petal_length | petal_width
     7.0          | 3.2         | 4.7          | 1.4

  4. Here's a Python code snippet to send a request with the above data using the endpoint URL:
import json
from urllib.parse import urljoin

import requests

# Replace this with the value of your endpoint URL
ENDPOINT_URL = "<YOUR_ENDPOINT_URL>"  # e.g., https://your-service-endpoint.com/

response = requests.post(
    urljoin(ENDPOINT_URL, 'predict'),
    json={
        "sepal_length": 7.0,
        "sepal_width": 3.2,
        "petal_length": 4.7,
        "petal_width": 1.4,
    }
)

result = response.json()
print("Predicted Classes:", result["prediction"])

Running this code will print the predicted classes:

Predicted Classes: 0
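
If the call fails, checking the HTTP status before parsing the body makes debugging easier. This optional variation reuses the same ENDPOINT_URL placeholder and imports as the snippet above:

response = requests.post(
    urljoin(ENDPOINT_URL, 'predict'),
    json={
        "sepal_length": 7.0,
        "sepal_width": 3.2,
        "petal_length": 4.7,
        "petal_width": 1.4,
    }
)
# Raise a clear exception on a non-2xx response instead of failing later
# with a confusing JSON decoding error.
response.raise_for_status()
print("Predicted Classes:", response.json()["prediction"])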