Model Deployment

Deploy a model as a Service without any Service code

Model Deployment allows you to take an already trained machine learning model and deploy it as a Service for real-time inference without writing any code for the Service itself!

🚧 Note

This feature is in Beta: the input parameters and Python APIs can change.
We also plan to add more model formats and frameworks, additional model stores, and easier inference and testing tools.

We would love to hear from you if you have any feedback or run into any issues.

At the time of writing, we support the following frameworks (a minimal scikit-learn example of producing a deployable model file follows the list):

  • scikit-learn (sklearn)
  • XGBoost
  • LightGBM
  • TensorFlow / Keras
  • PyTorch
  • Hugging Face Pipelines for public models on the Hugging Face Hub
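
Whatever the framework, the input to Model Deployment is a serialized model file produced with the framework's own tooling. As an illustration, here is a minimal sketch of producing such a file for a scikit-learn model using joblib; the dataset, model choice, and file name are arbitrary picks for the example.

```python
# Train a small scikit-learn model and serialize it with joblib.
# The dataset, model, and file name are illustrative only; the resulting
# file is the artifact you would supply from one of the sources below.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# The saved .joblib file is the deployable model artifact.
joblib.dump(model, "model.joblib")
```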

The models themselves can be provided using:

  • Truefoundry Model Registry
  • AWS S3 Buckets (public objects only; see the upload sketch after this list)
  • HTTP links (publicly downloadable)
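
For the S3 and HTTP options, the model file just needs to be downloadable without authentication. Below is a minimal sketch using boto3, assuming a bucket named my-model-artifacts that you own and have already configured to allow public reads; the bucket name, key, and region in the URL are placeholders.

```python
# Upload the serialized model to S3 so it can be referenced by its URL.
# Bucket name and key are assumptions for this example; the object (or the
# bucket) must allow public reads so the platform can download it.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="model.joblib",
    Bucket="my-model-artifacts",
    Key="iris/model.joblib",
)

# Publicly downloadable HTTP link for the uploaded object.
print("https://my-model-artifacts.s3.amazonaws.com/iris/model.joblib")
```

The resulting object URL can then be supplied as the model source when deploying.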