TrueFoundry is a platform that helps you train, deploy and monitor models very easily.
It is an integrated solution, so data scientists don't need to juggle multiple tools and can deploy their models with flexibility and independence while following best practices and policies.
Currently, there is a lot of interdependence between the data science, data engineering, and DevOps teams, leading to delays in development and deployment. We experienced this problem first-hand and have been working on inverting the flow so that ML models reach production much faster.
TrueFoundry provides a single platform to log and track your experiments and models, deploy them in whatever configuration you want and then monitor their performance. The key components are:
Track your experiments and models: TrueFoundry's tracking library, mlfoundry, allows logging data, metrics, plots, images, artifacts, and models for machine learning experiments, all of which can be visualized from an intuitive dashboard with RBAC support.
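As a rough sketch, experiment tracking with mlfoundry typically follows a client/run pattern; the project name, parameter, and metric values below are placeholders, and the exact call signatures should be checked against the current mlfoundry API reference:

```python
def log_training_run(accuracy: float, learning_rate: float) -> None:
    """Sketch: log params and metrics for one experiment run via mlfoundry.

    Assumes `pip install mlfoundry` and that you have already logged in to
    TrueFoundry; "demo-project" is a hypothetical project name.
    """
    # Imported inside the function so the sketch stays self-contained.
    import mlfoundry

    client = mlfoundry.get_client()
    run = client.create_run(project_name="demo-project")
    run.log_params({"learning_rate": learning_rate})  # hyperparameters
    run.log_metrics({"accuracy": accuracy})           # evaluation results
    run.end()
```

Once a run is logged this way, its metrics and artifacts show up on the dashboard for comparison across runs.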
Deploy your models: TrueFoundry's deployment library, servicefoundry, makes it easy to deploy models on managed Kubernetes clusters. Because it builds on top of Kubernetes, it gives you complete flexibility to add any integration that works with Kubernetes.
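A deployment with servicefoundry can be sketched roughly as below; the service name, command, port, and workspace FQN are all hypothetical, and the class names reflect a common servicefoundry usage pattern rather than a guaranteed API, so consult the current servicefoundry reference before relying on them:

```python
def deploy_model_service(workspace_fqn: str) -> None:
    """Sketch: deploy a model server as a TrueFoundry service.

    Assumes `pip install servicefoundry` and an authenticated session;
    every name and value here is a placeholder for illustration.
    """
    # Imported inside the function so the sketch stays self-contained.
    from servicefoundry import Build, PythonBuild, Service

    service = Service(
        name="demo-model-service",  # hypothetical service name
        image=Build(
            build_spec=PythonBuild(
                command="uvicorn app:app --host 0.0.0.0 --port 8000",
            ),
        ),
        ports=[{"port": 8000}],
    )
    # Deploys into the given workspace on the managed Kubernetes cluster.
    service.deploy(workspace_fqn=workspace_fqn)
```

Because the service ultimately runs as Kubernetes workloads, standard Kubernetes integrations (autoscaling, ingress, secrets, etc.) remain available around it.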
Monitor your models: With TrueFoundry, you can automatically generate dashboards with model monitoring metrics such as feature value distributions, model performance statistics, and drift, just by adding a few log lines to your code. Data scientists can also log custom metrics and create alerts on their models.
Run end-to-end pipelines (on the roadmap): We are also working on support for running end-to-end pipelines using Kubeflow, Metaflow, and a few other frameworks.
TrueFoundry is hosted as a public cloud where data scientists can try it out and host their models. The public cloud provides a multi-tenant system for experiment tracking, model deployment, and monitoring. The free tier lets you run one model end-to-end with limited resources; you can purchase additional resources to run heavier workloads or multiple models.
Our public cloud is still in alpha, so we don't recommend it for production use cases yet. However, feel free to try out hobby or test projects for faster iteration and give us feedback; this will help us make the platform much better.
You can try TrueFoundry for your own personal use case here.
You can also deploy TrueFoundry on your own cloud to provide the same developer experience to all ML teams. To host TrueFoundry on your own cloud, please get in touch with us. As of now, we work well with the AWS ecosystem, but we plan to extend support to other cloud providers.