OWASP Top 10 Security Risks
The OWASP Top 10 is a list of the most critical security risks to web applications, maintained as a set of guidelines that help developers and security professionals identify and mitigate those risks. The entries below come from the OWASP Top 10 for LLM Applications (2025). You can read more about the OWASP Top 10 here.
LLM01:2025 Prompt Injection
Risk Involved: A Prompt Injection Vulnerability occurs when user prompts alter the intended behavior of the system, potentially leading to unauthorized actions or data exposure.
Ways of Mitigation: Implement strict input validation and sanitization to ensure that user inputs do not interfere with system prompts. Regularly update and patch systems to address known vulnerabilities.
How TrueFoundry helps: TrueFoundry's AI Gateway intercepts every request and checks it for prompt injection using guardrails. Read more about Prompt Injection.
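As an illustration of input screening, here is a minimal sketch of a keyword-based prompt-injection check. The pattern list and the `flag_prompt_injection` helper are hypothetical; production guardrails typically use trained classifiers rather than regexes.

```python
import re

# Illustrative patterns only; real guardrails use classifiers, not keyword lists.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard .* system prompt",
    r"you are now",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A gateway would run a check like this on every request before it reaches the model, rejecting or quarantining flagged inputs.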
LLM02:2025 Sensitive Information Disclosure
Risk Involved: Sensitive information can affect both the LLM and its application, leading to potential data breaches and loss of confidentiality.
Ways of Mitigation: Use encryption and access controls to protect sensitive data. Conduct regular audits and monitoring to detect unauthorized access or data leaks.
How TrueFoundry helps: You can use PII detection and masking to detect and mask sensitive information in the input. Read more about PII Detection and Masking.
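A crude way to mask PII before it reaches a model is pattern substitution. The sketch below is illustrative, not TrueFoundry's implementation: the `mask_pii` helper masks email addresses and US-style phone numbers, whereas real PII detectors usually combine regexes with NER models.

```python
import re

# Minimal sketch: mask email addresses and US-style phone numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```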
LLM03:2025 Supply Chain
Risk Involved: LLM supply chains are susceptible to various vulnerabilities, which can compromise the integrity and security of the system.
Ways of Mitigation: Implement supply chain security measures, such as vendor assessments and secure software development practices, to mitigate risks.
How TrueFoundry helps: TrueFoundry's Custom Policies and Admin Controls can be used to allow or block specific models and libraries. You can also implement custom image and artifact scanning using TrueFoundry workflows.
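An allowlist is the simplest form of such a policy. The sketch below is hypothetical (the `ALLOWED_MODELS` set and `is_model_allowed` helper are made up for illustration): it rejects any model ID not explicitly approved by an administrator.

```python
# Hypothetical admin-curated allowlist of vetted model IDs.
ALLOWED_MODELS = {
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet",
}

def is_model_allowed(model_id: str) -> bool:
    """Deny by default: only explicitly approved models pass."""
    return model_id in ALLOWED_MODELS
```

Deny-by-default is the important design choice here: a new or unvetted model is blocked until someone consciously adds it.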
LLM04:2025 Data and Model Poisoning
Risk Involved: Data poisoning occurs when pre-training, fine-tuning, or embedding data is manipulated to alter the behavior of the model.
Ways of Mitigation: Use robust data validation and anomaly detection techniques to identify and prevent data poisoning attacks.
How TrueFoundry helps: Data and model poisoning applies only to fine-tuning and training. You can restrict developers to certified foundation models provided by OpenAI, Anthropic, Google, and others. If you are fine-tuning, pre-training, or post-training your own model, we recommend testing it against evaluation frameworks and red-teaming exercises.
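One basic anomaly-detection screen is a z-score outlier check over a simple feature of the training data, such as record length. The `flag_outliers` helper below is illustrative only; real poisoning defenses also inspect content, labels, and provenance.

```python
from statistics import mean, stdev

def flag_outliers(lengths: list[int], z_threshold: float = 2.5) -> list[int]:
    """Return indices of samples whose length deviates strongly from the
    mean -- a crude screen for bulk-injected poisoned records."""
    mu, sigma = mean(lengths), stdev(lengths)
    if sigma == 0:
        return []
    return [i for i, n in enumerate(lengths) if abs(n - mu) / sigma > z_threshold]
```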
LLM05:2025 Improper Output Handling
Risk Involved: Improper Output Handling refers specifically to insufficient validation, sanitization, and encoding of output data, leading to potential security vulnerabilities.
Ways of Mitigation: Implement output encoding and validation to ensure that data is properly handled and does not introduce security risks.
How TrueFoundry helps: You can use Guardrails to validate and sanitize the output: apply PII detection and masking, content moderation, or custom guardrails before the response reaches the client.
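For output destined for a browser, the single most important step is encoding. The hypothetical `render_llm_output` helper below HTML-escapes model output so that an injected `<script>` tag renders as text instead of executing:

```python
import html

def render_llm_output(raw: str) -> str:
    """Escape model output before embedding it in HTML, so injected
    markup (e.g. <script> tags) is displayed rather than executed."""
    return html.escape(raw)
```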
LLM06:2025 Excessive Agency
Risk Involved: An LLM-based system is often granted a degree of agency, which can lead to unintended actions or decisions if not properly controlled.
Ways of Mitigation: Implement strict access controls and monitoring to ensure that the system’s actions are aligned with intended policies and procedures.
How TrueFoundry helps: AI agents interact with the world through tools and MCP servers. With TrueFoundry you can allow or block specific tools and MCP servers for specific users and applications. Read more about our MCP Gateway here.
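A per-application tool allowlist is one way to bound agency. The sketch below is hypothetical (the `TOOL_POLICY` mapping and `can_call_tool` helper are made up, in the spirit of a gateway-enforced policy): any tool call not explicitly granted to the calling application is denied.

```python
# Hypothetical per-application tool grants.
TOOL_POLICY: dict[str, set[str]] = {
    "support-bot": {"search_docs", "create_ticket"},
    "analytics-agent": {"run_query"},
}

def can_call_tool(app: str, tool: str) -> bool:
    """Deny by default: unknown apps and ungranted tools are blocked."""
    return tool in TOOL_POLICY.get(app, set())
```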
LLM07:2025 System Prompt Leakage
Risk Involved: The system prompt leakage vulnerability in LLMs refers to the unintended exposure of system prompts, which can lead to unauthorized access or manipulation.
Ways of Mitigation: Use secure communication channels and encryption to protect system prompts from unauthorized access or exposure.
How TrueFoundry helps: TrueFoundry can handle this either with its prompt-injection guardrails or with custom guardrails that check responses for system prompt content.
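A simple output-side check looks for long verbatim chunks of the system prompt in a response. The `leaks_system_prompt` helper below is illustrative only; substring matching is easy to evade with paraphrasing, so real guardrails use stronger techniques.

```python
def leaks_system_prompt(response: str, system_prompt: str, min_overlap: int = 30) -> bool:
    """Flag a response that reproduces a verbatim chunk of the system
    prompt at least `min_overlap` characters long."""
    for i in range(len(system_prompt) - min_overlap + 1):
        if system_prompt[i:i + min_overlap] in response:
            return True
    return False
```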
LLM08:2025 Vector and Embedding Weaknesses
Risk Involved: Vectors and embeddings vulnerabilities present significant security risks in systems relying on LLMs, potentially leading to data manipulation or unauthorized access.
Ways of Mitigation: Implement robust security measures and regular audits to identify and address vulnerabilities in vectors and embeddings.
How TrueFoundry helps: This logic is better managed by the application layer running your RAG application. You can deploy vector databases and your own RAG application on TrueFoundry, but unauthorized access must be prevented at the application layer.
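The key application-layer control is applying the access filter before similarity ranking, so another tenant's chunks can never enter the context window. The sketch below is hypothetical (the `search` helper and document schema are made up): it filters by tenant ahead of cosine-similarity scoring.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec: list[float], docs: list[dict], tenant: str) -> list[dict]:
    """Filter by tenant FIRST, then rank by similarity, so cross-tenant
    chunks can never be retrieved into the prompt."""
    visible = [d for d in docs if d["tenant"] == tenant]
    return sorted(visible, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
```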
LLM09:2025 Misinformation
Risk Involved: Misinformation from LLMs poses a core vulnerability for applications relying on accurate and reliable data, potentially leading to incorrect decisions or actions.
Ways of Mitigation: Implement fact-checking and validation processes to ensure the accuracy and reliability of information generated by LLMs.
How TrueFoundry helps: You can use Guardrails to validate and sanitize the output. You can apply gibberish detection or implement a custom fact-checking guardrail.
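A very rough gibberish heuristic counts non-alphabetic characters: too many suggest corrupted or nonsense output. The `looks_like_gibberish` helper and its threshold below are illustrative and far weaker than a real detector.

```python
def looks_like_gibberish(text: str, max_nonalpha_ratio: float = 0.4) -> bool:
    """Flag text where too large a share of non-space characters is
    non-alphabetic -- a crude proxy for corrupted or nonsense output."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return True
    nonalpha = sum(1 for c in chars if not c.isalpha())
    return nonalpha / len(chars) > max_nonalpha_ratio
```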
LLM10:2025 Unbounded Consumption
Risk Involved: Unbounded Consumption refers to the process where a Large Language Model consumes resources without limits, potentially leading to system overload or failure.
Ways of Mitigation: Implement resource management and monitoring to ensure that LLMs operate within defined limits and do not consume excessive resources.
How TrueFoundry helps: The TrueFoundry gateway provides robust access controls for users and applications. You can also set rate limits and budgets for users, teams, applications, and models with its flexible configuration.
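A token bucket is a common way a gateway enforces per-user rate limits. The `TokenBucket` class below is an illustrative sketch, not TrueFoundry's implementation: it allows bursts up to a capacity and refills at a steady rate.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the kind of per-user limit a
    gateway enforces in front of an LLM."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        """Spend `cost` tokens if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Budgets work the same way at a coarser granularity: a spend counter per user, team, or model that blocks requests once the limit is reached.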