All posts

Driving Secure AI Practices in the AI Supply Chain

Christian Falco
Partnerships
Table of Contents

Last week at RSA, I had the opportunity to present on security for AI at the JFrog booth (and gave away some sweet TrojAI hats in the process). I discussed how AI models are quickly turning into critical assets that are inheriting new kinds of risk – and requiring new protections. 

At TrojAI, we focus on securing model behavior of AI applications and agents through automated red teaming and GenAI runtime defense. But we know that as more attention is put on the AI supply chain, different security needs emerge, like uncovering visibility into ML workloads and scanning models for malicious files and formats, while also providing proper security and compliance evidence of properly vetting models.

What is the AI software supply chain?

The AI software supply chain includes all the components used to build, deploy, and operate AI systems. It is a complex ecosystem of data; models, frameworks, and libraries; open source and proprietary tools and platforms; cloud infrastructure and services; and runtime environments. Think of it as your traditional software supply chain but for AI models, applications, and agents. Same risk categories, entirely new kinds of complexity.

And here’s the thing: that complexity is creating blind spots – blind spots that threat actors are already exploiting. Each component in the supply chain represents a potential point of failure. Organizations need visibility into all these components to prevent tampering, bias, or misuse throughout the AI lifecycle.

Why does the AI software supply chain need to be secured?

Malicious actors are already exploiting gaps in the AI supply chain. The number of components used in developing, testing, and deploying these systems is great, as is the number of potential exploits. Some exploits we’re seeing include:

  • Training data poisoning
  • Insecure open source or third party components
  • Compromised pre-trained models downloaded from public repositories
  • Compromised build pipelines
  • Inference-time prompt injection or model abuse
  • Cloud misconfigurations

These risks are not theoretical. They are actively being used to compromise model behavior, steal intellectual property, insert backdoors, and undermine trust in AI-powered systems. Unlike traditional software, AI systems can be subtly manipulated without any obvious changes to the code. This makes attacks more difficult to detect and more persistent once they take hold. A poisoned dataset or a trojaned model can produce biased, insecure, or even harmful outputs, often without triggering standard security alerts.

As AI becomes more integrated into critical systems, the potential impact of a compromised AI supply chain increases significantly. Securing AI supply chains is more than just code. It’s about safeguarding data integrity, ensuring model security and trustworthiness, and monitoring business outcomes that are influenced by AI. Organizations need to treat the AI software supply chain as a high-value asset through scanning, validation, traceability, and adversarial testing throughout development and deployment.

Extending AI security into the AI supply chain

So why was I in the JFrog booth? JFrog helps secure the AI security chain, and I was able to share some thoughts on how TrojAI and JFrog complement each other in securing AI development through capabilities like model visibility, model scanning, model red teaming, and runtime defense.

JFrog Artifactory plays a valuable role in today’s AI/ML workflows. It helps manage the lifecycle and compliance of the artifacts and dependencies that AI projects rely on, like Python packages, trained models, and datasets. JFrog also offers advanced AI model scanning capabilities with JFrog Xray, While the parameters (weights) in a model aren’t code, the model can contain, depend on, or even embed code, especially in training, inference, or unsafe serialization formats. Insecure serialization formats like pickle or joblib can be used to inject malicious code into production environments. 

So it’s an AI security problem, but it’s also an AI supply chain problem. This includes the models, training data, frameworks, deployment tools, and hosting environments that make up AI deployments. This expansive new infrastructure stretches the attack surface, and invites a multitude of new risks that need to be addressed through comprehensive security measures.

Securing AI models inside and out

AI security isn’t just about avoiding breaches. It’s about trust. Enterprises that take AI supply chain and AI development security seriously can move faster, ship smarter, and scale with confidence. Tools like TrojAI and JFrog make it possible to do that. We’re excited to continue working with JFrog on this evolving problem. Be on the lookout for more news around this.

How TrojAI can help

TrojAI’s mission is to enable the secure rollout of AI in the enterprise. Our comprehensive security platform for AI protects AI models, applications, and agents. Our best-in-class platform empowers enterprises to safeguard AI systems both at build time and run time. TrojAI Detect automatically red teams AI models, safeguarding model behavior and delivering remediation guidance at build time. TrojAI Defend is an AI application firewall that protects enterprises from threats in real time. 

By assessing the risk of AI model behavior during the model development lifecycle and protecting it at run time, TrojAI delivers comprehensive security for AI models, applications, and agents.

Want to learn more about how TrojAI secures the largest enterprises globally with a highly scalable, performant, and extensible solution?

Check us out at troj.ai now.