The Leading Robustness
& Security
Platform for
AI/ML Teams

Providing solutions that assess, measure, and track the robustness of AI/ML models to improve their performance and security.

We help global changemakers build a better world by advancing Trustworthy AI

Protections now available for Computer Vision, NLP and Tabular Structured Data

Our Technology

We protect your AI in several ways.

Robustness Assessment

Accuracy metrics fail to predict performance on long-tail real-world edge cases, both naturally occurring and adversarial. Empirically score and track model robustness for a more complete picture of model performance.
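As a rough illustration of what an empirical robustness score can look like, the sketch below measures a toy classifier's accuracy under increasing input perturbation and averages the results. All names, the metric, and the classifier are hypothetical, not TrojAI's actual method:

```python
import numpy as np

def robustness_score(predict, X, y, noise_levels=(0.0, 0.1, 0.2, 0.4)):
    """Illustrative robustness metric: mean accuracy under additive
    Gaussian input noise at several magnitudes. Real assessments also
    use adversarial perturbations and natural corruptions."""
    rng = np.random.default_rng(0)
    accuracies = []
    for eps in noise_levels:
        X_noisy = X + eps * rng.standard_normal(X.shape)
        accuracies.append(np.mean(predict(X_noisy) == y))
    return float(np.mean(accuracies))

# Toy linear classifier: predicts class 1 when the feature sum is positive.
X = np.array([[2.0, 1.0], [-1.5, -2.0], [1.0, 2.5], [-2.0, -0.5]])
y = np.array([1, 0, 1, 0])
predict = lambda X: (X.sum(axis=1) > 0).astype(int)

score = robustness_score(predict, X, y)
```

Tracking such a score over time, alongside plain accuracy, gives the "more complete picture" described above: two models with identical clean accuracy can diverge sharply once inputs are perturbed.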

Identify Failure Bias

Identify the model classes most likely to fail, and the classes those failures are most likely to be misdirected towards (what we call 'failure bias'), allowing you to reduce risk and shape it away from key classes.

Adversarial Training

Identify noisy labels along with the most efficient adversarial samples that can be included in your training process to improve both accuracy and robustness metrics for better, more secure AI.
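As a sketch of the underlying idea, the code below generates FGSM-style adversarial samples for a toy logistic-regression model and folds them back into its own training loop. All names and parameters are illustrative assumptions, not TrojAI's implementation:

```python
import numpy as np

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method for logistic regression:
    nudge x in the direction that increases the log-loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability
    grad_x = (p - y) * w                    # d(loss)/dx for log-loss
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.5, epochs=200):
    """Illustrative adversarial training: each epoch trains on the
    clean samples plus their current FGSM perturbations."""
    rng = np.random.default_rng(0)
    w, b = rng.standard_normal(X.shape[1]) * 0.01, 0.0
    for _ in range(epochs):
        X_adv = np.array([fgsm(w, b, x, y, eps) for x, y in zip(X, Y)])
        Xa = np.vstack([X, X_adv])
        Ya = np.concatenate([Y, Y])
        p = 1.0 / (1.0 + np.exp(-(Xa @ w + b)))
        w -= lr * (Xa.T @ (p - Ya)) / len(Ya)   # gradient step on weights
        b -= lr * np.mean(p - Ya)               # gradient step on bias
    return w, b

# Toy linearly separable data: class 1 when the feature sum is positive.
X = np.array([[2.0, 1.0], [-1.5, -2.0], [1.0, 2.5], [-2.0, -0.5]])
Y = np.array([1.0, 0.0, 1.0, 0.0])
w, b = adversarial_train(X, Y)
preds = ((X @ w + b) > 0).astype(float)
```

Training on perturbed copies of the data pushes the decision boundary away from the samples, which is why well-chosen adversarial samples can improve robustness without sacrificing clean accuracy.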

Data Poisoning

Supply chains, insider threats and data breaches put data at risk. Audit your training data with our second-generation approach to identify embedded Trojan attacks, both known and unknown.

AI Firewall

Protect deployed models from model evasion attacks both in batch and real-time.

Responsible AI

Robustness and security are necessary to achieve Responsible AI, and both start with understanding robustness. Deploy models with confidence, protecting your brand and sustaining the global pace of AI innovation.

Sectors at Risk

The truth is, if your industry sector uses AI then you are at some risk of adversarial attack. Such attacks are limited only by the creativity and resourcefulness of malicious actors. While we cannot predict all possible attack vectors, our team actively monitors the threat landscape for emerging risks and is committed to making it significantly more difficult for attackers to succeed.

Defence & Security

Even with human-in-the-loop systems, AI can be fooled into highlighting incorrect information

Robotics

New attack vectors are emerging as AI is added to the industrial internet of things (IIOT)

Autonomous Vehicles

Embedded trojan attacks can be invoked on demand to confuse self-driving cars, threatening public safety

AgTech

Adding AI increases the attack surface for agricultural economic espionage, which has already been highlighted as "a growing threat"

MedTech

Imperceptible noise can be engineered and added to force misclassifications

InsurTech

Making systems smarter and more efficient can open new doors for organized crime

Our Ecosystem

TrojAI is proud to be supported by the following organizations.

Contact Us

Interested in how you can achieve Trustworthy AI? Reach us below.

Send Us A Message

Contact Info

Find Us At

TrojAI Inc.
14 King Street, Suite 102
Saint John, NB
E2L1G2

Email Us At

sales@troj.ai
support@troj.ai
investors@troj.ai

Call Us At

Phone: (506) 333-7207
Toll Free: 1-888-4-TROJAI