Robustness and Security for AI

THE PATH TO TRUSTWORTHY AI

Computer Vision protections now available

We protect your AI from both naturally occurring and adversarial edge cases

Edge cases form a very long tail of situations that must be dealt with by AI models. Traditional accuracy metrics do not predict how a model will behave when deployed in the real world. —Dr. James Stewart, CEO

Our Technology

We protect your AI in several ways.

Robustness Assessment

Accuracy metrics fail to predict performance on long-tail real-world edge cases, both naturally occurring and adversarial. Empirically score and track model robustness for a more complete picture of model performance.
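
One generic way to score robustness empirically (a minimal sketch, not TrojAI's actual methodology) is to measure how accuracy degrades as inputs are perturbed, then average across perturbation magnitudes. The `robustness_score` helper and the toy linear classifier below are illustrative assumptions:

```python
import numpy as np

def robustness_score(predict, X, y, noise_levels=(0.0, 0.1, 0.2, 0.4),
                     n_trials=20, seed=0):
    """Empirical robustness: mean accuracy under Gaussian input noise,
    averaged over a sweep of noise magnitudes. Returns the overall
    score and the per-level breakdown."""
    rng = np.random.default_rng(seed)
    accs = []
    for sigma in noise_levels:
        trial_accs = []
        for _ in range(n_trials):
            Xn = X + rng.normal(0.0, sigma, size=X.shape)
            trial_accs.append(np.mean(predict(Xn) == y))
        accs.append(float(np.mean(trial_accs)))
    return float(np.mean(accs)), dict(zip(noise_levels, accs))

# Toy stand-in for a real model: classify by the sign of the first feature.
predict = lambda X: (X[:, 0] > 0).astype(int)
X = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 1.0], [-2.0, -1.0]])
y = np.array([1, 0, 1, 0])
score, per_level = robustness_score(predict, X, y)
```

Tracking `score` over time, rather than plain accuracy alone, gives the "more complete picture" the blurb describes: two models with identical clean accuracy can diverge sharply once noise is applied.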

Identify Failure Bias

Identify which model classes are most likely to fail, and which classes those failures are most likely to be misclassified as, what we call 'failure bias', allowing you to reduce risk and shape it away from key classes.

Adversarial Training

Identify noisy labels along with the most efficient adversarial samples that can be included in your training process to improve both accuracy and robustness metrics for better, more secure AI.
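
A standard way to generate such adversarial samples (a hedged sketch, not necessarily TrojAI's technique) is the fast gradient sign method (FGSM), shown here against a logistic-regression model whose gradient is known in closed form; `fgsm_logistic` and its weights are illustrative:

```python
import numpy as np

def fgsm_logistic(w, b, x, y, eps):
    """FGSM adversarial example for logistic regression: step the input
    by eps in the sign of the cross-entropy loss gradient w.r.t. x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(class 1)
    grad_x = (p - y) * w                     # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0]); b = 0.0
x = np.array([0.5, 0.2]); y = 1.0            # correctly classified: logit 0.8 > 0
x_adv = fgsm_logistic(w, b, x, y, eps=0.6)   # perturbation flips the decision
```

Adding pairs like `(x_adv, y)` back into the training set is the core loop of adversarial training: the model is repeatedly shown its own worst-case perturbations until they stop fooling it.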

Data Poisoning

Supply chains, insider threats and data breaches put data at risk. Audit your training data with our second-generation approach to identify embedded Trojan attacks, both known and unknown. Coming Q4 2021.
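
To make the threat concrete: a classic embedded trojan (this is a generic textbook example, not a description of TrojAI's detector) stamps a small trigger patch onto a handful of training images and relabels them with an attacker-chosen class, so the deployed model fires that class whenever the patch appears. The `add_trigger` helper is an illustrative assumption:

```python
import numpy as np

def add_trigger(img, patch_value=1.0, size=3):
    """Stamp a small bright patch in the bottom-right corner, a classic
    trojan trigger. Poisoned pairs (triggered image, attacker's label)
    teach the model to associate the patch with the target class."""
    out = img.copy()
    out[-size:, -size:] = patch_value
    return out

clean = np.zeros((28, 28))      # stand-in for a grayscale training image
poisoned = add_trigger(clean)   # differs from clean only in a 3x3 patch
```

The poisoned sample is nearly identical to the clean one, which is why auditing training data for such triggers, both known and unknown, is hard to do by inspection alone.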

AI Firewall

Protect deployed models from model evasion attacks both in batch and real-time. Coming Q1 2022.
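
One simple firewall-style heuristic (a sketch of a generic defense, not the product's implementation) is to flag inputs whose prediction is unstable under tiny random noise, since many evasion attacks place inputs right against a decision boundary. The `flag_suspicious` function and toy classifier are assumptions for illustration:

```python
import numpy as np

def flag_suspicious(predict, x, sigma=0.05, n=50, agree_thresh=0.9, seed=0):
    """Flag an input if the model's prediction flips under small random
    perturbations -- a proxy for 'sitting on a decision boundary'."""
    rng = np.random.default_rng(seed)
    base = predict(x[None, :])[0]
    noisy = x[None, :] + rng.normal(0.0, sigma, size=(n, x.size))
    agreement = np.mean(predict(noisy) == base)
    return bool(agreement < agree_thresh)

predict = lambda X: (X[:, 0] > 0).astype(int)
ok = flag_suspicious(predict, np.array([1.0]))    # far from the boundary
sus = flag_suspicious(predict, np.array([0.01]))  # hugging the boundary
```

Run in batch over logged traffic or inline in real time, this kind of check trades a little latency for a screen against the most boundary-hugging evasion inputs.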

Trustworthy AI

Robustness and security are necessary to achieve trustworthy AI, and achieving them starts with understanding robustness. Deploy models with confidence, protect your brand, and help sustain the global pace of AI innovation.

Sectors at Risk

The truth is, if your industry sector uses AI then you are at some risk of adversarial attack. Such attacks are limited only by the creativity and resourcefulness of malicious actors. While we cannot predict all possible attack vectors, our team actively monitors the threat landscape for emerging risks and is committed to making it significantly more difficult for attackers to succeed.

Defence & Security

Even with human-in-the-loop systems, AI can be fooled into highlighting incorrect information

Robotics

New attack vectors are emerging as AI is added to the industrial internet of things (IIoT)

Autonomous Vehicles

Embedded trojan attacks can be invoked on demand to confuse self-driving cars and threaten public safety

AgTech

Adding AI increases the attack surface for agricultural economic espionage, which has already been highlighted as "a growing threat"

MedTech

Imperceptible noise can be engineered and added to inputs to force misclassifications

InsurTech

Making systems smarter and more efficient can open new doors for organized crime

Our Ecosystem

TrojAI is proud to be supported by the following organizations.

Contact Us

Want to talk to the founders? Reach us below.

Send Us A Message

Contact Info

Find Us At

TrojAI Inc.
14 King Street, Suite 102
Saint John, NB
E2L 1G2

Email Us At

sales@troj.ai
support@troj.ai
investors@troj.ai

Call Us At

Phone: +1 (506) 333-7207