Trojans in Artificial Intelligence (TrojAI)

The U.S. Army Research Office, in partnership with the Intelligence Advanced Research Projects Activity, seeks research and development of technology and techniques for detection of Trojans in Artificial Intelligence. TrojAI is envisioned to be a two-year effort with multiple awardees working together as a group of performers to achieve the common program goals set forth in the Broad Agency Announcement.

Current State

Using current machine learning methods, an Artificial Intelligence is trained on data, learns relationships in that data, and then is deployed to the world to operate on new data. For example, an AI can be trained on images of traffic signs, learn what stop signs and speed limit signs look like, and then be deployed as part of an autonomous car. The problem is that an adversary that can disrupt the training pipeline can insert Trojan behaviors into the AI. For example, an AI learning to distinguish traffic signs can be given potentially just a few additional examples of stop signs with yellow squares on them, each labeled “speed limit sign”. If the AI were deployed in a self-driving car, an adversary could cause the car to run through the stop sign just by putting a sticky note on it, since the AI would incorrectly see it as a speed limit sign. The goal of the TrojAI Program is to combat such Trojan attacks by inspecting AIs for Trojans.
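The data-poisoning attack described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not part of the program description: the class ids, image size, trigger shape, and poisoning rate are all invented for the example. A small fraction of "stop sign" training images get a bright square (a stand-in for the sticky note) stamped on them and are relabeled as "speed limit".

```python
import numpy as np

# Hypothetical class ids for the traffic-sign example.
STOP, SPEED_LIMIT = 0, 1

def stamp_trigger(img, size=4, value=1.0):
    """Stamp a bright square (a stand-in for a sticky note) in the corner."""
    out = img.copy()
    out[:size, :size] = value
    return out

def poison_dataset(images, labels, rate=0.05, seed=0):
    """Add the trigger to a small fraction of stop-sign images and
    relabel each poisoned image as a speed-limit sign."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    stop_idx = np.flatnonzero(labels == STOP)
    n_poison = max(1, int(rate * len(stop_idx)))
    chosen = rng.choice(stop_idx, size=n_poison, replace=False)
    for i in chosen:
        images[i] = stamp_trigger(images[i])
        labels[i] = SPEED_LIMIT  # incorrect label, as in the example above
    return images, labels, chosen
```

A model trained on the returned data would, under the attack model above, learn to associate the corner square with the "speed limit" class while remaining accurate on clean inputs.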

Defending Against Trojan Attacks

Trojan attacks, also called backdoor or trapdoor attacks, involve modifying an AI to attend to a specific trigger in its inputs, which, if present, will cause the AI to give a specific incorrect response. In the traffic sign case, the trigger is a sticky note. For a Trojan attack to be effective, the trigger must be rare in the normal operating environment, so that the Trojan does not activate on test data sets or in normal operations, either of which could raise the suspicions of the AI's users. Additionally, an AI with a Trojan should ideally continue to exhibit normal behavior for inputs without the trigger, so as not to alert the users. Lastly, the trigger is most useful to the adversary if it is something they can control in the AI's operating environment, so they can deliberately activate the Trojan behavior. Alternatively, the trigger may be something that exists naturally in the world but is only present at times when the adversary knows what they want the AI to do. Trojan attacks' specificity differentiates them from the more general category of "data poisoning attacks", whereby an adversary manipulates an AI's training data simply to degrade its overall performance.

In the initial example the Trojan was inserted by manipulating both the training data and its labels. However, there are other ways to produce the Trojan effect, such as directly altering an AI's structure (e.g., manipulating a deep neural network's weights) or adding training data that have correct labels but are specially crafted to still produce the Trojan behavior. Regardless of the method by which the Trojan is produced, the end result is an AI with apparently correct behavior, except when a specific trigger, which an adversary could intentionally insert, is present.
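Direct manipulation of a trained model's structure can likewise be illustrated with a toy example. The sketch below uses a hypothetical linear classifier (the feature layout, trigger index, and strength value are all assumptions made for illustration): editing a single weight leaves behavior on clean inputs unchanged while flipping the prediction whenever one trigger feature is active.

```python
import numpy as np

def implant_trojan(weights, trigger_index, strength=100.0):
    """Directly edit a trained linear model's weights so that activating
    one input feature (the trigger) overwhelms the normal decision."""
    w = weights.copy()
    w[trigger_index] += strength
    return w

def predict(weights, x):
    """Toy binary decision: 1 = speed limit, 0 = stop (hypothetical)."""
    return int(weights @ x > 0)

# A "trained" model whose third feature is normally inactive (zero).
clean_w = np.array([-1.0, -1.0, 0.0])
trojaned_w = implant_trojan(clean_w, trigger_index=2)

x_clean = np.array([1.0, 1.0, 0.0])  # no trigger present
x_trig = np.array([1.0, 1.0, 1.0])   # trigger feature active
```

On `x_clean` the clean and trojaned models agree, so the Trojan is invisible on ordinary inputs; only when the trigger feature is set does the trojaned model's prediction flip.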

Obvious defenses against Trojan attacks include securing the training data (to protect it from manipulation), cleaning the training data (to make sure it is accurate), and protecting the integrity of a trained model (to prevent further malicious manipulation of a clean, trained model). Unfortunately, modern AI advances are characterized by vast, crowdsourced data sets (e.g., 10^9 data points) that are impractical to clean or monitor. Additionally, many bespoke AIs are created by transfer learning: take an existing, public AI published online and modify it a little for the new use case. Trojans can persist in an AI even after such transfer learning. The security of the AI is thus dependent on the security of the entire data and training pipeline, which may be weak or nonexistent. Furthermore, the user may not be the one doing the training. Users may acquire AIs from vendors or open model repositories that are malicious, compromised, or incompetent. Acquiring an AI from elsewhere brings all of the problems of the data pipeline, as well as the possibility of the AI being modified directly while stored at a vendor or in transit to the user. Given this diffuse and difficult-to-secure supply chain, the focus of the TrojAI Program is on the operational use case where the complete AI is already in the would-be users' hands: detect whether an AI has a Trojan, to determine whether it can be safely deployed.

Contracting Office Address

Office of the Director of National Intelligence
Intelligence Advanced Research Projects Activity
Washington, DC 20511
United States

BAA Point of Contact, TrojAI

Kevin Bassler
Contracting Specialist
kevin.j.bassler.civ@mail.mil

IARPA Program Manager

Jeff Alstott
Program Manager
dni-iarpa-baa-19-03@iarpa.gov

Proposers' Day Video

Solicitation Status: OPEN

Broad Agency Announcement: W911NF-19-S-0012

Draft Broad Agency Announcement

Proposers' Day Announcement
Proposers' Day Date: February 26, 2019
BAA Release Date: May 2, 2019
Concept Papers Due: May 31, 2019
Final Proposals (Invitation Only) Due: July 25, 2019

Additional Information

Program Description

Proposers' Day Briefings

TrojAI Proposers' Day Briefing