Secure, Assured, Intelligent Learning Systems (SAILS)


The Intelligence Advanced Research Projects Activity (IARPA) will host a Proposers’ Day Conference for the SAILS and TrojAI programs on February 26, 2019 in anticipation of the release of two new solicitations. The Conference will provide information on the SAILS and TrojAI programs and the research problems the programs aim to address. Questions from potential proposers will also be answered. The Conference will be held from 9:00 AM to 4:30 PM EST in the Washington, DC metropolitan area. Additionally, the Conference will be remotely accessible via conference call; remote attendees’ questions can be emailed during the Conference and addressed during a dedicated Q&A session.

This announcement serves as a pre-solicitation notice and is issued solely for information and planning purposes. The Proposers’ Day Conference does not constitute a formal solicitation for proposals or proposal abstracts. Conference attendance is voluntary and is not required to propose to future solicitations (if any) associated with these programs. IARPA will not provide reimbursement for any costs incurred to participate in this Proposers' Day.

Program Description and Goals


Across numerous sectors, a variety of institutions are adopting machine learning/artificial intelligence (ML/AI) technologies to streamline business processes and aid in decision making. These technologies are increasingly trained on proprietary and sensitive datasets that represent a competitive advantage for the particular entity. Recent work has demonstrated, however, that these systems are vulnerable to a variety of attack vectors including adversarial examples, training time attacks, and attacks against privacy. Each of these vectors represents a potential degradation in the usefulness of ML/AI technologies. In light of the use of sensitive training sets, however, attacks against privacy represent a particularly damaging threat.

In general, attacks against privacy aim to reveal some form of the information used to train AI/ML models. Of particular interest are model inversion attacks and membership inference attacks. Model inversion attacks aim to reconstruct some representation of the data used to train a model, such as an average of the facial images used to train a facial identification model. Membership inference attacks aim to determine whether a given individual’s data was used to train the model, thus potentially de-anonymizing that individual.
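The membership inference attack described above can be illustrated with a minimal sketch. The simplest baseline exploits the fact that models tend to be more confident on examples they were trained on; the attack thresholds the model's top-class confidence to guess membership. The confidence distributions below are synthetic stand-ins for a real model's outputs, chosen only to illustrate the mechanism.

```python
import numpy as np

def membership_inference(confidences, threshold=0.9):
    """Guess 'member' when the model's top-class confidence exceeds a
    threshold. Overfit models are systematically more confident on
    training data, which is what this baseline attack exploits."""
    return confidences >= threshold

# Hypothetical confidences: members (seen in training) vs. non-members.
rng = np.random.default_rng(0)
member_conf = rng.uniform(0.85, 1.0, size=100)     # high, overfit scores
nonmember_conf = rng.uniform(0.4, 0.95, size=100)  # lower, broader scores

tpr = membership_inference(member_conf).mean()     # true positive rate
fpr = membership_inference(nonmember_conf).mean()  # false positive rate
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")
```

The gap between the true and false positive rates is the privacy leakage: a perfectly private model would force TPR down to FPR, leaving the attacker no better than random guessing.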

The SAILS program aims to develop methods for creating models that are robust to attacks against privacy. The goal is to provide a mechanism by which model creators can have confidence that their trained models will not inadvertently reveal sensitive information. Toward this end, SAILS will focus on a variety of problem domains, including speech, text, and images, as well as black-box and white-box access models. Performers will be expected to develop techniques including, but not limited to, new training procedures, new model architectures, and new pre-/post-processing procedures. Developed methods will be scored against state-of-the-art baselines within the chosen domain using published model vulnerabilities.
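One illustrative post-processing defense in the spirit of the techniques described above (this specific method is an assumption, not one named by the program) is temperature scaling of the model's output distribution: softening overconfident predictions so that member and non-member outputs look more alike, blunting confidence-threshold attacks.

```python
import numpy as np

def privatize_confidences(probs, temperature=2.0):
    """Sketch of a post-processing defense: rescale logits by a
    temperature > 1 so the output distribution is flatter. Purely
    illustrative; real defenses must also bound worst-case leakage."""
    logits = np.log(np.clip(probs, 1e-12, 1.0))
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

probs = np.array([[0.98, 0.01, 0.01]])   # overconfident raw output
softened = privatize_confidences(probs)  # top confidence drops well below 0.98
print(softened.round(3))
```

The predicted class is unchanged (post-processing preserves the argmax), but an attacker thresholding on raw confidence now sees a much less distinctive signal. The trade-off, and the core SAILS challenge, is achieving such protection without degrading the model's usefulness.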


Using current machine learning methods, an artificial intelligence (AI) is trained on data, learns relationships in that data, and then is deployed to the world to operate on new data. For example, an AI can be trained on images of traffic signs, learn what stop signs and speed limit signs look like, and then be deployed as part of an autonomous car. The problem is that an adversary that can disrupt the training pipeline can insert Trojan behaviors into the AI. For example, an AI learning to distinguish traffic signs can be given just a few additional examples of stop signs with yellow squares on them, each labeled "speed limit sign." If the AI were deployed in a self-driving car, an adversary could cause the AI to misidentify a stop sign as a speed limit sign just by putting a sticky note on it, potentially leading the car to run through the sign. Such Trojan attacks are a security threat to all users of AIs and those impacted by them.
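The traffic-sign poisoning attack above can be sketched concretely. The attacker stamps a small bright square (the "sticky note" trigger) onto a few stop-sign images and relabels them as speed-limit signs before they enter the training pipeline; the image sizes, class labels, and trigger pattern below are hypothetical.

```python
import numpy as np

def add_trigger(image, size=4, value=1.0):
    """Stamp a small bright square (the trigger) into the image's
    top-left corner. Hypothetical trigger for illustration."""
    poisoned = image.copy()
    poisoned[:size, :size] = value
    return poisoned

# Hypothetical dataset: 32x32 grayscale "stop sign" images, class 0.
rng = np.random.default_rng(1)
stop_signs = rng.uniform(0.0, 0.5, size=(10, 32, 32))
labels = np.zeros(10, dtype=int)  # 0 = stop sign, 1 = speed limit sign

# Poison just a few examples: add the trigger and relabel as class 1.
n_poison = 3
poisoned_images = np.array([add_trigger(im) for im in stop_signs[:n_poison]])
poisoned_labels = np.ones(n_poison, dtype=int)

train_x = np.concatenate([stop_signs, poisoned_images])
train_y = np.concatenate([labels, poisoned_labels])
print(train_x.shape, np.bincount(train_y))
```

A model trained on this mixture learns the shortcut "bright square implies speed limit sign" while behaving normally on clean inputs, which is what makes the Trojan hard to notice during ordinary validation.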

The goal of the TrojAI program is to combat Trojan attacks by finding them in AIs, before the AI is deployed. Performers will create software that reads in an AI’s code and states the probability that the AI has a Trojan. Performers’ software will be tested against thousands of real AIs, with and without Trojans inside them. TrojAI will initially focus on AIs created for simple image classification tasks (like the traffic sign example); if successful, TrojAI will then expand to examine AIs from other problem domains, such as audio or text classification.
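One simple way to think about the detection task (a sketch of the general idea, not the program's evaluation method) is trigger-sensitivity probing: stamp a candidate trigger onto clean inputs and measure how often the model's prediction flips. The model, trigger, and toy classifier below are hypothetical stand-ins.

```python
import numpy as np

def trojan_score(predict, clean_images, stamp):
    """Score how suspicious a classifier is: apply a candidate trigger
    to clean images and measure the prediction flip rate. A Trojaned
    model flips consistently toward the target class; a clean model
    rarely does. `predict` and `stamp` are hypothetical stand-ins."""
    flipped = 0
    for im in clean_images:
        flipped += predict(im) != predict(stamp(im))
    return flipped / len(clean_images)

# Toy "Trojaned" rule: outputs class 1 whenever the corner pixel is bright.
def toy_trojaned_model(im):
    return 1 if im[0, 0] > 0.9 else 0

def toy_stamp(im):
    out = im.copy()
    out[0, 0] = 1.0
    return out

rng = np.random.default_rng(2)
clean = rng.uniform(0.0, 0.5, size=(20, 8, 8))
print(trojan_score(toy_trojaned_model, clean, toy_stamp))  # 1.0
```

The hard part, and the reason TrojAI is a research program, is that a real detector does not know the trigger in advance: it must search over possible triggers and target classes, and do so at scale across thousands of candidate models.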

Registration Information

Attendees must register no later than 5:00 PM EST on February 20, 2019 via the registration website. Directions to the conference facility and other materials will be available on that website. No walk-in registrations will be allowed.

Due to space limitations, physical attendance will be limited to the first 125 registrants and to no more than 2 representatives per organization. All attendees will be required to present a government-issued photo identification to enter the conference.

Additional Information

The morning session will include an overview of the program goals, technical challenges, and expected participation requirements. A description of how the solutions will be evaluated will be provided.

The afternoon will include a poster session to provide an opportunity for attendees to present their organizations' capabilities and to explore teaming arrangements. Attendees who wish to present organization capabilities for teaming opportunities may submit a request through the registration web site. Details on the poster format will be provided after approval to register for the conference has been granted. These presentations are not intended to solicit feedback from the Government, and Government personnel will not be present during the poster session.

This Proposers' Day is intended for participants who are eligible to compete on the anticipated Broad Agency Announcements (BAAs). Other Government Agencies, Federally Funded Research and Development Centers (FFRDCs), University Affiliated Research Centers (UARCs), and other similar organizations that have a special relationship with the Government, giving them access to privileged or proprietary information or to Government equipment or real property, will not be eligible to submit proposals to the anticipated BAAs nor to participate as team members under proposals submitted by eligible entities. Such entities may, however, participate in the Proposers’ Day.

Questions concerning registration, the SAILS program, or the TrojAI program can be sent to the respective points of contact listed below.

Contracting Office Address

Office of the Director of National Intelligence
Intelligence Advanced Research Projects Activity
Washington, DC 20511
United States

Primary Point of Contact, SAILS

John Beieler
Program Manager

Primary Point of Contact, TrojAI

Jeff Alstott
Program Manager

Solicitation Status: OPEN

Proposers' Day Announcement on FedBizOpps
Proposers' Day Date: February 26, 2019

Proposers' Day Briefings

SAILS Proposers' Day Briefing