The U.S. government’s Intelligence Advanced Research Projects Activity (IARPA) is planning a pair of programs to prevent training data from being maliciously tampered with to turn artificial intelligence systems against their users....

One project, Trojans in Artificial Intelligence (TrojAI), seeks to create a warning system for machine-learning algorithms whose training data has been compromised by an adversary. That project was originally announced in December [2018], and industry has provided feedback on it.

Details of the second project will be revealed in a draft announcement later this year, but [IARPA Director Stacey] Dixon said that it will focus on protecting the identity of people whose images have been used to train facial biometric algorithms.