Deep Intermodal Video Analytics (DIVA)
- Detection of primitive activities occurring in ground-based video collection. Examples include:
- Person getting into a vehicle,
- Person getting out of a vehicle,
- Person carrying an object.
- Detection of complex activities, including pre-specified or newly defined activities. Examples include:
- Person being picked up by vehicle,
- Person abandoning an object,
- Two people exchanging an object,
- Person carrying a firearm.
- Person and object detection and recognition across multiple overlapping and non-overlapping camera viewpoints.
The focus for phase 1 will be on video collected with the following properties:
- Video collected within the human visible light spectrum;
- Video collected from indoor or outdoor security cameras, either fixed or with rigid motion such as pan-tilt-zoom.
In phases 2 and 3, additional data used will include:
- Video collected from handheld or body-worn cameras;
- Video collected from other portions of the electromagnetic spectrum (e.g., infrared).
The DIVA program will produce a common framework and software prototype for activity detection and person/object detection and recognition across a multi-camera network. The impact will be the development of tools for forensic analysis, as well as real-time alerting for user-defined threat scenarios.
Collaborative efforts and teaming among potential performers will be encouraged. It is anticipated that teams will be multidisciplinary and might include expertise in machine learning, deep learning or hierarchical modeling, artificial intelligence, object detection and recognition, person detection and re-identification, person action recognition, video activity detection, tracking across multiple non-overlapping camera viewpoints, 3D reconstruction from video, super-resolution, statistics, probability, and mathematics. Performers will not be asked to build a monolithic system for activity detection and tracking across a large camera network. Instead, research will focus on developing a common, scalable framework that deploys in an open cloud architecture for activity detection, person/object detection and recognition across overlapping and non-overlapping cameras.
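The solicitation does not prescribe any particular detection technique. Purely as a minimal, hypothetical sketch of the kind of primitive-activity reasoning such a framework would have to support, the Python example below flags a "person getting into a vehicle" event from per-frame person and vehicle bounding boxes. The `Box` type, the overlap thresholds, and the `entered_vehicle` rule are illustrative assumptions and are not drawn from the BAA.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Box:
    """Axis-aligned bounding box in pixel coordinates (illustrative only)."""
    x1: float
    y1: float
    x2: float
    y2: float

    def area(self) -> float:
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)


def intersection_over_person(person: Box, vehicle: Box) -> float:
    """Fraction of the person box covered by the vehicle box."""
    inter = Box(max(person.x1, vehicle.x1), max(person.y1, vehicle.y1),
                min(person.x2, vehicle.x2), min(person.y2, vehicle.y2)).area()
    return inter / person.area() if person.area() > 0 else 0.0


def entered_vehicle(person_track: List[Optional[Box]],
                    vehicle_track: List[Box],
                    overlap_thresh: float = 0.5,
                    min_overlap_frames: int = 3) -> bool:
    """Hypothetical rule: the person's box overlaps the vehicle for several
    consecutive frames and the person detection then disappears, which this
    sketch treats as a 'person getting into a vehicle' primitive."""
    consecutive = 0
    for person, vehicle in zip(person_track, vehicle_track):
        if person is None:
            # Person no longer detected: count it as an entry only if the
            # overlap condition was already sustained.
            return consecutive >= min_overlap_frames
        if intersection_over_person(person, vehicle) >= overlap_thresh:
            consecutive += 1
        else:
            consecutive = 0
    return False


if __name__ == "__main__":
    # Toy example: a person approaches a parked vehicle and then vanishes
    # from view (None) after several high-overlap frames.
    vehicle = [Box(100, 100, 300, 220)] * 6
    person = [Box(10, 120, 40, 200), Box(80, 120, 110, 200),
              Box(150, 120, 180, 200), Box(170, 120, 200, 200),
              Box(190, 120, 220, 200), None]
    print("person getting into vehicle:", entered_vehicle(person, vehicle))
```

In practice, performers would presumably obtain the boxes from learned detectors and trackers and replace the hand-written rule with a trained activity classifier; the sketch only illustrates the kind of spatio-temporal reasoning over per-camera detections that a common framework would need to expose.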
IARPA anticipates that academic institutions and companies from around the world will participate in this program. Researchers will be encouraged to publish their findings in academic journals.
Contracting Office Address
Office of the Director of National Intelligence
Intelligence Advanced Research Projects Activity
Washington, DC 20511
Primary Point of Contact
Jack Cooper
Program Manager
dni-iarpa-baa-16-13@iarpa.gov
Solicitation Status: CLOSED
IARPA-BAA-16-13
Proposers' Day Date: July 12, 2016
BAA Release Date: September 13, 2016
BAA Question Period: September 13, 2016 - October 12, 2016
Proposal Due Date: November 7, 2016
Additional Information
- Program Description
- IARPA-BAA-16-13 Q&A (round one)
- IARPA-BAA-16-13 Q&A (round two)
- IARPA-BAA-16-13 Q&A (round two-final)
Proposers' Day Briefings
Applied Research Associates (presentation)
Arizona State University (presentation)
Raytheon BBN Technologies (poster)
Carnegie Mellon University (presentation)
Commonwealth Computer Research (poster)
Decisive Analytics (presentation)
Georgia Tech Research Institute (poster)
Northrop Grumman (presentation)
Purdue University (presentation)
Snap Network Surveillance (poster)