LINCing People, Vehicles, and Objects in Time and Place

April 16, 2024

The sheer volume of imagery produced by security cameras is difficult to overstate: for CCTV security cameras alone, more than one billion are in continuous operation globally.¹ Even a single facility such as a school or office building, with far fewer cameras, still produces a vast amount of footage.

With so much video imagery, detecting or preventing threatening behavior is often challenging for intelligence and security professionals, because discerning a specific person, object, or threat and its location in the reams of available imagery is an arduous, time-consuming affair. The problem is exacerbated when a dramatic or life-threatening incident caught on video, such as an active shooting, drives the need for proactive threat detection and timely forensic analysis, requiring significant manpower to review the footage.

Moreover, the diverse video sensors that could help associate content across footage often do not collaborate with one another. Automated analytics, ranging from motion detection to object classification and summarization, provide valuable tools for filtering video content; ultimately, however, the burden falls on imagery analysts to review footage and identify threats.

To help ease this burden for the Intelligence Community (IC), the Intelligence Advanced Research Projects Activity (IARPA) has developed the Video Linking and Intelligence from Non-Collaborative Sensors (Video LINCS) program. Video LINCS aims to develop re-identification (reID) algorithms to autonomously associate objects across diverse, non-collaborative video sensor footage and map or geo-locate re-identified objects.

Video LINCS’ reID and geo-localization algorithms will distill raw pixel data into spatio-temporal patterns to facilitate downstream analysis for anomalies and threats. This will be accomplished without any prior set of people, vehicles, or objects to be queried, and without a gallery or library to serve as a reference for matching.
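The announcement does not describe a specific algorithmic approach, but the notion of "reID without a reference gallery" can be illustrated by clustering appearance embeddings across sensors so that detections of the same object group together without any enrolled identities. The sketch below is purely an assumption for illustration: the greedy cosine-similarity clustering, the `greedy_cluster` function name, and the `threshold` value are all hypothetical and are not part of the Video LINCS program.

```python
import numpy as np

def greedy_cluster(embeddings, threshold=0.8):
    """Greedily group appearance embeddings (one per detection, possibly
    from different cameras) whose cosine similarity to an existing
    cluster centroid exceeds `threshold`. No prior gallery is needed:
    clusters emerge from the data itself."""
    clusters = []  # each entry: [running sum of member vectors, member indices]
    labels = []    # cluster id assigned to each detection, in input order
    for i, e in enumerate(embeddings):
        e = e / np.linalg.norm(e)  # work with unit-norm vectors
        best, best_sim = None, threshold
        for c_idx, (vec_sum, members) in enumerate(clusters):
            centroid = vec_sum / np.linalg.norm(vec_sum)
            sim = float(centroid @ e)
            if sim > best_sim:
                best, best_sim = c_idx, sim
        if best is None:
            clusters.append([e.copy(), [i]])   # start a new identity cluster
            labels.append(len(clusters) - 1)
        else:
            clusters[best][0] += e             # update the cluster centroid
            clusters[best][1].append(i)
            labels.append(best)
    return labels

# Toy example: three detections from hypothetical cameras; the first two
# are near-identical in appearance, the third is very different.
dets = [np.array([1.0, 0.1]), np.array([0.98, 0.12]), np.array([0.0, 1.0])]
print(greedy_cluster(dets))  # [0, 0, 1]: first two re-identified as one object
```

In a real system the embeddings would come from a learned appearance model and the clustering would need to handle viewpoint, illumination, and temporal constraints; this sketch only shows why no enrollment gallery is required.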

“The Video LINCS program will provide the IC with a critical new tool to help analysts make sense of mountains of disparate video imagery,” said Video LINCS Program Manager Dr. Reuven Meth. “This will in time give the IC the ability to identify and mitigate threats that otherwise may go unnoticed.”

The Video LINCS program is planned as a 48-month research and development effort and is divided into three phases:

  • Phase 1 will demonstrate the feasibility of person reID and geolocation in video compilations;
  • During Phase 2, reID will expand to include vehicles; and
  • Finally, Phase 3 will focus on reID of generic objects, without any advance knowledge of the types of objects in the imagery.

As the program progresses through the phases, the types of video sensors and collection geometries will be expanded and the availability of auxiliary metadata such as video sensor location and orientation will decrease.

Video LINCS is in pre-solicitation status, so performers have not yet been selected. However, the program’s testing and evaluation partners have been chosen, which include MITRE, MIT Lincoln Laboratory, and the National Institute of Standards and Technology.

“Like all IARPA programs, Video LINCS is high-risk and high-payoff research that pushes the state of the science of what may be technically possible,” Dr. Meth said. “That said, I’m confident we’ll reach our goal of transitioning this capability—one that doesn’t exist today—into the hands of our IC partners.”



¹Which City Has the Most CCTV Cameras?
