The Intelligence Advanced Research Projects Activity (IARPA) often selects its research efforts through the Broad Agency Announcement (BAA) process. This request for information (RFI) is intended to provide information relevant to a possible future IARPA program, so that feedback from potential participants can be considered prior to the issuance of a BAA. Respondents are invited to comment on the content of this announcement, including suggestions for improving the scope of a possible solicitation, to ensure that every effort is made to adequately address the scientific and technical challenges described below.

Responses to this request may be used to support development of, and subsequently be incorporated within, a future IARPA Program BAA and therefore must be available for unrestricted public distribution. The following sections of this announcement contain details of the scope of technical efforts of interest, along with instructions for the submission of responses.

Background & Scope

IARPA is interested in methods that can be used to generate probabilistic forecasts for events of interest to national security decision makers, including political, social, and military events on one-month to one-year time horizons. IARPA is soliciting submissions on the following topics:

1. Forecasting methods with broad applicability to events of interest that:

  • Generate probabilistic forecasts ("there is a probability P of event E in time period T") that can later be tested against observed events for calibration and discrimination
  • Generate conditional "what if" forecasts ("if action A is taken, there is a probability P of causing event E in time period T"), across a range of possible actions, one branch of which can later be tested against observed events
  • Apply to forecasting problems involving sparse and/or poorly-structured data
  • Have empirical support from published evaluations of accuracy, particularly accuracy relative to at least one other forecasting method on the same forecasting problem
  • Are more accurate than individual expert judgment and group deliberation by experts
  • Are not already reviewed in Armstrong [1], or have been the subject of subsequent changes in evidence or theory since that review
  • Do not require sophisticated statistical expertise among users
  • Can be used in online environments
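
As a rough illustration of the first requirement above, the sketch below scores a set of probabilistic forecasts against observed outcomes using the Brier score (a proper scoring rule) and tabulates calibration by probability bin. All forecast and outcome values are invented for illustration; they are not drawn from any real forecasting exercise.

```python
# Illustrative sketch only: scoring probabilistic forecasts of the form
# "probability P of event E in time period T" against observed outcomes.
# All forecast/outcome data below are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; the score rewards both calibration and discrimination."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, n_bins=5):
    """Group forecasts into probability bins and compare the mean forecast
    in each bin with the observed event frequency in that bin."""
    bins = {}
    for p, o in zip(forecasts, outcomes):
        b = min(int(p * n_bins), n_bins - 1)
        bins.setdefault(b, []).append((p, o))
    table = []
    for b in sorted(bins):
        pairs = bins[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        table.append((mean_p, freq, len(pairs)))
    return table

# Hypothetical forecasts (probabilities) and observed outcomes (1 = occurred).
forecasts = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1, 0.6, 0.4]
outcomes  = [1,   1,   0,   0,   0,   0,   1,   0]

print("Brier score:", round(brier_score(forecasts, outcomes), 3))
for mean_p, freq, n in calibration_table(forecasts, outcomes):
    print(f"mean forecast {mean_p:.2f} vs observed frequency {freq:.2f} (n={n})")
```

A well-calibrated forecaster's mean forecast in each bin tracks the observed frequency; discrimination shows up as low Brier scores driven by forecasts near 0 and 1 that match outcomes.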

2. Methods to generate accurate probabilistic judgments from large, widely-dispersed groups of human experts that are amenable to web-based elicitations, that generate more accurate estimates than the unweighted average of individual expert judgments, and that require at most a few minutes of time per elicitation per expert. These methods could include, but are not limited to:

  • Mathematical aggregation of judgments, such as weighting judgments by confidence, expertise, performance on seed questions, cognitive style, or other predictors of accuracy
  • Behavioral aggregation of judgments, such as Delphi

In addition, we are interested in evidence for techniques that improve the accuracy of these methods, including techniques for:

  • Selecting experts
  • Improving recruitment and participation in judgment exercises
  • Training in calibration exercises
  • Stating clear questions that pass the clairvoyance test [2]
  • Decomposition
  • Representing likelihoods using frequencies, odds, or figures
  • Justifying estimates with reasons or arguments
  • Non-monetary incentives for information discovery and honest reporting, including "Bayesian Truth Serum" approaches
  • Evaluation of judgments using proper scoring rules
  • Feedback on group judgment and/or individual accuracy
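
To illustrate the mathematical-aggregation idea listed above, the sketch below compares the unweighted average of expert probability judgments with an average weighted by each expert's performance on seed questions with known outcomes (here, weights inversely proportional to seed-question Brier score). The experts, judgments, and outcomes are all hypothetical, and inverse-Brier weighting is only one of many possible weighting schemes.

```python
# Illustrative sketch only: aggregating expert probability judgments.
# Weights are derived from each expert's Brier score on "seed" questions
# with known outcomes; all experts and numbers below are hypothetical.

def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Seed questions with known outcomes, used only to estimate expert skill.
seed_outcomes = [1, 0, 1, 0]
seed_judgments = {
    "expert_a": [0.9, 0.2, 0.8, 0.1],   # well calibrated on seeds
    "expert_b": [0.6, 0.5, 0.5, 0.6],   # near-uninformative
    "expert_c": [0.3, 0.7, 0.4, 0.8],   # systematically wrong
}

# Weight each expert by the inverse of (Brier score + epsilon),
# then normalize so the weights sum to one.
eps = 0.01
raw = {e: 1.0 / (brier(js, seed_outcomes) + eps) for e, js in seed_judgments.items()}
total = sum(raw.values())
weights = {e: w / total for e, w in raw.items()}

# Judgments on the live question to be aggregated.
live = {"expert_a": 0.8, "expert_b": 0.5, "expert_c": 0.2}

unweighted = sum(live.values()) / len(live)
weighted = sum(weights[e] * p for e, p in live.items())
print(f"unweighted mean: {unweighted:.3f}")
print(f"skill-weighted mean: {weighted:.3f}")
```

In this toy example the skill-weighted estimate moves toward the judgment of the expert who scored best on the seed questions, which is precisely the behavior a weighting scheme must demonstrate, against real outcomes, to beat the unweighted average.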

3. Experimental designs to evaluate the methods described in topics 1 and 2 that:

  • Employ conditions representative of all-source intelligence analysis
  • Are validated by prior results indicating that the effects on performance under experimental conditions are similar to the effects on performance when working with real-world data under real-world conditions
  • In the absence of established empirical validity, demonstrate strong face validity and suggest how validity can be empirically established
  • Are compatible with various outcome measures, including calibration and discrimination of probability estimates, time required to generate estimates, internal consistency, and clarity of analysis [3]

The responses to this RFI may justify a multi-year competitive program.

Preparation Instructions to Respondents

IARPA solicits respondents to submit ideas related to this topic for use by the Government in formulating a potential program. IARPA requests that submittals briefly and clearly describe the potential approach or concept, outline critical technical issues, and comment on the expected performance, robustness, and estimated cost of the proposed approach. This announcement contains all of the information required to submit a response. No additional forms, kits, or other materials are needed.

IARPA appreciates responses from all capable and qualified sources from within and outside of the US. Because IARPA is interested in an integrated approach, responses from teams with complementary areas of expertise are encouraged. Responses have the following formatting requirements:

  1. A one-page cover sheet that identifies the title, organization(s), and the respondent's technical and administrative points of contact (including names, addresses, phone and fax numbers, and email addresses of all co-authors), and that clearly indicates its association with IARPA-RFI-10-01;
  2. A substantive, focused, one-half page executive summary;
  3. A description (limited to 5 pages in minimum 12 point Times New Roman font, appropriate for single-sided, single-spaced 8.5 by 11 inch paper, with 1-inch margins) of the technical challenges and suggested approach(es);
  4. A list of citations (any significant claims or reports of success must be accompanied by citations, and reference material MUST be attached);
  5. Optionally, a single overview briefing chart graphically depicting the key ideas.


Disclaimers and Important Notes

This is an RFI issued solely for information and new program planning purposes and does not constitute a solicitation.

Respondents are advised that IARPA is under no obligation to acknowledge receipt of the information received, or provide feedback to respondents with respect to any information submitted under this RFI.

Responses to this notice are not offers and cannot be accepted by the Government to form a binding contract. Respondents are solely responsible for all expenses associated with responding to this RFI. It is the respondents' responsibility to ensure that the submitted material has been approved for public release by the organization that funded whatever research is referred to in their response.

The Government does not intend to award a contract on the basis of this RFI or to otherwise pay for the information solicited, nor is the Government obligated to issue a solicitation based on responses received. Neither proprietary nor classified concepts or information should be included in the submittal. Input on technical aspects of the responses may be solicited by IARPA from non-Government consultants/experts who are bound by appropriate non-disclosure requirements.


  1. Armstrong JS, ed., (2005), Principles of Forecasting, Norwell, MA: Kluwer.
  2. Tetlock PE, (2005), Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.
  3. Rieber S, (2004), Intelligence Analysis and Judgmental Calibration, International Journal of Intelligence and Counterintelligence 17: 97-112.

For information contact:


Responses Due: March 1, 2010