Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE)
*NEW APPLICATION DEADLINE: The proposal due date for the initial round of selections has been changed to May 9, 2016*
The CREATE program seeks proposals to develop and experimentally test systems that use crowdsourcing and structured analytic techniques (STs) to improve analytic reasoning. These systems will help people better understand the evidence and assumptions that support—or conflict with—conclusions. Secondarily, they will also help users better communicate their reasoning and conclusions. STs hold promise for increasing the logical rigor and transparency of analysis: they can help reveal underlying logic and identify unstated assumptions. Yet they are not widely used in the Intelligence Community or elsewhere—possibly because current versions are cumbersome or require too much time. Crowdsourcing has the potential to solve these problems by dividing the labor, allowing dispersed groups of analysts to contribute information and ideas where they have comparative advantages. Crowdsourcing can help analysts identify and understand alternative hypotheses, arguments, and points of view. Crowdsourcing of structured techniques may also facilitate rational deliberation by integrating different perspectives, so that analysis can effectively benefit from “crowd wisdom.”
Current practice. The Intelligence Community (IC), like many organizations, typically conducts analysis in traditional ways: individual analysts review information sources, think through issues, confer with colleagues, conduct their analysis, and embody their results in written products. This approach is time-tested, intuitive, and requires no special training in methods. But it has drawbacks. The WMD Commission noted, “Perhaps most troubling, we found an Intelligence Community in which analysts have a difficult time stating their assumptions up front, explicitly explaining their logic, and, in the end, identifying unambiguously for policymakers what they do not know.”
Current analysis and reporting tools provide little scaffolding to help users assess competing hypotheses, produce clear, well-supported judgments, or identify and overcome biases. Nor do they provide much support for explaining to others why those judgments were made, why seemingly plausible alternatives were rejected, and what major informational gaps remain.
Structured Techniques. A variety of STs have been devised with the goal of improving reasoning—for example, Argument Mapping, Analysis of Competing Hypotheses (ACH), and Bayesian Reasoning Networks. Some, but not all, STs involve software that visually represents the relationships among hypotheses, reasons, objections and evidence. Other approaches to STs might employ or combine wiki-editing, structured debates, a reasoning analogue of the World Health Organization’s (WHO) checklists, or similarly text-oriented approaches.
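To make one of the listed techniques concrete, the sketch below—illustrative only, not part of the solicitation—implements the core scoring step of Analysis of Competing Hypotheses (ACH): each item of evidence is rated against each hypothesis, and hypotheses are ranked by how little evidence is inconsistent with them. The hypothesis and evidence labels are hypothetical placeholders.

```python
# Minimal ACH-style scoring sketch. Ratings: "C" = consistent,
# "I" = inconsistent, "N" = neutral/ambiguous.
# Hypotheses (H1..H3) and evidence items (E1..E3) are hypothetical.
matrix = {
    "H1": {"E1": "C", "E2": "I", "E3": "C"},
    "H2": {"E1": "C", "E2": "C", "E3": "N"},
    "H3": {"E1": "I", "E2": "I", "E3": "C"},
}

def rank_hypotheses(matrix):
    """Order hypotheses from least to most disconfirmed.

    ACH ranks by counting inconsistent evidence: the hypothesis with
    the fewest inconsistencies is the one least disconfirmed, not
    necessarily the one with the most supporting evidence.
    """
    scores = {h: sum(1 for rating in ev.values() if rating == "I")
              for h, ev in matrix.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

for hypothesis, inconsistencies in rank_hypotheses(matrix):
    print(hypothesis, inconsistencies)
```

In a crowdsourced setting of the kind the program envisions, the ratings in the matrix would come from many contributors rather than a single analyst; the scoring step itself is unchanged.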
Intelligence analysts are trained in the use of some STs, and many have used one or more at least occasionally. But the routine use of STs is the exception rather than the rule. Often STs are difficult to employ: software implementations may not be user-friendly, and generating the full apparatus that an ST requires may be time-consuming, confusing, or difficult. Furthermore, complex STs may require extensive input in areas beyond an individual user’s particular expertise. User-friendly crowdsourcing has the potential to overcome some of these obstacles: analysts will be able to focus on the components that most interest them and about which they are most knowledgeable, contributing when they are available and have something important to offer.
Contracting Office Address
Office of the Director of National Intelligence
Intelligence Advanced Research Projects Activity
Washington, DC 20511
Primary Point of Contact
Solicitation Status: CLOSED
Proposers' Day Date: June 30, 2015
BAA Release Date: February 16, 2016
BAA Question Period: February 16, 2016 – March 15, 2016
Proposal Due Date: May 9, 2016
Proposers' Day Briefings