
Credibility Assessment Standardized Evaluation (CASE) Challenge

Every day we make decisions about whether the people and information sources around us are reliable, honest, and trustworthy – a person, their actions, what they say, a particular news source, or the information actually being conveyed. Often, the only tools we have to make those decisions are our own judgments based on current or past experiences.

For some in-person and virtual interactions there are tools to aid our judgments. These might include listening to the way someone tells a story, asking specific questions, looking at a user badge or rating system, asking other people for confirming information – or, in more formal settings, verifying biometrics or recording someone's physiological responses, as with the polygraph. Each of these examples uses a very different type of tool to augment our ability to evaluate credibility. Yet there are no standardized, rigorous tests to evaluate how accurate such tools really are.

Countless studies have tested a variety of credibility assessment techniques, attempting to rigorously determine when a source and/or a message is credible and, more specifically, when a person is lying or telling the truth. Despite the large and lengthy investment in such research, a rigorous set of valid methods for determining the credibility of a source or their information across different applications remains elusive. The Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI), intends to launch the Credibility Assessment Standardized Evaluation (CASE) Challenge to address this gap.


The Challenge

Participants' Day WebEx Resources

  - Presentation Slides
  - Meeting Video
  - Meeting Q&A

IARPA's CASE Challenge seeks novel methods to measure the performance of credibility assessment techniques and technologies. Credibility assessment refers both to the assessment of the truthfulness of specific claims and to the assessment of the reliability, honesty, and trustworthiness of the source of a claim, whether that source is an individual, a group, or a broader organization or entity.

This challenge is focused on the methods used to evaluate credibility assessment techniques or technologies, rather than on the techniques or technologies themselves. In this context, a method is a detailed plan or set of actions that can be easily replicated or followed.

In this challenge, we ask that your solution be a method for conducting a study, including background information, the objectives of the research, the study design, the logistics and means for running the study, and details about what data would be collected if your solution were implemented.

The methodology you present in your solution, that is, the way the solution's method is implemented, will shape the type of information or personal attributes being evaluated. Depending on the approach, the motivations to be credible, as well as the penalties for failing to be judged credible, will vary. In some cases the ground truth about an individual's credibility, or the credibility of their information, will be difficult, if not impossible, to establish.

This highlights one of the roadblocks to testing a new credibility assessment technique for use in practical applications: there is no universally accepted method by which to establish the performance of a new technique or to compare techniques against one another.

A key difficulty in validating that a new method can be used for practical, everyday purposes lies in moving research findings into the real world. Current methods have several limitations, including but not limited to:

  1. Simplistic Procedures: In order to achieve clear results, research studies are often simplistic and highly controlled, which does not reflect the complexity of the conditions under which credibility is often assessed in real-world scenarios.
  2. Artificial Motivations: Decisions to be credible or not are often assigned, or otherwise controlled, limiting participants' ability to determine for themselves whether they want to be credible, when, and why.
  3. Low Psychological Realism: Participants have very little personal investment in the outcomes of the study, leading to low motivation to behave in natural and authentic ways. Relatedly, studies typically lack significant penalties if an individual fails to be judged credible, and the lack of meaningful incentives further exacerbates participants' low intrinsic motivation.

These limitations are particularly impactful in national security applications, where an individual may feel that their livelihood, core values and beliefs, safety, or freedom are in jeopardy if someone doesn't believe they are credible. It is difficult to build such motivation and jeopardy into a replicable methodology that is safe, ethical, AND could serve as a common standard across some or all credibility assessment applications.

The CASE Challenge is the first concerted effort to invite interested individuals to develop credibility assessment evaluation methods that can be used to objectively evaluate both existing and future credibility assessment techniques and technologies. In doing so, the CASE Challenge strives to incentivize a broad range of new ideas while still ensuring their utility in real-world applications. To meet this goal, a scoring panel of experts will evaluate each solution on the background and strength of its methods, how well it reflects realistic conditions, how creative and clever it is compared to currently used methods such as the mock-crime scenario, and how well it ensures the responsible care and consideration of participants.

Incentive Structure

The CASE Challenge is divided into two stages. In the first stage, all eligible solutions will be evaluated, and the top five will be selected to advance to the second stage. These five finalists, or Credibility Champions, will pitch their solutions to a panel of judges at the CASE Challenge Workshop in Washington, D.C., in July 2019, where the Grand Prize winners will be selected.

Additional Information

Questions concerning the challenge, Participants’ Day WebEx, or registration can be sent to CASEChallenge@iarpa.gov.

When does pre-registration begin? December 2, 2018.
When does the challenge begin? January 2, 2019.
How do I stay connected to get information about the challenge? Send a note to CASEChallenge@iarpa.gov.
Where do I learn more specifics of the challenge? https://www.iarpa.gov/challenges/casechallenge.html
When are submissions due? March 31, 2019.
When are winners announced? July 2019.
