
Multi-View Stereo 3D Mapping Challenge

Can you develop the best Multi-View Stereo (MVS) 3D mapping algorithm for commercial satellite imagery?

The Master Contest is Now Over

Thank you for all your submissions!

Timeline: Explorer Contest (July 2016), Explorer Judging (August 2016), Master Contest (September 2016), Master Judging (October 2016), Winners Announced (November 2016)
The Problem

The challenge consists of two contests, Explorer and Master, which progress in difficulty. Winners have been selected for the Explorer Contest, and the final Master Contest is now ready to be solved. Solvers are asked to generate and submit an algorithm that converts high-resolution satellite images to 3D point clouds for the contest datasets.

To complete each contest, solvers will be provided with:

Full large images formatted as National Imagery Transmission Format (NITF) with sensor metadata encoded within each NITF file – provided to support those with software already using standard geospatial file formats
Much smaller cropped images in TIFF format with sensor metadata provided in ASCII text files – provided to better enable participation from a broad range of solvers
Explorer contest challenge area ground truth lidar and scoring code so that solvers can test their own solutions prior to submission
KML bounding box for each challenge area


Be Part of the Innovation

IARPA is conducting this Challenge to invite the broader research community of industry and academia, with or without experience in multi-view satellite imagery, to participate in a convenient, efficient and non-contractual way. IARPA’s use of a crowdsourcing approach to stimulate breakthroughs in science and technology also supports the White House’s Strategy for American Innovation, as well as government transparency and efficiency. The goals and objectives of the Challenge are to:

Promote and benchmark research in multiple view stereo algorithms applied to satellite imagery
Foster innovation through crowdsourcing and move beyond current research limitations for 3D point clouds
Stimulate various communities, including computer vision, remote sensing, and photogrammetry, to develop and enhance automated methods for deriving accurate 3D point clouds from multi-view satellite imagery
Cultivate and sustain an ongoing collaborative community dedicated to this technology and research
The Challenge

Generate and submit an algorithm to convert high-resolution satellite images to 3D point clouds for the contest datasets.

How to Enter

The Master Contest is now over.

View the final standings at topcoder.com.

Total Prize Pool
$100,000



Prizes

Throughout the Challenge, an online leaderboard will display solvers’ rankings and accomplishments, giving them various opportunities to have their work viewed and appreciated by stakeholders from industry, government and academic communities. Solvers will be eligible to win cash prizes during the Explorer and Master challenges from a total prize purse of $100K. Winners will be invited to present their solutions at a conference hosted by IARPA in the fall, where they will detail their approach.


Master Contest Winners
1st Place carlito Carlo de Franchis, Gabriele Facciolo, & Enric Meinhardt-Llopis
2nd Place psyho Przemysław Dębiak
3rd Place sdrdis Sébastien Drouyer
4th Place qinrj321 Rongjun Qin
5th Place kbatsos Konstantinos Batsos
Open Source Incentives View the submissions below!
Explorer Contest Winners
1st Place sdrdis Sébastien Drouyer
2nd Place carlodef Carlo de Franchis, Gabriele Facciolo, & Enric Meinhardt-Llopis
3rd Place KevinLaTourette Kevin LaTourette
4th Place kbatsos Konstantinos Batsos
5th Place all_random Viet Cuong
Best Feedback nofto Peter Novotny
Open Source Solutions

The following information has been provided by winners who chose to accept the Open Source Incentives. The ground truth data provided to all participants has also been made available to the public by the Johns Hopkins University Applied Physics Laboratory.


Open Source Solution from sdrdis

This solution generates a 3D map from a set of satellite images.

What must be provided:
  • A set of satellite images
  • A list of suitable pairs of images
  • A KML file
What is generated:
  • A TXT file containing 3D point positions, or
  • An NPZ (NumPy) file containing a height map, color map, and confidence metric

View on GitHub
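
For illustration, here is a hypothetical Python sketch of consuming the NPZ output described above. The array names inside the file, the UTM grid origin, and the ground sample distance are all assumptions for this example; the actual values are defined by the solution's code and the KML bounding box.

    import numpy as np

    # Load the NPZ output; "height_map" and "confidence" are assumed key
    # names, not necessarily those used by the actual solution.
    data = np.load("output.npz")
    height = data["height_map"]     # per-pixel heights in meters (assumed)
    conf = data["confidence"]       # per-pixel confidence (assumed)

    # Keep confident pixels and emit X Y Z rows. The UTM grid origin (x0, y0)
    # and the 0.5 m ground sample distance are made up for this example.
    x0, y0, gsd = 354000.0, 6182000.0, 0.5
    rows, cols = np.nonzero(conf > 0.5)
    points = np.column_stack([x0 + cols * gsd,
                              y0 - rows * gsd,
                              height[rows, cols]])
    np.savetxt("points.txt", points, fmt="%.3f")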

Open Source Solution from carlito

This solution implements a stereo pipeline that produces elevation models from images taken by high-resolution optical satellites such as Pléiades, WorldView, QuickBird, Spot, or Ikonos. It generates 3D point clouds and digital surface models from stereo pairs (two images) or tri-stereo sets (three images) in a completely automatic fashion.

View on GitHub

Open Source Solution from Psyho

This solution is built around the NASA Ames Stereo Pipeline, which produces very good point clouds from satellite imagery but whose core functionality is limited to two-view stereogrammetry. This solution demonstrates how the Stereo Pipeline can be adapted to obtain point clouds from promising pairs of images; an algorithm then merges all of the point clouds to directly produce the final result.

View on GitHub
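
As a rough illustration of the merging step described above (not psyho's actual algorithm), one simple way to fuse several pairwise point clouds is to bin all points onto an X-Y grid and keep the median height in each cell:

    import numpy as np

    def fuse(clouds, cell=0.5):
        """clouds: list of (N, 3) X, Y, Z arrays in a common UTM frame."""
        pts = np.vstack(clouds)
        ij = np.floor(pts[:, :2] / cell).astype(np.int64)  # grid cell per point
        order = np.lexsort((ij[:, 1], ij[:, 0]))           # sort points by cell
        ij, z = ij[order], pts[order, 2]
        # Split the sorted heights into one group per occupied cell.
        cuts = np.nonzero(np.any(np.diff(ij, axis=0), axis=1))[0] + 1
        fused = []
        for zs, c in zip(np.split(z, cuts), ij[np.r_[0, cuts]]):
            # Cell center X, Y and the median Z of all points in the cell.
            fused.append([(c[0] + 0.5) * cell, (c[1] + 0.5) * cell, np.median(zs)])
        return np.array(fused)

The median makes the fused surface robust to outliers in any single pairwise reconstruction; a real pipeline would also weight points by a stereo confidence measure.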

JHU APL's Multiple View Stereo Benchmark for Satellite Imagery

The Johns Hopkins University Applied Physics Lab has provided a public benchmark data set for multiple view stereo applied to 3D outdoor scene mapping using commercial satellite imagery. This data supported the IARPA Multi-View Stereo 3D Mapping Challenge and is now publicly available with no restrictions to support continued research.

Learn More



Rules

To be considered for provisional evaluation, all submissions MUST satisfy the following requirements:

File format must be ASCII text.
For the Master contest, we are replacing the LAZ output file format required for the Explorer contest with ASCII text, which is much simpler for solvers to implement.
Text point cloud files can also be read by standard point cloud visualization software just like LAZ, so solvers from the Explorer contest may still use the same tools to verify their solutions. While text files are much larger than binary files, the Master contest polygons are small enough for this to be manageable.
Point coordinates must be in the Universal Transverse Mercator (UTM) projection with horizontal datum World Geodetic System 1984 (WGS84).
This can be verified by viewing both the ASCII text point cloud file and the KML file in the free QT Reader (http://appliedimagery.com/download/) or comparable software.

Absolute horizontal accuracy of each 3D point cloud must be within 20 meters to ensure reliable registration with and comparison to ground truth.
This can be verified using the provided KML file with the free QT Reader, Google Earth, or comparable software.
Open Source Libraries are permitted.
Solvers may use Open Source Libraries, including GNU General Public License (GPL) libraries, in their solution.

Example files satisfying these criteria will be provided. Solvers will be automatically notified of any noncompliant submissions.

To be eligible for prize awards and ranking on the final leaderboard, the top ranking solvers on the provisional leaderboard must provide fully automated executable software to allow for independent verification of software performance and the metric quality of the output data. To support independent verification, software must read a single KML file to specify bounding box coordinates, read specified image files (solvers may elect to use a subset of the images), and output an ASCII text file with X, Y, Z, and intensity values. If not provided, set intensity values in the file to zero. Software provided will be used for no other purpose and will be returned or destroyed upon completion of the evaluation.
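
As a minimal sketch of producing a rules-compliant submission file in Python, assuming points are in (or converted to) UTM WGS84 coordinates; the UTM zone and the sample point below are made up for illustration:

    import numpy as np
    from pyproj import Transformer

    def write_submission(points, path, intensities=None):
        """Write an ASCII point cloud with one 'X Y Z intensity' line per point.

        points      -- (N, 3) array of UTM easting, northing, height in meters
        intensities -- optional (N,) array; per the rules, zero if not available
        """
        if intensities is None:
            intensities = np.zeros(len(points))
        with open(path, "w") as f:
            for (x, y, z), i in zip(points, intensities):
                f.write(f"{x:.3f} {y:.3f} {z:.3f} {i:.1f}\n")

    # Convert a WGS84 lon/lat/height point to UTM before writing.
    # EPSG:32721 (WGS84 / UTM zone 21S) is a hypothetical zone for this example.
    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32721", always_xy=True)
    lon, lat, h = -58.58, -34.49, 25.0          # a made-up point
    x, y = to_utm.transform(lon, lat)
    write_submission(np.array([[x, y, h]]), "pointcloud.txt")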




Judging Criteria

Participants are asked to develop a Multi-View Stereo (MVS) solution to generate dense 3D point clouds using the commercial satellite images provided. Each submitted point cloud will be registered to ground truth using X, Y, and Z translational search and then compared to determine accuracy and completeness metric scores.

Accuracy is defined as the root mean square (RMS) error, measured in meters, compared to ground truth.
Completeness is defined as the fraction of points with Z error less than 1 meter compared to ground truth.

Each submission must have completeness greater than 0.5 to be considered for final ranking and eligibility for prize award.
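
For solvers who want a self-check before submitting, the following Python sketch mimics the scoring described above: a brute-force X-Y translational search against a ground truth surface, with the best Z shift solved in closed form, followed by the accuracy and completeness metrics. It is an illustration only, not the official scoring code, and it simplifies by measuring error in Z alone.

    import numpy as np
    from scipy.interpolate import NearestNDInterpolator

    def score(cloud, truth, search=np.arange(-10.0, 10.5, 0.5)):
        """cloud, truth: (N, 3) arrays of X, Y, Z in a common UTM frame."""
        # Treat the ground truth lidar as a surface: Z as a function of (X, Y).
        truth_z = NearestNDInterpolator(truth[:, :2], truth[:, 2])
        best = None
        for dx in search:                         # brute-force X, Y search
            for dy in search:
                z_ref = truth_z(cloud[:, 0] + dx, cloud[:, 1] + dy)
                dz = np.median(z_ref - cloud[:, 2])   # best Z shift, closed form
                err = np.abs(cloud[:, 2] + dz - z_ref)
                completeness = float(np.mean(err < 1.0))
                if best is None or completeness > best[0]:
                    # Accuracy: RMS Z error in meters over all points.
                    best = (completeness, float(np.sqrt(np.mean(err ** 2))))
        completeness, accuracy = best
        return accuracy, completeness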

Resources

Open source software and other stereo benchmarks are publicly available as useful references for solvers interested in learning more about multi-view stereo and commercial satellite imagery as they complete the Challenge.


NASA’s Ames Stereo Pipeline (ASP)

Provides a reference for 3D mapping using DigitalGlobe stereo pairs (though not with large numbers of images) using a combination of traditional photogrammetry and computer vision techniques.

Learn More

KITTI Vision Benchmark Suite

Uses the autonomous driving platform AnnieWAY to develop novel, challenging real-world computer vision benchmarks. Tasks of interest include stereo, optical flow, visual odometry, 3D object detection, and 3D tracking.

Learn More

Docker Engine

Visit the Docker Engine page to begin Docker preparations.

See the directions for how to use Docker.

TU-Darmstadt’s Multi-View Environment (MVE)

Provides a reference for structure-from-motion, multi-view stereo, surface reconstruction, and surface texturing algorithms using frame cameras (though not with satellite imagery).

Learn More

PROVisG MARS 3D Challenge

Aims at testing and improving the state of the art in visual odometry and 3D terrain reconstruction in planetary exploration.

Learn More

Geospatial Data Abstraction Library

Used to read NITF files and image metadata.

Learn More
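
As a starting point, here is a minimal sketch of reading a NITF image and its metadata with GDAL's Python bindings. The filename "image.ntf" is a placeholder, and which metadata fields appear depends on the file.

    from osgeo import gdal

    ds = gdal.Open("image.ntf")
    print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)  # dimensions and bands
    print(ds.GetMetadata())           # NITF header fields (default domain)
    print(ds.GetMetadata("RPC"))      # RPC sensor model coefficients, if present
    band = ds.GetRasterBand(1)
    window = band.ReadAsArray(0, 0, 512, 512)   # read a 512 x 512 pixel window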

LizardTech’s GeoViewer

A free tool for viewing images. GeoViewer supports the NITF files used for this challenge.

Learn More

Middlebury Two-View Stereo Evaluation

Contains an online evaluation of current algorithms, many stereo datasets with ground-truth disparities, Middlebury's stereo correspondence software, and an online submission script that allows you to evaluate your stereo algorithm in Middlebury's framework.

Learn More

Applied Imagery’s QT Reader

A free tool for viewing 3D point clouds and other geospatial data. QT Reader supports LAS, LAZ, KML, and NITF files used for this challenge.

Learn More

Middlebury Multi-View Stereo Evaluation

Presents a quantitative comparison of several multi-view stereo reconstruction algorithms on six benchmark datasets.

Learn More

Training Data

Explore training data from the Johns Hopkins University Applied Physics Laboratory, packaged in a Docker container, to get some practice in!

Coming Soon!



The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community (IC).
