Can you develop the best Multi-View Stereo (MVS) 3D mapping algorithm for commercial satellite imagery?
Thank you for all your submissions!
Challenge timeline: July 2016 through November 2016
The challenge will consist of two contests—Explorer and Master—which progress in level of difficulty. Winners have been selected for the Explorer Challenge and now the final Master Challenge is ready to be solved. Solvers will be asked to generate and submit an algorithm to convert high-resolution satellite images to 3D point clouds for the contest datasets.
To complete each contest, solvers will be provided with:
IARPA is conducting this Challenge to invite the broader research community of industry and academia, with or without experience in multi-view satellite imagery, to participate in a convenient, efficient and non-contractual way. IARPA’s use of a crowdsourcing approach to stimulate breakthroughs in science and technology also supports the White House’s Strategy for American Innovation, as well as government transparency and efficiency. The goals and objectives of the Challenge are to:
Generate and submit an algorithm to convert high resolution satellite images to 3D point clouds for the contest datasets.
The Master Contest is now over.
View the final standings at topcoder.com.
Throughout the Challenge, an online leaderboard will display solvers’ rankings and accomplishments, giving them various opportunities to have their work viewed and appreciated by stakeholders from industry, government and academic communities. Solvers will be eligible to win cash prizes during the Explorer and Master challenges from a total prize purse of $100K. Winners will be invited to present their solutions at a conference hosted by IARPA in the fall, where they will detail their approach.
The following information has been provided by winners who chose to accept the Open Source Incentives. The ground truth data provided to all participants has also been made available to the public by the Johns Hopkins University Applied Physics Laboratory.
Open Source Solution from sdris
Open Source Solution from carlito
This solution implements a stereo pipeline that produces elevation models from images taken by high-resolution optical satellites such as Pléiades, WorldView, QuickBird, SPOT or Ikonos. It generates 3D point clouds and digital surface models from stereo pairs (two images) or tri-stereo sets (three images) in a completely automatic fashion. View on GitHub
Open Source Solution from Psyho
This solution is built around the NASA Ames Stereo Pipeline, which produces very good point clouds from satellite imagery, but whose core functionality is unfortunately limited to stereogrammetry. This solution demonstrates how the Stereo Pipeline can be adapted to obtain point clouds from promising pairs of images. An algorithm then merges all of the point clouds, directly producing the final result. View on GitHub
JHU APL's Multiple View Stereo Benchmark for Satellite Imagery
The Johns Hopkins University Applied Physics Lab has provided a public benchmark data set for multiple view stereo applied to 3D outdoor scene mapping using commercial satellite imagery. This data supported the IARPA Multi-View Stereo 3D Mapping Challenge and is now made publicly available with no restrictions to support continued research. Learn More
To be considered for provisional evaluation, all submissions MUST satisfy the following requirements:
To be eligible for prize awards and ranking on the final leaderboard, the top ranking solvers on the provisional leaderboard must provide fully automated executable software to allow for independent verification of software performance and the metric quality of the output data. To support independent verification, software must read a single KML file to specify bounding box coordinates, read specified image files (solvers may elect to use a subset of the images), and output an ASCII text file with X, Y, Z, and intensity values. If not provided, set intensity values in the file to zero. Software provided will be used for no other purpose and will be returned or destroyed upon completion of the evaluation.
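The I/O contract above (read one KML file for the bounding box, output an ASCII file of X, Y, Z, intensity) can be sketched as follows. This is a minimal illustration using only the Python standard library; the function names are hypothetical and the KML is assumed to carry the bounding box as a standard `<coordinates>` element of lon,lat[,alt] triples.

```python
# Hypothetical sketch of the required I/O contract: read bounding-box
# coordinates from a KML file, and write the point cloud as ASCII text
# with X, Y, Z, and intensity columns (intensity set to zero if absent).
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def read_bounding_box(kml_path):
    """Return (min_lon, min_lat, max_lon, max_lat) from the first
    <coordinates> element of a KML file."""
    root = ET.parse(kml_path).getroot()
    coords_el = root.find(f".//{KML_NS}coordinates")
    # Each whitespace-separated entry is "lon,lat" or "lon,lat,alt".
    pts = [tuple(map(float, c.split(",")[:2]))
           for c in coords_el.text.split()]
    lons, lats = zip(*pts)
    return min(lons), min(lats), max(lons), max(lats)

def write_point_cloud(path, points):
    """Write (x, y, z) or (x, y, z, intensity) tuples, one per line."""
    with open(path, "w") as f:
        for p in points:
            x, y, z = p[:3]
            intensity = p[3] if len(p) > 3 else 0  # zero if not provided
            f.write(f"{x} {y} {z} {intensity}\n")
```

A real submission would additionally read the specified NITF image files (for example via GDAL) and run the actual MVS reconstruction between these two steps.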
Participants are asked to develop a Multi-View Stereo (MVS) solution to generate dense 3D point clouds using the commercial satellite images provided. Each submitted point cloud will be registered to ground truth using X, Y, and Z translational search and then compared to determine accuracy and completeness metric scores.
Each submission must have completeness greater than 0.5 to be considered for final ranking and eligibility for prize award.
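The evaluation idea described above can be sketched as a translational (X, Y, Z) search that aligns the submitted cloud to ground truth, followed by a completeness score: the fraction of ground-truth points with a submitted point within some distance threshold. The Challenge does not publish its exact metric definitions, search ranges, or thresholds, so the numbers and names below are illustrative assumptions.

```python
# Hedged sketch of registration-then-scoring: brute-force X/Y/Z
# translational search that maximizes completeness. Threshold and
# search grid are illustrative, not the Challenge's actual values.
import itertools, math

def nearest_dist(p, cloud):
    """Brute-force distance from point p to its nearest neighbor in cloud."""
    return min(math.dist(p, q) for q in cloud)

def completeness(truth, submitted, threshold=1.0):
    """Fraction of ground-truth points within threshold of the submission."""
    hits = sum(1 for p in truth if nearest_dist(p, submitted) <= threshold)
    return hits / len(truth)

def register_translation(truth, submitted, search=(-1, 0, 1), threshold=1.0):
    """Try every (dx, dy, dz) offset on the search grid; return the best
    completeness score and the offset that achieved it."""
    best = (-1.0, (0, 0, 0))
    for dx, dy, dz in itertools.product(search, repeat=3):
        shifted = [(x + dx, y + dy, z + dz) for x, y, z in submitted]
        best = max(best, (completeness(truth, shifted, threshold), (dx, dy, dz)))
    return best
```

For clouds of realistic size, a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force nearest-neighbor search, and the translational search would be run at a much finer resolution. Under this scoring, a submission passing the rule above would need the returned completeness to exceed 0.5.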
Open source software and other stereo benchmarks are publicly available as useful references for solvers interested in learning more about multi-view stereo and commercial satellite imagery as they complete the Challenge.
NASA’s Ames Stereo Pipeline (ASP)
Provides a reference for 3D mapping using DigitalGlobe stereo pairs, though not with large numbers of images, using a combination of traditional photogrammetry and computer vision techniques. Learn More
KITTI Vision Benchmark Suite
Uses the autonomous driving platform Annieway to develop novel, challenging real-world computer vision benchmarks. Tasks of interest are: stereo, optical flow, visual odometry, 3D object detection and 3D tracking. Learn More
TU-Darmstadt’s Multi-View Environment (MVE)
Provides a reference for structure-from-motion, multi-view-stereo, surface reconstruction, and surface texturing algorithms using frame cameras, though not with satellite imagery. Learn More
PROVisG MARS 3D Challenge
Aims to test and improve the state of the art in visual odometry and 3D terrain reconstruction for planetary exploration. Learn More
Geospatial Data Abstraction Library
Used to read NITF files and image metadata. Learn More
GeoViewer
A free tool for viewing images. GeoViewer supports the NITF files used for this challenge. Learn More
Middlebury Two-View Stereo Evaluation
Contains an on-line evaluation of current algorithms, many stereo datasets with ground-truth disparities, Middlebury’s stereo correspondence software, and an on-line submission script that allows you to evaluate your stereo algorithm in Middlebury’s framework. Learn More
Applied Imagery’s QT Reader
A free tool for viewing 3D point clouds and other geospatial data. QT Reader supports the LAS, LAZ, KML, and NITF files used for this challenge. Learn More
Middlebury Multi-View Stereo Evaluation
Presents a quantitative comparison of several multi-view stereo reconstruction algorithms on six benchmark datasets. Learn More
Explore training data from the Johns Hopkins University Applied Physics Laboratory, packaged in a Docker container, to get some practice in!
The Intelligence Advanced Research Projects Activity (IARPA) invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community (IC).