Fighting Fires Together: xView2 Prize Challenge Helps Automate Damage Assessments

Earthquakes, volcanic eruptions, floods, and wildfires are formidable natural phenomena that demand rapid but strategic responses to prevent significant loss of life and property. Earth satellite data can enable responders to identify the location, cause, and severity of impacts, including property damage. However, these natural disasters often affect large areas, and the manual process of searching through extensive imagery to locate and assess damage is slow and labor-intensive. Trained analysts who examine the images must integrate their knowledge of an area’s geography with the specific disaster’s conditions to score building damage. How can this process be sped up to bring accurate information to disaster responders more quickly?

NASA Earth Applied Sciences’ Open Innovation and Disasters program area joined the Defense Innovation Unit (DIU), part of the Department of Defense, to address just this problem, and they enlisted the expertise of one key group – the public! DIU’s xView2 Challenge invited image analysts and computer vision experts to participate in an open, international prize competition to create machine learning algorithms that could process pre- and post-disaster satellite imagery to assess building damage.

The competition used two satellite photographs of the city of Santa Rosa, California, before and after a 2017 fire (the top left and center images). On the top right is the actual damage assessment conducted by remote sensing experts reviewing the satellite imagery. Images on the bottom row are the results of the competition: Models 1 through 5 are the algorithms of the top five winning challenge participants. In all images, red shows destroyed structures, orange shows major damage, blue shows minor damage, and green shows undamaged buildings.

The challenge was supported by a large dataset, xBD, which consists of 850,000 building annotations across more than 45,000 km² of imagery spanning 10+ countries and six different disaster types. Participants were given a baseline machine learning model created by Carnegie Mellon University’s Software Engineering Institute (CMU SEI) to begin their algorithm development, but were also free to create their own path to automatically classify building damage in the curated satellite imagery.
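To make the task concrete, below is a minimal sketch, in PyTorch, of the kind of two-branch (“siamese”) model commonly built for this setting: a shared encoder embeds the pre- and post-disaster images, and a decoder compares the two embeddings to predict a per-pixel damage class. The architecture, layer sizes, and class count here are illustrative assumptions, not the CMU SEI baseline’s actual design.

```python
# Illustrative sketch only: a minimal two-branch ("siamese") segmentation
# model for pre/post-disaster damage mapping. Layer sizes and class labels
# are assumptions for illustration, not the CMU SEI baseline.
import torch
import torch.nn as nn

DAMAGE_CLASSES = 5  # background + undamaged, minor, major, destroyed

class SiameseDamageNet(nn.Module):
    def __init__(self, classes: int = DAMAGE_CLASSES):
        super().__init__()
        # Shared encoder applied to both the pre- and post-disaster image,
        # so the two views are embedded in the same feature space.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Decoder compares the two embeddings (concatenated along the
        # channel dimension) and predicts a per-pixel damage class.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, classes, 1),
        )

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.encoder(pre), self.encoder(post)], dim=1)
        return self.decoder(features)  # (batch, classes, H, W) logits

# Usage: one 1024x1024 pre/post image pair, the chip size used in xBD.
model = SiameseDamageNet()
pre = torch.randn(1, 3, 1024, 1024)
post = torch.randn(1, 3, 1024, 1024)
damage_map = model(pre, post).argmax(dim=1)  # per-pixel damage labels
```

Sharing encoder weights between the two branches is a common design choice here: it forces the model to represent the before and after images in the same feature space, so differences between the embeddings correspond to change on the ground rather than to quirks of two separate networks.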

U.S. and international participants, including 505 different teams from various colleges and universities, submitted more than 2,000 solutions against the xView2 public leaderboard. The top three winning solutions accurately detected the locations of damaged buildings and classified them as undamaged, minor damage, major damage, or destroyed more than 80% of the time. The submissions of these winning solutions can be found at the project’s resource website, DIUx-xView.
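For reference, the xView2 leaderboard combined two sub-scores: how well a model localized buildings, and how well it classified their damage level. The sketch below illustrates that kind of combined metric; the 0.3/0.7 weighting and the harmonic mean over per-class F1 scores follow the challenge’s published scoring description, while the counts and function names are hypothetical.

```python
# Simplified sketch of an xView2-style leaderboard score: a weighted
# combination of building localization F1 and damage classification F1.
from statistics import harmonic_mean

def f1(tp: int, fp: int, fn: int) -> float:
    """Standard F1 from true-positive / false-positive / false-negative counts."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def xview2_score(localization_f1: float, damage_f1s: list[float]) -> float:
    # The harmonic mean rewards models that do well on *every* damage
    # class, so a model cannot hide a weak class behind strong ones.
    return 0.3 * localization_f1 + 0.7 * harmonic_mean(damage_f1s)

# Hypothetical per-class counts for undamaged / minor / major / destroyed.
damage_f1s = [f1(900, 80, 60), f1(120, 90, 100), f1(200, 70, 80), f1(400, 40, 50)]
print(xview2_score(0.86, damage_f1s))
```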