An NGA Small Business Innovation Research (SBIR) partner is well on its way to creating an artificial intelligence-powered prototype that provides global mapping in near real time at unprecedented resolution. AI-generated maps at 10-meter resolution are already available to NGA’s analysts and customers.
NGA SBIR partner Impact Observatory, along with Esri and Microsoft, made history in summer 2021 by launching the first publicly available land-cover map at 10-meter resolution. The map was the first to use machine learning and big data to update land-cover maps created from human-tagged satellite imagery. It was produced with imagery from Europe’s Sentinel satellite constellation, with funding from Esri and donated server space and computing power from Microsoft.
In a current partnership with NGA that commenced in July 2022 and runs through January 2024, the company is developing an AI-powered prototype to provide global mapping and change monitoring even faster and with greater detail.
“A truly living map produced with deep learning is no longer a science fiction idea but is something you should come to expect,” said Steve Brumby, Ph.D., chief executive officer and chief technology officer of Impact Observatory, while sharing the status of the research with members of NGA’s workforce last month.
Traditionally, human-dependent maps are updated over the course of years. The most current global land-cover map available from the U.S. Geological Survey was last updated in 2019. These new AI-generated maps can be updated whenever new imagery is available — with the ultimate goal of continuous, automated updates at a global scale. Impact Observatory’s AI-powered maps are already able to show seasonal changes, as well as impacts from events such as the war in Ukraine.
“I think this is a radical leap forward in tipping,” said Brumby. “We believe this just opens the universe for what can be done.”
The key is the ability to use machine learning and automation to process the huge amount of data now available from a growing number of commercial and small satellites, and even drone imagery — and to overlay images from different sources to build detail. The 10-meter maps currently apply a deep learning model to 2.4 million satellite scenes, requiring over 1 million CPU core hours to process. Of note, the model slims down the data to extract only what’s needed to create the maps.
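The workflow described above — applying a deep-learning model scene by scene and keeping only the per-pixel land-cover labels — can be sketched in miniature. This is a hypothetical illustration, not Impact Observatory’s actual pipeline: the class list, chip size, and `toy_model` stand-in are all assumptions, with the stub standing in for a trained segmentation network.

```python
import numpy as np

# Hypothetical land-cover classes, loosely modeled on 10-meter land-cover maps.
CLASSES = ["water", "trees", "crops", "built", "bare"]

def segment_scene(scene, model, chip=4):
    """Split a scene (height, width, bands) into chips, run the model on
    each chip, and stitch per-pixel class labels into a full-scene map."""
    h, w, _ = scene.shape
    labels = np.zeros((h, w), dtype=np.int64)
    for i in range(0, h, chip):
        for j in range(0, w, chip):
            block = scene[i:i + chip, j:j + chip]
            labels[i:i + chip, j:j + chip] = model(block)
    return labels

def toy_model(block):
    # Stand-in for a trained deep-learning segmentation model: it simply
    # maps each pixel's brightest band to a class index.
    return block.argmax(axis=-1) % len(CLASSES)

rng = np.random.default_rng(0)
scene = rng.random((8, 8, 5))            # one tiny synthetic 5-band "scene"
land_cover = segment_scene(scene, toy_model)
print(land_cover.shape)                  # one class label per pixel
```

The design point this illustrates is the last sentence of the paragraph: the model's output is only a compact label map, a tiny fraction of the raw multi-band imagery, which is what makes processing millions of scenes tractable.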