Introducing SATLAS: A Revolutionary Tool for Climate Change Monitoring

A groundbreaking tool has been unveiled today, offering an unprecedented view of renewable energy projects and global tree coverage. SATLAS, developed by the Allen Institute for AI, leverages generative AI to enhance satellite images, providing users with a clearer and more detailed perspective of our planet. This pioneering tool aims to assist in the fight against climate change by offering vital insights to policymakers and researchers working toward environmental goals.

The distinctive feature of SATLAS is its utilization of a technique called “Super-Resolution.” By employing deep learning algorithms, the tool enhances low-resolution satellite images, filling in missing details and generating high-resolution representations. This enhances the clarity and accuracy of the images, enabling users to identify important features such as buildings, solar farms, wind turbines, and changes in tree canopy coverage.
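To make the idea concrete: many deep-learning super-resolution networks end with a sub-pixel ("pixel shuffle") layer that rearranges learned feature channels into a finer spatial grid, turning per-pixel detail predictions into a larger image. The sketch below illustrates only that rearrangement step in plain numpy; the shapes are hypothetical and this is not SATLAS's actual architecture.

```python
import numpy as np

def pixel_shuffle(features: np.ndarray, scale: int) -> np.ndarray:
    """Rearrange (C*scale^2, H, W) feature maps into (C, H*scale, W*scale).

    This depth-to-space step is how many super-resolution networks
    convert learned channels of sub-pixel detail into a higher-resolution
    output grid.
    """
    c2, h, w = features.shape
    c = c2 // (scale * scale)
    # Split the channel axis into (C, scale, scale), then interleave
    # the two scale axes into the spatial dimensions.
    x = features.reshape(c, scale, scale, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # (C, H, scale, W, scale)
    return x.reshape(c, h * scale, w * scale)

# Toy example: 4 feature channels collapse into one 2x-upscaled band.
features = np.arange(4 * 3 * 3, dtype=float).reshape(4, 3, 3)
high_res = pixel_shuffle(features, scale=2)
print(high_res.shape)  # (1, 6, 6)
```

In a real network, convolutional layers would first predict those extra channels from the low-resolution input; the shuffle itself adds no parameters, which is why it is a popular final layer for upscaling.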

Currently, SATLAS focuses on renewable energy projects and tree coverage worldwide, utilizing satellite imagery from the European Space Agency’s Sentinel-2 satellites. The data, which is updated monthly, encompasses most regions of the world, excluding parts of Antarctica and open oceans far from land. It offers valuable information for policymakers striving to meet climate-related targets and other environmental objectives.

While SATLAS is an innovative and expansive tool, its developers acknowledge that there are still improvements to be made. Like other generative AI models, SATLAS is susceptible to occasional inaccuracies, commonly known as "hallucinations." These arise from differences in architecture across regions, which can cause the model to predict incorrect building shapes. The tool may also erroneously place vehicles and vessels in unexpected locations, reflecting patterns in its training images.

The creation of SATLAS involved meticulous manual labeling of thousands of wind turbines, offshore platforms, solar farms, and tree cover percentages from satellite images. This extensive dataset facilitated the training of deep learning models to recognize these features independently. Furthermore, to achieve super-resolution capabilities, the models were fed numerous low-resolution images of the same location at different times, enabling predictions of sub-pixel details in high-resolution images.
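The multi-image idea can be illustrated with classic "shift-and-add" super-resolution: each low-resolution frame samples the scene at a slightly different sub-pixel offset, so accumulating several frames onto a finer grid recovers detail no single frame contains. The toy numpy sketch below assumes known integer sub-pixel offsets for clarity; SATLAS instead learns this fusion with deep networks, so this is an illustration of the principle, not the actual pipeline.

```python
import numpy as np

def shift_and_add(frames, offsets, scale):
    """Fuse low-res frames of one scene onto a grid `scale`x finer.

    frames  : list of (H, W) arrays, the same location at different times
    offsets : per-frame sub-pixel shift (dy, dx), in high-res pixels
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                # Place each low-res sample at its sub-pixel position
                # on the high-res grid and average overlapping samples.
                acc[y * scale + dy, x * scale + dx] += frame[y, x]
                cnt[y * scale + dy, x * scale + dx] += 1
    cnt[cnt == 0] = 1  # leave unobserved cells at zero
    return acc / cnt

# Toy scene: a 4x4 high-res pattern observed as four 2x2 low-res frames,
# each sampling the scene at a different sub-pixel offset.
truth = np.arange(16.0).reshape(4, 4)
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in offsets]
recovered = shift_and_add(frames, offsets, scale=2)
print(np.allclose(recovered, truth))  # True: the shifted frames recover it
```

With real satellite imagery the offsets are neither integer nor known in advance, which is one reason learned models outperform this classical scheme, but the underlying information source, many slightly shifted views of the same place, is the same.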

Looking ahead, the Allen Institute plans to expand SATLAS to include a variety of maps, such as identifying global crop types. The objective is to establish a foundational model for monitoring our planet, allowing researchers from various disciplines to access AI predictions and investigate the impacts of climate change and other phenomena occurring on Earth.


Frequently Asked Questions

1. What is SATLAS?

SATLAS is a revolutionary tool developed by the Allen Institute for AI that employs generative AI to enhance satellite images, providing a clearer view of renewable energy projects and global tree coverage.

2. How does SATLAS improve the images?

SATLAS utilizes a technique called “Super-Resolution,” which uses deep learning models to fill in missing details in low-resolution satellite images, generating high-resolution representations.

3. What insights does SATLAS offer?

SATLAS provides valuable insights into renewable energy projects, tree canopy coverage changes, and other environmental features. These insights are crucial for policymakers and researchers working towards climate and environmental goals.

4. What challenges does SATLAS face?

Despite its advancements, SATLAS may occasionally exhibit inaccuracies or “hallucinations” in its generated images, particularly in predicting building shapes and placing vehicles and vessels.

5. How was SATLAS developed?

SATLAS was created through manual labeling of various features from satellite images, allowing deep learning models to recognize these features independently. Additionally, the models were trained using multiple low-resolution images of the same location to predict high-resolution details.