Forest Carbon Diligence Workflows

last updated: July 17, 2024

The Forest Carbon Diligence product is composed of a bundle of data resources: Canopy Height, Canopy Cover, and Aboveground Live Carbon at 30-meter spatial resolution. These data resources are produced annually over the entire landmass of the Earth (between 75° N and 60° S). We use an extensive library of airborne LiDAR to train deep learning models to predict canopy height and canopy cover from satellite imagery. We use these predictions paired with 11 million spaceborne LiDAR (GEDI) footprints to train a model to estimate aboveground live carbon. The archive currently extends back to 2013.

View the latest Planet Forest Carbon Diligence Technical Specification.

View the current Planet Forest Carbon Diligence Release Notes.

Workflow: Analyze Historical Trends of Forest Structure and Carbon in Shasta National Forest, California

Setting up Your Script and Connecting with Planet Services

To execute the code in this example, you will need the following:

  • A Planet API key
  • Access to the forest_carbon_diligence_30m data layer and associated data resources:
    • CANOPY_HEIGHT_v1.1.0_30
    • CANOPY_COVER_v1.1.0_30
    • ABOVEGROUND_CARBON_DENSITY_v1.1.0_30
  • Configured credentials for storage of the results to cloud storage (Google Cloud Platform, Amazon Web Services, Microsoft Azure, or Oracle Cloud Storage)

The code examples in this workflow are written for Python 3.8 or greater. In addition to the Python standard library, the following packages are required:

- keyring
- pandas
- rasterio
- requests
- rioxarray
- xarray

Import the necessary libraries and enter your API key. The keyring package is used to store and retrieve your Planet API key: you are prompted to enter the key once, after which it is stored securely in the system's keyring:

# Import requirements
import base64
import os
from getpass import getpass
from io import StringIO

import keyring
import pandas as pd
import rasterio
import requests
import rioxarray as rx
import xarray as xr

# Authentication
update = False  # Set to True if you want to update the credentials in the system's keyring

if keyring.get_password("planet", "PL_API_KEY") is None or update:
    keyring.set_password("planet", "PL_API_KEY", getpass("Planet API Key: "))
else:
    print("Using stored API key")

PL_API_KEY = keyring.get_password("planet", "PL_API_KEY")

Confirm your API key by making a call to Planet services. You should receive an HTTP 200 response:

# Planet's Subscriptions API base URL for making RESTful requests
BASE_URL = "https://api.planet.com/subscriptions/v1"

auth = requests.auth.HTTPBasicAuth(PL_API_KEY, '')
response = requests.get(BASE_URL, auth=auth)
print(response.status_code)  # Expect 200

Creating a Planetary Variables Subscription with the Subscriptions API

To create a subscription, provide a JSON request object that details the subscription parameters, including:

  • Subscription name (required)
  • Planetary Variable source type (required)
  • Data product ID (required)
  • Subscription location in GeoJSON format (required)
  • Start date for the subscription (required)
  • End date for the subscription (optional)

Refer to Create a Planetary Variables Subscription in Subscribing to Planetary Variables for details about available parameters.

Create your JSON Subscription Description Object

This example creates a subscription for ten years of 30 m canopy height data over Shasta National Forest in California.

To confirm whether a provided geometry falls within a specific Area of Access, refer to the following code example.

Subscriptions can be created with or without a delivery parameter, which specifies a storage location to deliver raster data. Omitting the delivery parameter will create a metadata-only subscription. This example creates a subscription with a delivery parameter to deliver results directly to a Google Cloud storage bucket.

Refer to the Google Cloud documentation to create a service account key. Use the appropriate credentials for AWS, Azure, or Oracle Cloud Storage platforms.

Ensure that a delivery destination has been set up

The Subscriptions API supports delivery to cloud storage providers like Amazon S3, Microsoft Azure Blob Storage, Google Cloud Storage, or Oracle Cloud Storage. For any cloud storage delivery option, create a cloud storage account with both write and delete access. The Subscriptions API supports delivery to a Sentinel Hub collection as well.
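For reference, delivery blocks for two providers might look like the following sketch. The google_cloud_storage parameters mirror the example used later in this workflow; the amazon_s3 parameter names are an assumption and should be verified against the current Subscriptions API reference before use:

```python
# Sketch of two delivery blocks. Bucket names and credentials are
# placeholders; the amazon_s3 parameter names are assumptions to
# verify against the current Subscriptions API documentation.
gcs_delivery = {
    "type": "google_cloud_storage",
    "parameters": {
        "bucket": "{your_storage_bucket}",
        "credentials": "{base64_encoded_service_account_key}",
    },
}

s3_delivery = {
    "type": "amazon_s3",
    "parameters": {
        "bucket": "{your_s3_bucket}",
        "aws_region": "us-west-2",
        "aws_access_key_id": "{key_id}",
        "aws_secret_access_key": "{secret}",
    },
}
```

Whichever provider you choose, the resulting dictionary is passed as the "delivery" member of the subscription payload.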

# Read Google application credentials key into memory
# (replace with the path to your own service account key file)
GOOGLE_APPLICATION_CREDENTIALS = "path/to/your-service-account-key.json"
credentials_path = os.path.abspath(GOOGLE_APPLICATION_CREDENTIALS)
if not os.path.isfile(credentials_path):
    print(f"No Google service account key found at: {credentials_path}")

# Subscriptions API expects credentials in base64 format
with open(credentials_path, "rb") as key_file:
    gcs_credentials_base64 = base64.b64encode(key_file.read()).decode("utf-8")

# Create a new subscription JSON payload
payload = {
    "name": "CANOPY_HEIGHT_v1.1.0_30 - Shasta NF",
    "source": {
        "type": "forest_carbon_diligence_30m",
        "parameters": {
            "id": "CANOPY_HEIGHT_v1.1.0_30",
            "start_time": "2013-01-01T00:00:00Z",
            "end_time": "2023-01-01T00:00:00Z",
            "geometry": {
                "type": "Polygon",
                "coordinates": [[
                    [-123.39412734481135, 40.53806314480528],
                    [-123.39412734481135, 40.53399674816484],
                    [-123.38833323662753, 40.53399674816484],
                    [-123.38833323662753, 40.53806314480528],
                    [-123.39412734481135, 40.53806314480528]
                ]]
            }
        }
    },
    "delivery": {
        "type": "google_cloud_storage",
        "parameters": {
            "bucket": "{your_storage_bucket}",
            "credentials": gcs_credentials_base64
        }
    }
}

Create a Subscription Using Your JSON Description Object

These details are sent to the Subscriptions API to create a new subscription and receive its unique subscription ID.

def create_subscription(subscription_payload, auth):
    headers = {"content-type": "application/json"}
    try:
        response = requests.post(BASE_URL, json=subscription_payload, auth=auth, headers=headers)
        response.raise_for_status()
    except requests.exceptions.HTTPError:
        print(f"Request failed with {response.text}")
        return None
    response_json = response.json()
    subscription_id = response_json["id"]
    print(f"Successfully created new subscription with ID={subscription_id}")
    return subscription_id

# Create a new subscription
subscription_id = create_subscription(payload, auth)

Confirm the Subscription Status

To retrieve the status of the subscription, send a GET request to the subscription endpoint. Once the subscription is in a 'running' or 'completed' state, delivery is either in progress or finished, respectively. A subscription with an end date in the future remains in the 'running' state until the 'end_date' is in the past. See the status descriptions in the Subscriptions API documentation for a complete overview of possible states.

def get_subscription_status(subscription_id, auth):
    subscription_url = f"{BASE_URL}/{subscription_id}"
    response = requests.get(subscription_url, auth=auth)
    response_json = response.json()
    return response_json.get("status")

status = get_subscription_status(subscription_id, auth)
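A single call returns the status at one moment; to wait for delivery to finish, you can poll until a terminal state is reached. The sketch below accepts any status-returning callable, so it can wrap get_subscription_status; the set of terminal states used here is an assumption to check against the API's status descriptions:

```python
import time

def wait_for_completion(get_status, poll_interval=60, max_polls=60):
    """Poll a status-returning callable until a terminal state is reached."""
    # Assumed terminal states; verify against the Subscriptions API docs
    terminal = {"completed", "cancelled", "failed"}
    for _ in range(max_polls):
        status = get_status()
        if status in terminal:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("Subscription did not reach a terminal state in time")

# Example usage with the real API:
# final = wait_for_completion(lambda: get_subscription_status(subscription_id, auth))
```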

Retrieving and Analyzing the Subscription Data

Metadata results generated for this subscription can be retrieved directly in CSV format.

Retrieve results data in CSV format

# Retrieve the resulting data in CSV format.
resultsCSV = requests.get(f"{BASE_URL}/{subscription_id}/results?format=csv", auth=auth)

# Read CSV data
df = pd.read_csv(StringIO(resultsCSV.text), parse_dates=["item_datetime", "local_solar_time"])

# Filter by valid data only (substitute the metadata column names from
# your subscription for the elided column names below)
df = df[df[""].notnull()]
df = df[df[""] > 0]
df = df[df["status"] != 'QUEUED']
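Once filtered, the results can be summarized with ordinary pandas operations. Here is a small sketch on a synthetic frame; the real column names come from your subscription's metadata, so the ones below are illustrative only:

```python
import pandas as pd

# Synthetic stand-in for the results dataframe (illustrative columns)
df = pd.DataFrame({
    "item_datetime": pd.to_datetime(["2021-01-01", "2022-01-01", "2023-01-01"]),
    "status": ["SUCCESS", "SUCCESS", "QUEUED"],
})

# Drop queued results, then count delivered results per year
delivered = df[df["status"] != "QUEUED"]
per_year = delivered.groupby(delivered["item_datetime"].dt.year).size()
```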


Retrieving the GeoTIFF

The rioxarray and rasterio packages can be used to open and map the delivered GeoTIFF files directly from their cloud storage location.

There are many options for configuring access through the different cloud storage services. Rasterio uses GDAL under the hood, and GDAL provides configuration options for network-based file systems such as the following:

  • Amazon Web Services
  • Google Cloud
  • Microsoft Azure

The following example reads data directly from the Google Cloud Storage bucket configured previously. To work with canopy cover instead of height, modify the file_location variable to point to canopy cover files.

# Set the filepath of the GeoTIFF asset (replace your_bucket_name and year
# with your own bucket name and a year covered by the subscription)
file_location = f"gs://{your_bucket_name}/{subscription_id}/{year}/01/01/CANOPY_HEIGHT_v1.1.0_30-{year}0101T0000_ch.tiff"

# Use Google application credentials to allow access to the storage location
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = credentials_path
data = rx.open_rasterio(file_location)

Plot the GeoTIFF

You can visualize the resulting raster with xarray's built-in plotting (requires matplotlib):

data.squeeze().plot()


Visualizing Multiple Years of Data

To visualize a time series, load in the annual rasters and concatenate along the time dimension:

years = [2020, 2021, 2022]

year_data = []

for year in years:
    f = f"gs://{your_bucket_name}/{subscription_id}/{year}/01/01/CANOPY_HEIGHT_v1.1.0_30-{year}0101T0000_ch.tiff"
    year_data.append(rx.open_rasterio(f, mask_and_scale=True).assign_coords({"year": year}))

timeseries = xr.concat(year_data, dim="year")

To visualize the linear trend over time for each pixel, you can use xarray's polyfit() method. For example:

fit = timeseries.polyfit(dim="year", deg=1)
slopes = fit["polyfit_coefficients"].sel(degree=1)
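Under the hood, polyfit performs an ordinary least-squares fit along the chosen dimension for every pixel; for a single pixel's series, the degree-1 slope reduces to the familiar closed form. A pure-Python sketch for intuition:

```python
def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed against xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    return numerator / denominator

# A pixel gaining one meter of canopy height per year has slope 1.0
slope = linear_slope([2020, 2021, 2022], [10.0, 11.0, 12.0])
```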

Estimating Total Carbon for an Area of Interest (AOI)

To estimate total carbon for an AOI, read the carbon data, clip it to the AOI, and sum over pixels.

# read carbon data
c_location = f"gs://{your_bucket_name}/{subscription_id}/{year}/01/01/ABOVEGROUND_CARBON_DENSITY_v1.1.0_30-{year}0101T0000_acd.tiff"

carbon = rx.open_rasterio(c_location)

# define a geometry for the area of interest
xmin = -123.39
xmax = -123.38
ymin = 40.535
ymax = 40.536

aoi = {
    'type': 'Polygon',
    'coordinates': [[
        [xmin, ymin],
        [xmin, ymax],
        [xmax, ymax],
        [xmax, ymin],
        [xmin, ymin]
    ]]
}

# clip carbon data to the AOI
aoi_carbon = carbon.rio.clip([aoi], crs="EPSG:4326")

# compute total carbon
total_carbon = aoi_carbon.sum()
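Note that aboveground carbon density is an areal density (Mg C per hectare; confirm the exact units in the Technical Specification), so a plain pixel sum should be scaled by the pixel area. At 30 m resolution each pixel covers 0.09 ha. A pure-Python sketch of the conversion, with illustrative pixel values:

```python
# 30 m x 30 m pixels -> 900 m^2 -> 0.09 ha each
PIXEL_AREA_HA = 30 * 30 / 10_000

# Illustrative per-pixel carbon densities in Mg C/ha
densities = [45.0, 52.5, 60.0, 48.5]

# Total carbon (Mg C) = sum over pixels of density * pixel area
total_mg_c = sum(d * PIXEL_AREA_HA for d in densities)
```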



Learning Resources

Get the details in the Forest Carbon Technical Specification. Read about Planet APIs at Get Started with Planet APIs. Find a collection of guides and tutorials on Planet University. Also, check out Planet notebooks on GitHub, such as the Subscriptions tutorials: subscriptions_api_tutorial.

