Tanager Overview

last updated: November 26, 2024

Constellation & Sensor Overview

The Tanager satellite carries an imaging spectrometer that captures approximately 424 bands at ~5 nm spacing across an approximate spectral range of 400-2500 nm, at a resolution of 30 meters per pixel. The first Tanager launched in August 2024, and Planet intends to launch additional Tanager satellites to meet market demand over the coming years.

Each satellite is 3-axis stabilized and agile enough to slew between different targets of interest. Each satellite has a single electric propulsion (EP) thruster for orbital control, along with four reaction wheels and three magnetic torquers for attitude control.

All Tanagers contain three-mirror anastigmat (TMA) telescopes with a focal length of 400 mm and Dyson form spectrometers, with a 640x480 pixel Mercury-Cadmium-Telluride (MCT) detector as the focal plane array.

Sensitivity Collection Modes

Tanager's hyperspectral imaging employs a minimum integration time of 8 ms, which in pushbroom mode yields an along-track pixel length of approximately 58 m. Pixels in geometrically corrected images are therefore rectangular, and for most use cases this pixel stretching is an undesirable effect.
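The ~58 m figure follows from ground-track speed multiplied by integration time. A quick back-of-the-envelope check (the ~7.25 km/s ground speed here is back-derived from these two numbers, not a published orbital parameter):

```python
# Along-track pixel length = ground-track speed x integration time.
# The ground speed is an illustrative assumption back-derived from the
# 58 m / 8 ms figures above, not a published Tanager parameter.
ground_speed_m_s = 7.25e3      # ~7.25 km/s, typical for low Earth orbit
integration_s = 8e-3           # 8 ms minimum integration time
along_track_m = ground_speed_m_s * integration_s   # ~58 m
```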

To mitigate this elongation and enhance signal-to-noise ratio (SNR), Tanager utilizes a technique called 'back-nodding.' This maneuver allows imaging to begin before and continue after the satellite passes directly over the target area, extending capture time and improving SNR for smaller regions. The table below summarizes each mode's minimum and maximum collection length.

TANAGER SENSITIVITY MODE DIMENSIONS
Mode                            Collection Length (min to max)   Width
Maximum Sensitivity (4x8ms)     18 to 65.9 km                    18 km
High Sensitivity (3x8ms)        18 to 91.5 km                    18 km
Medium Sensitivity (2x8ms)      18 to 153.1 km                   18 km
Standard Sensitivity (1x8ms)    18 to 481.2 km                   18 km
Glint (1x8ms)                   18 to 481.2 km                   18 km

Standard Sensitivity Collection Mode

In the Standard Sensitivity collection mode, also referred to as 1x8 ms, Tanager performs back-nodding to slow its ground scan rate just enough to avoid pixel stretching. The goal of this mode is to maximize swath length while still maintaining square pixels.

Maximum Sensitivity Collection Mode

In the Maximum Sensitivity collection mode, also referred to as 4x8 ms, Tanager performs much faster back-nodding to drastically slow its ground scan rate and achieve the highest possible SNR, allowing each point to be “seen” for a ~4 times longer duration. As a result, the effective exposure achieved is equivalent to 32 ms. The goal of this variant is to optimize SNR while remaining within the envelope of Tanager’s agility and orbital geometry.
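The sensitivity modes differ only in how many 8 ms frames effectively integrate each ground point. A small sketch of the effective exposure per mode (the square-root SNR scaling shown is a generic shot-noise-limited approximation, not a published Tanager figure):

```python
import math

# Each mode multiplies the 8 ms minimum integration time; the sqrt(multiplier)
# SNR gain is an illustrative shot-noise-limited assumption, not a Planet spec.
BASE_MS = 8
MODES = {"standard": 1, "medium": 2, "high": 3, "maximum": 4}

for name, mult in MODES.items():
    effective_ms = mult * BASE_MS                  # e.g. maximum -> 32 ms
    snr_gain = math.sqrt(mult)                     # rough SNR gain vs standard
    print(f"{name}: {effective_ms} ms effective exposure, ~{snr_gain:.2f}x SNR")
```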

Tanager Imagery Products

Tanager Chunking Strategy

A single Tanager collection can be quite long, so to process and deliver Tanager assets with more reasonable file sizes, collections are dynamically chunked. Each collect is separated into TanagerScenes based on a set of rules:

  1. A scene is at least 325 lines long.
  2. A scene is at most 750 lines long.
  3. The strategy seeks to produce square-ish scenes.
  4. All scenes within a single collection are approximately the same size. In the worst case, a collect will have only two different scene sizes, and these two sizes will vary by only a single line.

Example chunking below. Each | BOX | is a TanagerScene.

1700 line collect: | 0:567 | 567:1134 | 1134:1700 |

2600 line collect: | 0:650 | 650:1300 | 1300:1950 | 1950:2600 |
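The rules above can be sketched as a simple boundary calculation. This is an illustrative reconstruction, not Planet's actual implementation; in particular the ~600-line target for "square-ish" scenes is an assumption based on the 18 km swath at 30 m per pixel:

```python
def chunk_collect(total_lines, target=600, min_len=325, max_len=750):
    """Split a collect into square-ish, nearly equal scenes.

    Sketch only: the 600-line target is an assumption (18 km swath at
    30 m/px ~ 600 cross-track pixels), not a published Planet value.
    """
    # pick a scene count that makes scenes close to square
    n = max(1, round(total_lines / target))
    # enforce the minimum and maximum scene-length rules
    while n > 1 and total_lines / n < min_len:
        n -= 1
    while total_lines / n > max_len:
        n += 1
    # distribute lines so sizes differ by at most one line
    base, rem = divmod(total_lines, n)
    sizes = [base + 1] * rem + [base] * (n - rem)
    bounds, start = [], 0
    for size in sizes:
        bounds.append((start, start + size))
        start += size
    return bounds
```

Under these assumptions, a 1700-line collect yields boundaries 0:567, 567:1134, 1134:1700, and a 2600-line collect yields four 650-line scenes, matching the examples above.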

Imagery Item Type

Tanager Imagery products are available as individual Basic or Ortho Scenes, in either Radiance or Surface Reflectance form. Tanager imagery uses HDF5 as its native file format. These products can be obtained from the Planet APIs through the TanagerScene item type.

A Tanager Scene Product is an individual framed scene within a strip, captured by the satellite in its line-scan of the Earth.

Tanager Scene products have a swath width of 18 km, while the length varies depending on Planet's chunking strategy. They are represented in the Planet Platform as the TanagerScene item type.

Imagery Asset Types

Tanager Scene products are available for download in the form of imagery assets. Multiple asset types are made available for Scene and Collect products, each with differences in radiometric processing and/or rectification.

Asset type availability varies by item type. An overview of supported assets is given below:

Ortho Radiance Scene (ortho_radiance_hdf5): Orthorectified, top-of-atmosphere (at-sensor) calibrated radiance, in HDF5 format.
Basic Radiance Scene (basic_radiance_hdf5): Unorthorectified, top-of-atmosphere (at-sensor) calibrated radiance, in HDF5 format. Not projected to a cartographic projection.
Ortho Surface Reflectance Scene (ortho_sr_hdf5): Orthorectified, atmospherically corrected surface reflectance product, in HDF5 format.
Basic Surface Reflectance Scene (basic_sr_hdf5): Unorthorectified, atmospherically corrected surface reflectance product, in HDF5 format. Not projected to a cartographic projection.
Ortho Visual Scene (ortho_visual): Orthorectified red, green, blue (RGB) visual image with color correction.
Ortho Beta Usable Data Mask (UDM) (ortho_beta_udm): Orthorectified usable data mask (in beta), in GeoTIFF format.
Basic Beta UDM (basic_beta_udm): Unorthorectified usable data mask (in beta), in GeoTIFF format.
Geolocation Array (geolocation_array): Longitudes and latitudes (WGS84) of pixel centers, in GeoTIFF format.
Recent Monthly Basemap (recent_monthly_basemap): RGB contextual baselayer from the most recent PlanetScope Global Monthly Basemap, in GeoTIFF format.

You can find our complete Imagery Product Specification here.

Tanager Methane Products

Methane Item Type

Each TanagerMethane item contains the plumes that a human operator is able to detect within a corresponding TanagerScene, based on a methane enhancement data layer derived from a matched filter (doi:10.5194/amt-8-4383-2015).

TanagerMethane products are only offered in orthorectified format. These products can be obtained from the Planet APIs through the TanagerMethane item type.

A Tanager Methane Product is representative of a single plume and will have the dimensions of the plume extent. They are represented in the Planet Platform as the TanagerMethane item type.

Methane Asset Types

Tanager Methane products are available for download in the form of imagery and geographic feature assets. Multiple asset types are made available for Methane products, each with differences in publication latency.

Asset type availability varies by item type. An overview of supported assets is given below:

Ortho Methane QuickLook Plume (ortho_ql_ch4): Preliminary 8-bit scaled plume intensity in parts-per-million-meter (ppm-m), in GeoTIFF format. The image contains an alpha channel indicating pixels with no plume detections.
Methane QuickLook Plume Metadata (ql_ch4_json): Preliminary plume locations, length, size in kg/hr, and a confidence measure indicating the level of interpretation certainty, in GeoJSON format.

Product Naming

The name of each Tanager image and methane product is designed to be unique and to allow for easier recognition and sorting. The name of each downloaded product is composed of the following elements:

TanagerScene

<acquisition date>_<acquisition time>_<hundredths of a second>_<satellite_id>_<asset_type>.<extension>

Example: 20241005_062757_84_4001_ortho_radiance_hdf5.h5

The searchable product in the Data API is: TanagerScene 20241005_062757_84_4001

TanagerMethane

<strip_id> is equivalent to <acquisition date>_<acquisition time>_<hundredths of a second>_<satellite_id>

Example: 20241005_062752_00_4001_strip_A_ortho_ql_ch4.tif

The searchable product in the Data API is: TanagerMethane 20241005_062752_00_4001_strip_A
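Filenames following the TanagerScene scheme can be split back into their documented fields. A hypothetical parser sketch (the regex and helper name are assumptions for illustration, not Planet-published code; TanagerMethane names additionally insert a strip identifier before the asset type):

```python
import re

# Hypothetical helper: split a TanagerScene filename into the fields named
# in the scheme above. The regex is an illustrative assumption.
SCENE_NAME_RE = re.compile(
    r"(?P<date>\d{8})_(?P<time>\d{6})_(?P<hundredths>\d{2})_"
    r"(?P<satellite_id>\d{4})_(?P<asset_type>.+?)\.(?P<extension>\w+)$"
)

def parse_scene_name(name):
    """Return the filename fields as a dict, or None if it does not match."""
    match = SCENE_NAME_RE.match(name)
    return match.groupdict() if match else None
```

For example, parsing `20241005_062757_84_4001_ortho_radiance_hdf5.h5` recovers satellite_id `4001` and asset_type `ortho_radiance_hdf5`.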

Processing

Several processing steps are applied to Tanager imagery to produce the set of data products available for download.

TANAGER PROCESSING STEPS
Step Description
Dark Subtraction Corrects for sensor bias and dark level to ensure that zero illumination corresponds to zero radiance. The correction is updated frequently by averaging dark frames acquired over the non-sunlit side of Earth.
Pedestal Correction Subtracts remaining residual error in the zero point after the dark frame is subtracted so that the numerical zero is equivalent to the radiometric zero. This residual is estimated by computing the median value of masked pixels located at the edges of the detector, which are physically blocked from external illumination.
Flat Field Correction Corrects relative differences in pixel sensitivities to match those in the optimal response area of the sensor. Flat fields are collected for each optical instrument in lab conditions prior to launch, and are routinely updated on-orbit during the satellite lifetime.
Bad Pixel Correction Fills in defective pixels on the detector following the method described in Chapman et al. (2019), which replaces pixels by linearly interpolating to the most similar spectrum within the frame.
Optical Scatter Correction Removes stray light artifacts from scatter in the optical elements to bring the spectral response function (SRF) towards a Gaussian distribution. These artifacts are modeled as concentric Gaussians convolved with the original spectrum, so correction involves deconvolving the stray response components from the spectrum with a method outlined in Thompson et al. (2018a).
Optical Ghost Correction Follows the correction approach of Zandbergen et al. (2020) to remove structured stray light artifacts (“ghosts”) that arise due to unwanted reflections within the optics. A ghost image is predicted for each frame and subsequently subtracted to remove the stray signal.
Absolute Radiometric Calibration Converts the observations from Digital Number (DN) values into physical radiance units (W/(m²·sr·µm)).
Order Sorting Filter (OSF) Seam Correction Interpolates over the radiometrically suspect rows where the order sorting filter (OSF) seams are located.
Visual Product Processing Presents the imagery as natural color, as seen by the human eye. Only applied to the ortho_visual asset type.
Orthorectification The orthorectification process is a method to correct the geographic location of imagery. The orthorectification process depends on the accuracy of the reference imagery, the terrain model, satellite and sensor parameters. OneAtlas Airbus imagery is used as reference images during Tanager orthorectification and the terrain model used for the orthorectification process is derived from multiple sources (SRTM, Intermap, and other local elevation datasets) which are periodically updated. The orthorectification process consists of two key steps. The first step is a feature-based approach for coarse model refinement followed by area-based matching for fine model refinement. The algorithm provides an improved sensor model of the satellite state and sensor, allowing for more accurate georectification.
Atmospheric Correction Removes atmospheric effects and estimates surface reflectance. Per pixel surface reflectance values are calculated using the ISOFIT (v2.9.5) (Imaging Spectrometer Optimal FITting) python package. This uses an optimal estimation method for simultaneously solving for both the atmospheric composition and surface reflectance values using hyperspectral radiance imagery as the input.

