PlanetScope, operated by Planet, is a constellation of approximately 130 satellites, able to image the entire land surface of the Earth every day (a daily collection capacity of 200 million km²). PlanetScope images have a resolution of approximately 3 meters per pixel.
Constellation & Sensor Overview
The PlanetScope satellite constellation consists of multiple launches ("flocks") of Dove satellites. On-orbit capacity is constantly improving in capability and quantity, with technology improvements deployed at a rapid pace. Each satellite is a CubeSat 3U form factor (10 cm by 10 cm by 30 cm).
Since our first launch in 2016, we have released three PlanetScope instrument types:
|Instrument Name|Instrument Id|Description|
|---|---|---|
|Dove Classic|PS2|Built with a telescope we call "PS2", this instrument captures red, green, blue, and near infrared channels. It produces Scene products which are approximately 25.0 km x 11.5 km.|
|Dove-R|PS2.SD|Built with the same "PS2" telescope, but with updated Bayer pattern and pass-band filters, this instrument captures red, green, blue, and near infrared channels. It produces Scene products which are approximately 25.0 km x 23.0 km.|
|SuperDove|PSB.SD|Built with the "PSB" telescope and the same filter response as the PS2.SD instrument, this instrument captures red, green, blue, and near infrared channels, as well as a new red edge channel. It produces Scene products which are approximately 32.5 km x 19.6 km.|
You can read a more detailed overview of our PlanetScope Constellation and Sensors here.
PlanetScope Products are available for search and download via Planet’s APIs, User Interfaces, and Integrations, in the form of Scene and OrthoTile products, which are available in our platform as a set of Item Types and Asset Types.
A PlanetScope Scene Product is an individual framed scene within a strip, captured by the satellite in its continuous line-scan of the Earth. Scenes within a strip are overlapping and not organized to any particular tiling grid system.
PlanetScope Scene products range from approximately 280 to 630 square kilometers in size, depending on which instrument type captured them. They are represented in the Planet Platform as the PSScene3Band and PSScene4Band item types (serving 3- and 4-band data products, respectively).
A PlanetScope OrthoTile Product is a 25 km x 25 km orthorectified and tiled product generated from a set of consecutive scenes within a strip (usually 4 or 5), based on a worldwide, fixed UTM grid system. They are represented in the Planet Platform as the PSOrthoTile item type.
PlanetScope Scene and OrthoTile imagery products are available for download in the form of imagery assets. Multiple asset types are made available for Scene and OrthoTile products, each with differences in radiometric processing and/or rectification.
Asset Type availability varies by Item Type. You can find an overview of supported assets by item type here.
Basic Analytic (basic_analytic) assets are non-orthorectified, calibrated, multispectral imagery products that have been corrected for sensor artifacts and transformed to Top of Atmosphere (at-sensor) radiance. These products are designed for data science and analytic applications, and for users who wish to geometrically correct the data themselves with the associated rational polynomial coefficients (RPCs) asset type.
Analytic (analytic) assets are orthorectified, calibrated, multispectral imagery products that have been corrected for sensor artifacts and terrain distortions, and transformed to Top of Atmosphere (at-sensor) radiance. These products are designed for data science and analytic applications which require imagery with accurate geolocation and cartographic projection. Note: The PSOrthoTile item type also supports a 5-band Analytic asset type (analytic_5b) for items captured by SuperDove satellites.
Visual (visual) assets are orthorectified, color-corrected, RGB imagery products that are optimized for the human eye, providing images as they would look if viewed from the perspective of the satellite. These products are designed for simple and direct visual inspection, and can be used and ingested directly into a Geographic Information System or application.
Surface Reflectance (analytic_sr) assets are orthorectified and radiometrically corrected to ensure consistency across localized atmospheric conditions, and to minimize uncertainty in spectral response across time and location. These multispectral imagery products are designed for temporal analysis and monitoring applications, especially in the agriculture and forestry sectors. Note: Surface Reflectance asset types take longer to generate than our other PlanetScope products; they are typically available 8-12 hours after an item is published to our catalog.
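The item and asset types above can be searched programmatically through Planet's Data API. Below is a minimal sketch of a quick-search request body, assuming the public Data API v1 filter schema; the AOI geometry, date range, and cloud-cover threshold are illustrative values, not defaults.

```python
# Sketch of a Data API v1 quick-search payload for 4-band PlanetScope scenes.
# Adjust item_types to match the products available to your account.

def build_search_request(geometry, start, end, max_cloud=0.1):
    """Build a quick-search body combining geometry, date, and cloud filters."""
    return {
        "item_types": ["PSScene4Band"],
        "filter": {
            "type": "AndFilter",
            "config": [
                {"type": "GeometryFilter", "field_name": "geometry",
                 "config": geometry},
                {"type": "DateRangeFilter", "field_name": "acquired",
                 "config": {"gte": start, "lte": end}},
                {"type": "RangeFilter", "field_name": "cloud_cover",
                 "config": {"lte": max_cloud}},
            ],
        },
    }

aoi = {"type": "Point", "coordinates": [-122.4, 37.8]}  # illustrative AOI
req = build_search_request(aoi, "2023-06-01T00:00:00Z", "2023-06-30T00:00:00Z")
# POST this JSON to the Data API quick-search endpoint with your API key.
```

The payload is plain JSON, so it can be built and validated locally before any authenticated request is made.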
You can find our complete Imagery Product Specification here.
The name of each acquired PlanetScope image is designed to be unique and allow for easier recognition and sorting of the imagery. It includes the date and time of capture, as well as the id of the satellite that captured it. The name of each downloaded image product is composed of the following elements:
<acquisition date>_<acquisition time>_<acquisition time seconds hundredths>_<satellite_id>_<productLevel>_<bandProduct>.<extension>
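The naming convention above can be unpacked mechanically. A small sketch using a regular expression; the example filename is hypothetical and its field values (satellite id, product level, band product) are placeholders, not a real catalog item:

```python
import re

# One regex group per element of the naming convention; fields are
# underscore-delimited, so each group excludes underscores.
NAME_RE = re.compile(
    r"(?P<date>\d{8})_(?P<time>\d{6})_(?P<hundredths>\d{2})_"
    r"(?P<satellite_id>[^_]+)_(?P<level>[^_]+)_(?P<band_product>[^_.]+)\.(?P<ext>\w+)$"
)

def parse_scene_name(name):
    """Split a PlanetScope product filename into its named components."""
    m = NAME_RE.match(name)
    if not m:
        raise ValueError(f"not a PlanetScope product name: {name}")
    return m.groupdict()

# Hypothetical example name following the convention above.
parts = parse_scene_name("20230615_102304_42_abcd_3B_AnalyticMS.tif")
```

Parsing names this way is handy for sorting downloads by acquisition time or grouping them by satellite id.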
Several processing steps are applied to PlanetScope imagery to produce the set of data products available for download.
Sensor & Radiometric Calibration
Darkfield/Offset Correction: Corrects for sensor bias and dark noise. Master offset tables are created by averaging on-orbit darkfield collects across 5-10 degree temperature bins and applied to scenes during processing based on the CCD temperature at acquisition time.
Flat Field Correction: Flat fields are collected for each optical instrument prior to launch. These fields are used to correct image lighting and CCD element effects to match the optimal response area of the sensor. Flat fields are routinely updated on-orbit during the satellite lifetime.
Camera Acquisition Parameter Correction: Determines a common radiometric response for each image (regardless of exposure time, number of TDI stages, gain, camera temperature and other camera parameters).
Absolute Calibration: As a last step, the spatially and temporally adjusted datasets are transformed from digital number values into physically based radiance values, scaled to W/(m²·sr·μm) × 100.
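Because the stored pixel values are radiance multiplied by 100, recovering physical radiance is a single scaling step. A minimal sketch, assuming a scale factor of 0.01 (the inverse of the ×100 scaling above); the pixel values are stand-ins for a real band:

```python
import numpy as np

def dn_to_radiance(dn, scale=0.01):
    # Analytic pixel values are at-sensor radiance scaled by 100, so
    # multiplying by 0.01 recovers radiance in W/(m^2·sr·μm).
    return dn.astype(np.float64) * scale

dn = np.array([[12345, 6789]], dtype=np.uint16)  # illustrative pixel values
radiance = dn_to_radiance(dn)                    # W/(m^2·sr·μm)
```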
Orthorectification
Removes terrain distortions. This process consists of two steps:
- The rectification tiedown process wherein tie points are identified across the source images and a collection of reference images (ALOS, NAIP, Landsat) and RPCs are generated.
- The actual orthorectification of the scenes using the RPCs, to remove terrain distortions. The terrain model used for the orthorectification process is derived from multiple sources (Intermap, NED, SRTM and other local elevation datasets), which are periodically updated. Snapshots of the elevation datasets are archived, which helps identify the DEM that was used for any given scene at any given point in time.
Visual Product Processing
Presents the imagery as natural color, optimized as seen by the human eye. This process consists of three steps:
- Nominalization - Sun angle correction, to account for differences in latitude and time of acquisition. This makes the imagery appear to look like it was acquired at the same sun angle by converting the exposure time to the nominal time (noon).
- Unsharp mask (sharpening filter) applied before the warp process.
- Custom color curve applied post warping.
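The unsharp mask named in the second step can be sketched in a few lines: subtract a low-pass (blurred) copy from the image and add the scaled difference back. The box blur, radius, and amount below are illustrative stand-ins; the document does not specify Planet's production kernel or parameters.

```python
import numpy as np

def box_blur(band, radius=1):
    # Simple separable box blur, used here as a stand-in low-pass filter.
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(band, radius, mode="edge")
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return blurred

def unsharp_mask(band, amount=1.0):
    # sharpened = original + amount * (original - blurred)
    band = band.astype(np.float64)
    return band + amount * (band - box_blur(band))

flat = np.full((5, 5), 100.0)
out = unsharp_mask(flat)  # a uniform patch has no edges, so it is unchanged
```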
Surface Reflectance Product Processing
Removes atmospheric effects. This process consists of three steps:
- Top of Atmosphere (TOA) reflectance calculation using coefficients supplied with the at-sensor radiance product.
- Lookup table (LUT) generation using the 6SV2.1 radiative transfer code and MODIS near-real-time data inputs.
- Conversion of TOA reflectance to surface reflectance for all combinations of selected ranges of physical conditions and for each satellite sensor type using its individual spectral response as well as estimates of the state of the atmosphere.
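The first step above, the TOA reflectance calculation, can be sketched as a per-band multiplication, assuming (as in Planet's published examples) that the supplied reflectance coefficient applies directly to the analytic asset's pixel values. The coefficient value below is a made-up placeholder, not a real per-band coefficient:

```python
import numpy as np

def toa_reflectance(pixels, reflectance_coeff):
    # The per-band coefficient shipped with the at-sensor radiance product
    # converts pixel values to unitless Top of Atmosphere reflectance.
    return pixels.astype(np.float64) * reflectance_coeff

band = np.array([[20000]], dtype=np.uint16)          # illustrative pixel value
refl = toa_reflectance(band, reflectance_coeff=2.0e-5)  # placeholder coefficient
```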
You can find a detailed white paper on our Surface Reflectance Products here.
A note regarding imagery collection vs publication
While the PlanetScope constellation collects imagery of nearly all the landmass on Earth at a daily cadence, imagery must pass all quality thresholds for publication in our catalog. There are a few common reasons for non-publication of imagery or for the publication of test quality imagery in lieu of standard imagery.
Non-publication due to weather
Unpublished imagery is almost always due to weather events. Heavy cloud cover prevents our pipelines from rectifying imagery correctly and achieving ground lock. Unrectified imagery is only published in special circumstances.
If you notice gaps in image publication in your area, it’s very likely due to clouds, as weather conditions vary tremendously region to region. To get a sense of how many days to expect imagery to be published in a given season and region, you can look up average cloud cover on sites like weatherspark.com.
Non-publication due to lack of ground lock
We choose not to publish images that don’t have ground lock. About 95% of the time, lack of ground lock is due to clouds (see above). However, other conditions, such as extreme latitudes, topography, and open water, can also impact our ability to reference images to the ground. Non-ground-locked imagery only has approximate geolocation; therefore, we recommend using only imagery that has achieved ground lock, to prevent disorientation or analysis errors.
Very occasionally, a test quality image without ground lock will achieve ground lock and be re-categorized after 24-72 hours. More commonly, however, ground control points are refined post-publication; this normally happens within the first 24 hours after an image appears in our catalog.
Non-publication due to image quality
A small percentage of images collected cannot be fully processed due to anomalies in image capture, atmospheric conditions, or other factors. To maintain quality standards we do not publish anything that cannot be processed to a final composited image.
Image quality: Standard vs Test imagery
Planet’s Data API provides metadata regarding image quality, which falls into two categories: standard and test. The vast majority of published imagery falls into the “standard” category, meaning that it has passed all of Planet’s image quality metrics in the processing pipeline.
The remaining published imagery falls into the “test” category. The majority of test quality images are categorized this way due to lack of ground lock (see above). For most use cases we recommend using standard quality imagery.
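When working with search results programmatically, the quality category appears as an item property (named "quality_category" in Data API responses). A minimal sketch of keeping only standard quality items from a page of results; the feature dictionaries below are simplified placeholders for real API responses:

```python
def standard_only(features):
    """Keep only items whose quality_category property is 'standard'."""
    return [f for f in features
            if f.get("properties", {}).get("quality_category") == "standard"]

# Simplified stand-ins for Data API search-result features.
page = [
    {"id": "a", "properties": {"quality_category": "standard"}},
    {"id": "b", "properties": {"quality_category": "test"}},
]
kept = standard_only(page)  # only item "a" remains
```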
You can find whether an image is standard or test quality within the scene metadata in Explorer.
Access scene metadata via the info icon.
View scene metadata in the table on the left and look for “Quality category”.