SkySat

last updated: February 05, 2024

SkySat, operated by Planet, is a high-resolution constellation of 21 satellites that can revisit any location on Earth up to 10 times daily, with a collection capacity of 400,000 km² per day. SkySat images have a resolution of approximately 50 centimeters per pixel.

Constellation & Sensor Overview

The SkySat satellite constellation consists of multiple launches of our SkySat-C generation satellites, first launched in 2016.

Each satellite is 3-axis stabilized and agile enough to slew between different targets of interest. Each satellite has four thrusters for orbital control, along with four reaction wheels and three magnetic torquers for attitude control.

All SkySats contain Cassegrain telescopes with a focal length of 3.6m, with three 5.5 megapixel CMOS imaging detectors making up the focal plane.

SkySat Imagery Products

Item Types

SkySat Products are available for search and download via Planet’s APIs, User Interfaces, and Integrations, in the form of Scene, Collect, and Video products, which are encoded in our platform as a set of Item Types and Asset Types.

A SkySat Scene Product is an individual framed scene within a strip, captured by the satellite in its line-scan of the Earth. Each SkySat satellite carries three cameras, which capture three overlapping strips. Each of these strips contains overlapping scenes that are not organized to any particular tiling grid system.

SkySat Scene products are approximately 1 x 2.5 kilometers in size. They are represented in the Planet Platform as the SkySatScene item type.

A SkySat Collect Product is created by composing roughly 60 SkySat Scenes along an imaging strip into an orthorectified segment, approximately 20 x 5.9 kilometers in size. It is represented in the Planet Platform as the SkySatCollect item type. Because the necessary rectification steps are applied automatically, the Collect product is generally easier to handle for larger areas of interest. It is recommended over the Scene product whenever an AOI spans multiple scenes, particularly if a mosaic or composite of the individual scenes is required, or if you prefer not to perform orthorectification manually.

A SkySat Video Product is a full-motion video, between 30 and 120 seconds long, collected by a single camera on one of the SkySats. Its footprint is comparable to a SkySat Scene, about 1 x 2.5 kilometers. It is represented in the Planet Platform as the SkySatVideo item type.
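The item types above can be requested through Planet's Data API quick-search endpoint. The sketch below only builds the JSON request body, following the filter structure described in Planet's public Data API documentation; the endpoint URL and authentication are omitted, and the field names should be verified against the current API reference.

```python
import json

def skysat_search_body(item_type, start, end, geometry):
    """Build a quick-search request body for Planet's Data API.
    Filter structure per the public Data API docs (treat as an assumption
    to verify): an AndFilter combining a DateRangeFilter on `acquired`
    with a GeometryFilter on `geometry`."""
    return {
        "item_types": [item_type],
        "filter": {
            "type": "AndFilter",
            "config": [
                {"type": "DateRangeFilter",
                 "field_name": "acquired",
                 "config": {"gte": start, "lte": end}},
                {"type": "GeometryFilter",
                 "field_name": "geometry",
                 "config": geometry},
            ],
        },
    }

# Illustrative AOI and date range.
aoi = {"type": "Point", "coordinates": [-122.4, 37.77]}
body = skysat_search_body("SkySatCollect",
                          "2024-01-01T00:00:00Z",
                          "2024-02-01T00:00:00Z", aoi)
print(json.dumps(body, indent=2))
```

The same body works for SkySatScene or SkySatVideo by swapping the item type.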

Imagery Asset Types

SkySat Scene and Collect products are available for download in the form of imagery assets. Multiple asset types are made available for Scene and Collect products, each with differences in radiometric processing and/or rectification.

Asset Type availability varies by Item Type. You can find an overview of supported assets by item type here.

Basic Analytic (basic_analytic) assets are non-orthorectified, calibrated, multispectral imagery products with native sensor resolution (0.72-0.81m), that have been transformed to Top of Atmosphere (at-sensor) radiance. These products are designed for data science and analytic applications, and for users who wish to geometrically correct the data themselves with associated rational polynomial coefficients (RPCs) assets (ground control applied).

Basic L1A Panchromatic (basic_l1a_panchromatic_dn) assets are non-orthorectified, uncalibrated, panchromatic-only imagery products with native sensor resolution (0.72-0.81m), made available roughly two hours before all other SkySat asset types appear in the catalog. These products are designed for time-sensitive, low-latency monitoring applications, and can be geometrically corrected with associated rational polynomial coefficients (RPCs) assets (derived from satellite telemetry).

Basic Panchromatic (basic_panchromatic) assets are non-orthorectified, calibrated, super-resolved (0.65m), panchromatic-only imagery products that have been transformed to Top of Atmosphere (at-sensor) radiance. These products are designed for data science and analytic applications which depend on a wider spectral range (Pan: 450 - 900 nm), and for users who wish to geometrically correct the data themselves with associated rational polynomial coefficients (RPCs) assets (ground control applied).

Ortho Analytic (ortho_analytic) assets are orthorectified, calibrated, multispectral imagery products with native sensor resolution (0.72-0.81m), that have been transformed to Top of Atmosphere (at-sensor) radiance. These products are designed for data science and analytic applications which require imagery with accurate geolocation and cartographic projection.
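Conversion from the delivered at-sensor (top-of-atmosphere) radiance to TOA reflectance follows the standard formula ρ = πLd²/(ESUN·cosθs). A minimal sketch, assuming a hypothetical per-band radiometric scale factor and ESUN value; the real coefficients ship in each asset's metadata:

```python
import math

def toa_radiance(dn, scale):
    """Scale raw DNs to at-sensor radiance. `scale` is a hypothetical
    per-band radiometric coefficient, not a published SkySat value."""
    return dn * scale

def toa_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    """Standard TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    where theta_s is the solar zenith angle (90 deg minus sun elevation)
    and d_au is the Earth-Sun distance in astronomical units."""
    theta_s = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))

L = toa_radiance(1200, 0.01)  # illustrative DN and scale factor
rho = toa_reflectance(L, esun=1550.0, sun_elev_deg=45.0)
```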

Ortho Analytic Surface Reflectance (ortho_analytic_sr) assets are corrected for the effects of the Earth's atmosphere, accounting for molecular composition and its variation with altitude, along with aerosol content. By combining standard atmospheric models with MODIS water vapor, ozone, and aerosol data, this product provides reliable and consistent surface reflectance scenes across Planet's varied constellation of satellites as part of the normal, on-demand data pipeline.

Ortho Panchromatic (ortho_panchromatic) assets are orthorectified, calibrated, super-resolved (0.50m), panchromatic-only imagery products that have been transformed to Top of Atmosphere (at-sensor) radiance. These products are designed for data science and analytic applications which require a wider spectral range (Pan: 450 - 900 nm), highest available resolution, and accurate geolocation and cartographic projection.

Ortho Visual (ortho_visual) assets are orthorectified, color-corrected, super-resolved (0.50m), RGB imagery products that are optimized for the human eye, providing images as they would look if viewed from the perspective of the satellite. Lower resolution multispectral bands are sharpened by the super-resolved panchromatic band. These products are designed for simple and direct visual inspection, and can be used and ingested directly into a Geographic Information System or application.

Ortho Pansharpened (ortho_pansharpened) assets are orthorectified, uncalibrated, super-resolved (0.50m) multispectral imagery products. Lower resolution multispectral bands are sharpened to match the resolution of the super-resolved panchromatic band. These products are designed for multispectral applications which require highest available resolution and accurate geolocation and cartographic projection.

You can find our complete Imagery Product Specification here.

Video Asset Types

SkySat Video products are available for download in the form of video assets. You can find an overview of supported assets by SkySatVideo item type here.

Video File (video_file) assets are MP4 video files, produced from the Basic L1A Panchromatic scene assets captured as part of the full-motion video.

Video Frames (video_frames) assets are compressed folders which include all of the frames used to create the Video File, packaged as Basic L1A Panchromatic scene assets with accompanying rational polynomial coefficients (RPCs). These products are designed primarily for customers interested in using video frames for 3D reconstruction.

Product Naming

The name of each acquired SkySat image is designed to be unique and allow for easier recognition and sorting of the imagery. It includes the date and time of capture, as well as the satellite id that captured it. The name of each downloaded image product is composed of the following elements:

SkySatScene

<acquisition date>_<acquisition time>_<satellite_id><camera_id>_<frame_id>_<bandProduct>.<extension>

Example: 20200814_162132_ssc4d3_0021_analytic.tif

SkySatCollect

<acquisition date>_<acquisition time>_<satellite_id>_<frame_id>_<bandProduct>.<extension>

Example: 20200815_091045_ssc6_u0002_visual.tif

SkySatVideo

<acquisition date>_<acquisition time>_<satellite_id><camera_id>_video.mp4

Example: 20200808_133717_ssc3d1_video.mp4
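The naming schemes above can be parsed programmatically. A sketch for SkySatScene names; the regex encodes field widths inferred from the example names, so adjust it if your products differ:

```python
import re

# Pattern follows the SkySatScene naming scheme:
# <date>_<time>_<satellite_id><camera_id>_<frame_id>_<bandProduct>.<ext>
SCENE_RE = re.compile(
    r"(?P<date>\d{8})_(?P<time>\d{6})_"
    r"(?P<satellite>ssc\d+)(?P<camera>d\d+)_"
    r"(?P<frame>\d{4})_(?P<product>\w+)\.(?P<ext>\w+)$"
)

def parse_scene_name(name):
    """Split a SkySatScene product filename into its components."""
    m = SCENE_RE.match(name)
    if m is None:
        raise ValueError(f"not a SkySatScene product name: {name}")
    return m.groupdict()

info = parse_scene_name("20200814_162132_ssc4d3_0021_analytic.tif")
```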

Processing

Several processing steps are applied to SkySat imagery to produce the set of data products available for download.

SkySat Image Processing Chain illustration


Sensor & Radiometric Calibration

Darkfield/Offset Correction: Corrects for sensor bias and dark noise. Master offset tables are created by averaging on-orbit darkfield collects across 5-10 degree temperature bins and applied to scenes during processing based on the CCD temperature at acquisition time.

Flat Field Correction: Flat fields are collected for each optical instrument prior to launch. These fields are used to correct image lighting and CCD element effects to match the optimal response area of the sensor. Flat fields are routinely updated on-orbit during the satellite lifetime.
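The darkfield/offset and flat-field steps above amount to simple per-pixel array operations. A minimal sketch, using illustrative values rather than real calibration data:

```python
import numpy as np

def radiometric_correct(raw, offset, flat):
    """Simplified sketch of the two corrections described above:
    subtract the temperature-matched darkfield/offset table, then divide
    by the flat field (normalized so the optimal response area is 1.0)."""
    corrected = raw.astype(np.float64) - offset
    return corrected / flat

raw = np.array([[110.0, 220.0], [110.0, 220.0]])
offset = np.full((2, 2), 10.0)             # illustrative darkfield for this temp bin
flat = np.array([[1.0, 2.0], [1.0, 2.0]])  # illustrative flat-field response
img = radiometric_correct(raw, offset, flat)
```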

Camera Acquisition Parameter Correction: Determines a common radiometric response for each image (regardless of exposure time, number of TDI stages, gain, camera temperature and other camera parameters).

Inter-Sensor Radiometric Response (Intra-Camera): Cross calibrates the 3 sensors in each camera to a common relative radiometric response. The offsets between the sensors are derived using on-orbit cloud flats and the overlap regions between sensors on each SkySat spacecraft.

Super Resolution (Level 1B Processing): Super resolution is the process of creating an improved-resolution image by fusing information from multiple low-resolution images, producing a higher-resolution image that better describes the scene.
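The production SkySat super-resolution algorithm is not published; a classic "shift-and-add" scheme illustrates the fusion idea, under the simplifying assumption that each low-resolution frame sits at a known integer offset on the fine grid:

```python
import numpy as np

def shift_and_add(frames, factor):
    """Toy super-resolution: place each low-res frame onto a grid `factor`
    times finer at its known offset, then average overlapping samples.
    `frames` is a list of ((dy, dx), array) pairs with 0 <= dy, dx < factor."""
    h, w = frames[0][1].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for (dy, dx), f in frames:
        acc[dy::factor, dx::factor] += f
        cnt[dy::factor, dx::factor] += 1
    cnt[cnt == 0] = 1  # leave unsampled fine-grid cells at zero
    return acc / cnt

# Four 2x2 frames covering all subpixel offsets of a 2x upsampling.
frames = [((dy, dx), np.full((2, 2), 5.0)) for dy in (0, 1) for dx in (0, 1)]
hr = shift_and_add(frames, 2)
```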

Orthorectification

Removes terrain distortions. This process consists of two steps:

  1. The rectification tiedown process wherein tie points are identified across the source images and a collection of reference images (ALOS, NAIP, Landsat) and RPCs are generated.
  2. The actual orthorectification of the scenes using the RPCs, to remove terrain distortions. The terrain model used for the orthorectification process is derived from multiple sources (Intermap, NED, SRTM and other local elevation datasets) which are periodically updated. Snapshots of the elevation datasets used are archived, which helps identify the DEM that was used for any given scene.
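Step 2 relies on evaluating the RPCs, which map ground coordinates to image coordinates as a ratio of polynomials in normalized latitude, longitude, and height. Real RPC models use 20-term cubic polynomials per numerator and denominator; the sketch below keeps only constant and linear terms to show the structure, and all coefficient names are illustrative:

```python
def rpc_project(lat, lon, height, rpc):
    """Heavily simplified rational polynomial camera model:
    image line/sample = P_num / P_den in normalized ground coordinates."""
    # Normalize ground coordinates using the model's offsets and scales.
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (height - rpc["h_off"]) / rpc["h_scale"]

    def poly(c):  # c = (constant, coef_L, coef_P, coef_H)
        return c[0] + c[1] * L + c[2] * P + c[3] * H

    line = poly(rpc["line_num"]) / poly(rpc["line_den"])
    samp = poly(rpc["samp_num"]) / poly(rpc["samp_den"])
    # Denormalize to pixel coordinates.
    return (line * rpc["line_scale"] + rpc["line_off"],
            samp * rpc["samp_scale"] + rpc["samp_off"])

# Toy model: line tracks latitude, sample tracks longitude.
rpc = {
    "lat_off": 0.0, "lat_scale": 1.0, "lon_off": 0.0, "lon_scale": 1.0,
    "h_off": 0.0, "h_scale": 1.0,
    "line_num": (0.0, 0.0, 1.0, 0.0), "line_den": (1.0, 0.0, 0.0, 0.0),
    "samp_num": (0.0, 1.0, 0.0, 0.0), "samp_den": (1.0, 0.0, 0.0, 0.0),
    "line_scale": 1000.0, "line_off": 500.0,
    "samp_scale": 1000.0, "samp_off": 500.0,
}
line, samp = rpc_project(0.25, -0.1, 0.0, rpc)
```

In practice the inverse problem (image to ground, given a DEM) is solved iteratively by tools such as GDAL rather than by hand.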

Visual Product Processing

Presents the imagery as natural color, optimized as seen by the human eye. This process consists of three steps:

  1. Nominalization - sun angle correction, to account for differences in latitude and time of acquisition. This makes the imagery appear as if it was acquired at the same sun angle, by converting the exposure time to the nominal time (noon).
  2. Unsharp mask (sharpening filter) applied before the warp process.
  3. Custom color curve applied post warping.
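Step 2's unsharp mask can be sketched as adding back the difference between the image and a blurred copy. The production kernel and strength are not published; a 3x3 box blur with edge padding is assumed here for illustration:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding `amount` times the difference between the image
    and a blurred copy (3x3 box blur, edge-padded)."""
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(img, 1, mode="edge")
    blurred = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            blurred += k[dy, dx] * pad[dy:dy + img.shape[0],
                                       dx:dx + img.shape[1]]
    return img + amount * (img - blurred)

# A constant image is unchanged: its blur equals itself.
flat = np.full((4, 4), 7.0)
out = unsharp_mask(flat)
```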
