UDM 2

About UDM2

Availability - Date Ranges

Usable Data Masks (UDM2) are available globally for 4-band PlanetScope imagery captured since August 2018, and for select agricultural regions back to January 2018. A small percentage of 4-band PlanetScope imagery captured after August 2018 does not have the UDM2 asset, due to rectification and/or image-processing failures.

New metadata fields

The addition of the UDM2 asset also introduces a number of new metadata fields for the PSScene4Band item type. Like existing fields, these new metadata fields can be used to construct filters for searching Planet imagery: learn more about searches and filtering here.

| Field | Type | Value Range | Description |
| --- | --- | --- | --- |
| clear_percent | int | [0-100] | Percent of clear values in dataset. Clear values represent scene content areas (non-blackfilled*) that are deemed not to be impacted by cloud, haze, shadow and/or snow. |
| clear_confidence_percent | int | [0-100] | Percentage value: per-pixel algorithmic confidence in the 'clear' classification. |
| cloud_percent | int | [0-100] | Percent of cloud values in dataset. Cloud values represent scene content areas (non-blackfilled) that contain opaque clouds, which prevent reliable interpretation of the land cover content. |
| heavy_haze_percent | int | [0-100] | Percent of heavy haze values in dataset. Heavy haze values represent scene content areas (non-blackfilled) containing thin low-altitude clouds, higher-altitude cirrus clouds, soot or dust that allow fair recognition of land cover features, but do not allow reliable interpretation of the radiometry or surface reflectance. |
| light_haze_percent | int | [0-100] | Percent of light haze values in dataset. Light haze values represent scene content areas (non-blackfilled) containing thin low-altitude clouds, higher-altitude cirrus clouds, soot or dust that allow reliable recognition of land cover features, with up to ±10% uncertainty on commonly used indices (EVI and NDWI). |
| shadow_percent | int | [0-100] | Percent of shadow values in dataset. Shadow values represent scene content areas (non-blackfilled) that are not fully exposed to solar illumination as a result of atmospheric transmission losses due to cloud, haze, soot or dust, and therefore do not allow reliable interpretation of the radiometry or surface reflectance. |
| snow_ice_percent | int | [0-100] | Percent of snow and ice values in dataset. Snow/ice values represent scene content areas (non-blackfilled) that are hidden below snow and/or ice. |
| visible_percent | int | [0-100] | Percent of visible values in dataset. Visible values represent scene content areas (non-blackfilled) classified as clear, light haze, shadow, or snow/ice. |
| visible_confidence_percent | int | [0-100] | Average of the confidence percentages for the clear, light haze, shadow and snow/ice classifications. |

* Blackfilled content refers to empty regions of a scene file that have no value
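As an illustrative sketch of filtering on these fields, a Data API quick-search request can combine range filters on the UDM2 metadata; the filter structure below follows Planet's Data API conventions, and the threshold values are arbitrary examples:

```python
# Sketch of a Data API quick-search request body that uses the new
# UDM2 metadata fields. Thresholds are arbitrary examples.
import json

# Keep only scenes that are at least 90% clear and under 5% heavy haze.
udm2_filter = {
    "type": "AndFilter",
    "config": [
        {
            "type": "RangeFilter",
            "field_name": "clear_percent",
            "config": {"gte": 90},
        },
        {
            "type": "RangeFilter",
            "field_name": "heavy_haze_percent",
            "config": {"lt": 5},
        },
    ],
}

search_request = {
    "item_types": ["PSScene4Band"],
    "filter": udm2_filter,
}

print(json.dumps(search_request, indent=2))
```

The request body would then be POSTed to the Data API's quick-search endpoint; see the searching and filtering documentation for the full workflow.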

UDM2 Map Deliverable

The new UDM2 asset is delivered as a multi-band GeoTIFF file, with the following bands and values:

UDM2 Bands

| Band | Description | Pixel Value Range | Interpretation |
| --- | --- | --- | --- |
| Band 1 | Clear map | [0, 1] | 0: not clear, 1: clear |
| Band 2 | Snow map | [0, 1] | 0: no snow or ice, 1: snow or ice |
| Band 3 | Shadow map | [0, 1] | 0: no shadow, 1: shadow |
| Band 4 | Light haze map | [0, 1] | 0: no light haze, 1: light haze |
| Band 5 | Heavy haze map | [0, 1] | 0: no heavy haze, 1: heavy haze |
| Band 6 | Cloud map | [0, 1] | 0: no cloud, 1: cloud |
| Band 7 | Confidence map | [0-100] | Percentage value: per-pixel algorithmic confidence in classification |
| Band 8 | Unusable pixels | -- | Equivalent to the UDM asset; see Planet's Imagery Specification for complete details |
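As a sketch of how this band layout might be used (the array below is synthetic; in practice the bands would be read from the delivered GeoTIFF with a raster library such as rasterio), the per-pixel class bands can be combined with the confidence band to keep only confidently clear pixels:

```python
# Illustrative use of the UDM2 band layout on a synthetic 8-band array.
# Bands (0-indexed here): 0 clear, 1 snow, 2 shadow, 3 light haze,
# 4 heavy haze, 5 cloud, 6 confidence (0-100), 7 legacy UDM bits.
import numpy as np

rng = np.random.default_rng(0)
h, w = 4, 5

udm2 = np.zeros((8, h, w), dtype=np.uint8)
udm2[0] = rng.integers(0, 2, size=(h, w))       # clear map
udm2[5] = 1 - udm2[0]                           # treat the rest as cloud
udm2[6] = rng.integers(0, 101, size=(h, w))     # confidence map

# Usable pixels: classified clear with at least 80% confidence.
usable = (udm2[0] == 1) & (udm2[6] >= 80)

clear_percent = 100.0 * (udm2[0] == 1).mean()
print(f"clear: {clear_percent:.1f}%, confidently clear pixels: {usable.sum()}")
```

The same boolean mask can then be applied to the corresponding analytic image bands to exclude unusable pixels from downstream calculations.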

UDM2 Classification Methodology

Planet’s UDM2 classification approach is based on supervised machine learning techniques that use observation data from Planet sensors to train a classification model.

Planet collects a set of truth scenes that are used to train the model. The truth scenes consist of 4-band top-of-atmosphere-radiance (TOAR) images and their corresponding cloud masks (usable data masks). Each usable data mask is created by manually labeling the TOAR images to assign a UDM2 value to each pixel in the image.

To ensure that the truth scene dataset is representative of the global catalog of Planet imagery, Planet draws on a diversity of satellites, scene content, seasons, geographies and cloud types when curating and labeling truth scenes.

The labeled cloud mask dataset is then used to train a machine learning model, which is based on convolutional neural networks. The model can then be applied to generate a usable data mask for any new input image. A portion of the labeled cloud mask dataset is not used for training and instead is passed into the completed model for validation. If particular areas perform poorly (e.g., light haze accuracy over urban scenes), additional scenes are added to the truth set to improve the classification accuracy.
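The holdout evaluation described above can be sketched as follows; the labels and predictions here are synthetic, whereas actual validation would compare the model's output masks against the manually labeled truth masks, class by class:

```python
# Sketch of per-class holdout accuracy, of the kind used to spot weak
# classes (e.g. light haze) that need more truth scenes. Data is synthetic.
import numpy as np

CLASSES = ["clear", "snow", "shadow", "light_haze", "heavy_haze", "cloud"]

rng = np.random.default_rng(1)
truth = rng.integers(0, len(CLASSES), size=10_000)   # held-out labels
pred = truth.copy()
flip = rng.random(truth.size) < 0.1                  # simulate ~10% errors
pred[flip] = rng.integers(0, len(CLASSES), size=flip.sum())

# Per-class recall: of pixels truly in a class, how many were recovered?
per_class = {
    name: float((pred[truth == i] == i).mean())
    for i, name in enumerate(CLASSES)
}

for name, acc in sorted(per_class.items(), key=lambda kv: kv[1]):
    print(f"{name:>10}: {acc:.3f}")
```

Classes that score poorly on the holdout set indicate where additional truth scenes should be collected.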

Planet’s engineering team regularly reviews cloud masks and identifies new scenes to add to the truth scene training set. Additionally, customer reports of poorly performing scenes or regions are part of the feedback process to improve the truth training set.