June 14 2021

Overview

In this tutorial we are going to use Planet imagery to calculate an approximate percentage change in reservoir water levels.

You will need a Planet API key, GDAL, NumPy & SciPy, QGIS, and the Python libraries listed below.

To export your Planet API Key to your environment:

export PLANET_API_KEY=a3a64774d30c4749826b6be445489d3b # (not a real key)

You can find GDAL installation instructions here. For installation of scipy & numpy, see this page.

To install the Python libraries, do:

pip install retrying
pip install requests
pip install bokeh
pip install simplecv

# optionally:
pip install planet

Once installation is complete, you're ready to start with Step 1 below.

A note about Planet Imagery Product Compatibility: this tutorial is compatible with all ItemTypes & Asset Types.

Download images at two time points

First, let's download two images of the same reservoir in California, taken two weeks apart.

You can use the Data API to search for, activate & download these images, or optionally you can use the planet CLI.

To use the CLI, do:

$ planet data download --item-type REOrthoTile --asset-type visual --string-in id 20160707_195146_1057917_RapidEye-1
$ planet data download --item-type REOrthoTile --asset-type visual --string-in id 20160722_194930_1057917_RapidEye-2

You now have the two Planet 'visual' GeoTIFF format images in your current directory.

N.B.: As this is just an example, we are using Planet's 'visual' asset type. If we wanted a more accurate measurement we would use the higher bit-depth 'analytic' product.

Select common window to compare

Our scenes don't overlap perfectly, and for the calculation to be accurate we'd prefer they did. The gdalwarp command lets us crop both scenes to a common window.

With QGIS we can find the overlapping rectangle between the two scenes. Move your mouse to where you estimate the corners might be, and take note of the numbers from the 'coordinates' box on the bottom menu bar.
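If you'd rather not eyeball the corners in QGIS, you can also compute the overlapping extent programmatically. This is a rough sketch using GDAL's Python bindings, assuming both GeoTIFFs share the same (UTM) projection:

from osgeo import gdal

def extent(path):
    """Return (xmin, ymin, xmax, ymax) of a raster in its own projection."""
    ds = gdal.Open(path)
    gt = ds.GetGeoTransform()
    xmin = gt[0]
    ymax = gt[3]
    xmax = xmin + gt[1] * ds.RasterXSize
    ymin = ymax + gt[5] * ds.RasterYSize  # gt[5] is negative for north-up rasters
    return xmin, ymin, xmax, ymax

a_ext = extent("20160707_195146_1057917_RapidEye-1_visual.tif")
b_ext = extent("20160722_194930_1057917_RapidEye-2_visual.tif")

# Intersection of the two bounding boxes, in the xmin ymin xmax ymax order gdalwarp -te expects
overlap = (max(a_ext[0], b_ext[0]), max(a_ext[1], b_ext[1]),
           min(a_ext[2], b_ext[2]), min(a_ext[3], b_ext[3]))
print(overlap)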

We then run the following bash commands:

gdalwarp -te 547500 4511500 556702 4527000 20160707_195146_1057917_RapidEye-1_visual.tif 20160707.tif
gdalwarp -te 547500 4511500 556702 4527000 20160722_194930_1057917_RapidEye-2_visual.tif 20160722.tif

Find the Water

Finding the water is easy: it's blue, so we can key on hue. The hue distance transform turns all the water white. We also shrink the images, since we don't need the extra resolution for this simple analysis.

import SimpleCV as scv

# Load the two cropped, common-window scenes
a = scv.Image("20160707.tif")
b = scv.Image("20160722.tif")

# Hue distance transform, keyed to the water's hue in each scene
innerA = a.hueDistance(175)
innerB = b.hueDistance(171)
innerA.scale(0.4).rotateLeft().save("hue_a.png")
innerB.scale(0.4).rotateLeft().save("hue_b.png")

Threshold the Hue Transform

Next we apply a threshold to the hue distance images to create masks. This gives us the following base masks:

innerA =  a.hueDistance(175).threshold(200)
innerB =  b.hueDistance(171).threshold(200)
innerA.scale(0.4).rotateLeft().save("hue_a_threshold.png")
innerB.scale(0.4).rotateLeft().save("hue_b_threshold.png")

Clean up the Masks

We then use morphological operations to clean up these masks. We will call this our "inner mask." We are pretty sure now that areas labeled in white are water.

innerA =  a.hueDistance(175).threshold(200).erode(2).dilate(3).erode(2)
innerB =  b.hueDistance(171).threshold(200).erode(2).dilate(3).erode(2)
innerA.scale(0.4).rotateLeft().save("hue_a_morph.png")
innerB.scale(0.4).rotateLeft().save("hue_b_morph.png")

Find regions we're not sure about

Now we want to find regions we are less sure about. We will try to label these regions as gray.

To do this we will grow our mask, then invert it.

# we use /2 to change white into gray
middleA =  a.hueDistance(175).threshold(200).erode(1).dilate(2).invert()/2
middleB =  b.hueDistance(171).threshold(200).erode(1).dilate(2).invert()/2
middleA.scale(0.4).rotateLeft().save("middle_a.png")
middleB.scale(0.4).rotateLeft().save("middle_b.png")

Create the watershed mask

Now we put it all together. Unlike a normal mask, a watershed mask has three parts:

  • White -- definitely our object.
  • Gray -- definitely not our object.
  • Black -- unsure / boundary.

The watershed algorithm will pick the optimal boundary between the gray and white regions. Because gray pixel values are higher than black ones, summing the inner and middle masks gives us what we want, as long as the inner mask's white area stays inside the middle mask's black area (so white and gray never overlap).

innerA =  a.hueDistance(175).threshold(190).erode(1).dilate(1).erode(1)
innerB =  b.hueDistance(171).threshold(200).erode(1).dilate(1).erode(1)
middleA =  a.hueDistance(175).threshold(190).erode(1).dilate(3).invert()/2
middleB =  b.hueDistance(171).threshold(200).erode(1).dilate(3).invert()/2
a_mask = innerA+middleA
b_mask = innerB+middleB
a_mask.scale(0.4).rotateLeft().save("mask_a.png")
b_mask.scale(0.4).rotateLeft().save("mask_b.png")

Run the watershed algorithm

aBlobs = a.findBlobsFromWatershed(mask=a_mask)
bBlobs = b.findBlobsFromWatershed(mask=b_mask)
aBlobs.draw(color=scv.Color.RED,width=-1,alpha=128)
a = a.applyLayers()
a.scale(0.4).rotateLeft().save("ws_a.png")
bBlobs.draw(color=scv.Color.HOTPINK,width=-1,alpha=128)
b = b.applyLayers()
b.scale(0.4).rotateLeft().save("ws_b.png")

Calculate the water-level change

import numpy as np

# Total water area, in pixels, for each date
aa = np.sum(aBlobs.area())
bb = np.sum(bBlobs.area())
print(aa)
print(bb)
print(bb / float(aa))
print("Percent change {0}%".format(((bb / float(aa)) - 1) * 100))

The output you get will look like this:

1414665.5
1357179.0
0.95936389203
Percent change -4.06361079704%

Finally, we can also plot these data in a bar chart using Bokeh:

from bokeh.charts import Bar
from bokeh.io import output_file, show
import bokeh.plotting as bk

data = {
    'date': ['2016-07-07', '2016-07-22'],
    'pixels': [aa, bb]}
bar2 = Bar(data, values='pixels', label=['date'], title="Reservoir pixels", width=800)

# Saves chart as waterlevel.html
output_file("waterlevel.html")
bk.save(bar2)

Questions or comments about this guide? Join the conversation at Planet Community.

June 14 2021

Normalized Difference Vegetation Index (NDVI), developed by a NASA scientist named Compton Tucker in 1977, is commonly used to assess whether an area contains live green vegetation or not. It can show the difference between water and plants, bare soil and grass, whether plants are under stress, and what lifecycle stage a crop is in.

It compares how much more near-infrared light is reflected by chlorophyll vs visible red light:
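For reference, the formula we'll implement later in this guide is:

NDVI = (NIR - Red) / (NIR + Red)

where NIR and Red are the pixel values (here, top-of-atmosphere reflectance) in the near-infrared and visible red bands. NDVI ranges from -1 to 1, with dense, healthy vegetation toward the high end.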

In this guide we will perform a basic NDVI calculation in python, using Planet's 4-Band imagery.

You will need a Planet API key, NumPy, and the Python libraries listed below.

To export your Planet API Key to your environment:

export PLANET_API_KEY=a3a64774d30c4749826b6be445489d3b # (not a real key)

You can find Numpy installation instructions here.

To install the Python libraries, do:

pip install rasterio
pip install matplotlib
pip install retrying
pip install requests

Once installation is complete, you're ready to start with Step 1 below.

A note about Planet Imagery Product Compatibility: this tutorial is compatible with only the analytic and basic_analytic asset types of the following ItemTypes:

  • PSOrthoTile
  • REOrthoTile
  • PSScene4Band

Download a 4-Band image

First, download a 4-band PlanetScope image of agricultural land in Toledo, Spain (id: 20161218_101700_0e0d). You can do this using the Planet API, or with Planet Explorer, by filtering for '4 Band PlanetScope scene' (PSScene4Band) or 'PlanetScope ortho tile' (PSOrthoTile), and downloading an 'analytic' asset.

Using the Planet CLI:

planet data download --item-type PSScene4Band --asset-type analytic,analytic_xml --string-in id 20161218_101700_0e0d

You now have a file called '20161218_101700_0e0d_3B_AnalyticMS.tif' in your directory.


Thumbnail preview of the 4-band scene

Extract the Visible Red and NIR bands

In a Python script, open the image and extract just the Red and Near-infrared bands, numbers 3 and 4.

import rasterio
import numpy

image_file = "20161218_101700_0e0d_3B_AnalyticMS.tif"

# Load the red and NIR bands - note all PlanetScope 4-band images have band order BGRN
with rasterio.open(image_file) as src:
    band_red = src.read(3)
    band_nir = src.read(4)

Normalize to Top of Atmosphere Reflectance

Converting the pixel values to TOA Reflectance makes the analysis more accurate, and comparable with other scenes. Load the TOA Reflectance coefficients from the metadata XML asset.

from xml.dom import minidom

xmldoc = minidom.parse("20161218_101700_0e0d_3B_AnalyticMS_metadata.xml")
nodes = xmldoc.getElementsByTagName("ps:bandSpecificMetadata")

# XML parser refers to bands by numbers 1-4
coeffs = {}
for node in nodes:
    bn = node.getElementsByTagName("ps:bandNumber")[0].firstChild.data
    if bn in ['1', '2', '3', '4']:
        i = int(bn)
        value = node.getElementsByTagName("ps:reflectanceCoefficient")[0].firstChild.data
        coeffs[i] = float(value)

Multiply the band values by the TOA Reflectance coefficients.

# Multiply by corresponding coefficients
band_red = band_red * coeffs[3]
band_nir = band_nir * coeffs[4]

Perform the NDVI calculation

Next we perform the NDVI calculation through subtraction and division of the pixel values.

# Allow division by zero
numpy.seterr(divide='ignore', invalid='ignore')

# Calculate NDVI
ndvi = (band_nir.astype(float) - band_red.astype(float)) / (band_nir + band_red)

Save the NDVI image

Finally we output these new pixel values to a new image file, making sure we mirror the GeoTIFF spatial metadata:

# Set spatial characteristics of the output object to mirror the input
kwargs = src.meta
kwargs.update(
    dtype=rasterio.float32,
    count = 1)

# Create the file
with rasterio.open('ndvi.tif', 'w', **kwargs) as dst:
    dst.write_band(1, ndvi.astype(rasterio.float32))

Apply a color map

Applying a color map can help to visually distinguish vegetation. For more color map options check out the docs for matplotlib.

import matplotlib.pyplot as plt
plt.imsave("ndvi_cmap.png", ndvi, cmap=plt.cm.summer)

Questions or comments about this guide? Join the conversation at Planet Community.

June 14 2021

Planet Explorer and the Planet Data API accept geometry in GeoJSON format. This guide explains some of the basics of working with GeoJSON.

Creating Geometry

We often use the tool GeoJSON.io to create geometry.

The GeoJSON.io GUI

You can search by place name.

Then graphically draw a polygon:

And export the GeoJSON, to upload to Planet Explorer, or use in your Data API query.
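Once exported, the "geometry" object from that GeoJSON can be dropped straight into a Data API GeometryFilter. Here is a minimal sketch in Python (the coordinates are hypothetical; the full search workflow is covered in the Data API guide later in this document):

# Geometry exported from geojson.io (hypothetical coordinates)
aoi_geometry = {
  "type": "Polygon",
  "coordinates": [
    [
      [-122.5, 37.7],
      [-122.3, 37.7],
      [-122.3, 37.9],
      [-122.5, 37.9],
      [-122.5, 37.7]
    ]
  ]
}

# Data API filter matching items that overlap the AOI
geometry_filter = {
  "type": "GeometryFilter",
  "field_name": "geometry",
  "config": aoi_geometry
}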

Simplifying Geometry

A common trap is using highly complex geometry. Querying the Data API with complex geometry can significantly increase search response times, and Planet Explorer limits you to a 1MB file.

MapShaper.org is a tool that 'simplifies' your geometry to get around these issues.

Simply upload your GeoJSON file, click 'Simplify' in the top right corner, then 'Apply', and move the slider until the number of vertices is reduced but the shape is sufficiently maintained.

Export the new simplified geometry, to use it to search the Planet archive again.
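If you'd rather simplify geometry programmatically instead of in the MapShaper GUI, the shapely library can do something similar. This is a rough sketch; shapely isn't used elsewhere in these guides, and the filename and tolerance are just examples:

import json
from shapely.geometry import shape, mapping

# Load a geometry exported from geojson.io (example filename)
with open("aoi.geojson") as f:
    feature = json.load(f)["features"][0]

# Simplify with a small tolerance (in degrees for EPSG:4326); tune as needed
simplified = shape(feature["geometry"]).simplify(0.001)
feature["geometry"] = mapping(simplified)
print(len(feature["geometry"]["coordinates"][0]), "vertices after simplification")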

Converting Shapefile to GeoJSON

Compress your .shp and all associated files into a .zip file. Drag this .zip file onto the mapshaper.org homepage. Click 'Import', then 'Export' in the top right corner, and select 'GeoJSON'.

Planet GeoJSON Specifications

Planet has the following GeoJSON requirements:

  • EPSG:4326 Projection
  • No self-intersections

Planet supports the following GeoJSON specifications:

Questions or comments about this guide? Join the conversation at Planet Community.

June 14 2021

THIS FUNCTIONALITY HAS BEEN DEPRECATED

The following guide is for informational purposes only and does not reflect the current state of Planet's platform.

Overview

This guide explains how to download a subarea of a Planet image, saving time, bandwidth and storage. We will use an exciting new feature of the GDAL library, VSI Curl, which queries a part of a GeoTIFF file using an HTTP Range request.

You will need a Planet API key, GDAL, and the Python libraries listed below.

To export your Planet API Key to your environment:

export PLANET_API_KEY=a3a64774d30c4749826b6be445489d3b # (not a real key)

You can find GDAL installation instructions here.

To install the Python libraries, do:

pip install retrying
pip install requests

Once installation is complete, you're ready to start with Step 1 below.

A note about Planet Imagery Product Compatibility: this tutorial is compatible with all ItemTypes & Asset Types.

Activate a scene containing your Subarea

First, we will activate a scene of a stretch of agricultural land in Yuma, AZ, with the Planet API client.

planet data download --activate-only --item-type PSScene3Band --asset-type visual --string-in id 20161109_173041_0e0e --quiet

Activation will take a few minutes. Run the command above again until you see a URL in the 'location' element of the response.

A full-size Planet Scene of Agricultural land in Yuma, AZ

Create a GeoJSON file of your Subarea

Define the geometry of the subarea you'd like to download in GeoJSON format, by drawing it at geojson.io:

Save the GeoJSON to a file e.g. subarea.geojson,

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {},
      "geometry": {
        "type":"Polygon",
        "coordinates":[
          [
             [
                -114.176423930378903,
                32.691933554346988
             ],
             [
                -114.176423930378903,
                32.685249521402788
             ],
             [
                -114.168019741897595,
                32.685249521402788
             ],
             [
                -114.168019741897595,
                32.691933554346988
             ],
             [
                -114.176423930378903,
                32.691933554346988
             ]
          ]
        ]
      }
    }
  ]
}

Download the Subarea

Next let's request a download URL for this image from the Planet Data API. Note, you must do this every time you wish to download your subarea, as these download URLs expire after 1 hour.

from osgeo import gdal
from requests.auth import HTTPBasicAuth
import os
import requests

item_id = "20161109_173041_0e0e"
item_type = "PSScene3Band"
asset_type = "visual"
item_url = 'https://api.planet.com/data/v1/item-types/{}/items/{}/assets'.format(item_type, item_id)

# Request a new download URL
result = requests.get(item_url, auth=HTTPBasicAuth(os.environ['PLANET_API_KEY'], ''))
download_url = result.json()[asset_type]['location']

Prefix the download URL with '/vsicurl/' to use the VSI Curl driver. Then call GDAL Warp, passing it an output file name, this URL, our subarea GeoJSON, a projection (in this case, the one used by GeoJSON.io), and 'cropToCutline = True'.

vsicurl_url = '/vsicurl/' + download_url
output_file = item_id + '_subarea.tif'

# GDAL Warp crops the image by our AOI, and saves it
gdal.Warp(output_file, vsicurl_url, dstSRS = 'EPSG:4326', cutlineDSName = 'subarea.geojson', cropToCutline = True)

Within a couple of seconds, you'll have an image of just the subarea in your directory.


Questions or comments about this guide? Join the conversation at Planet Community.

December 01 2001

Getting started with the Data API

Let's search & download some imagery of farmland near Stockton, CA. Here are the steps we'll follow:

  1. Define an Area of Interest (AOI)
  2. Save our AOI's coordinates to GeoJSON format
  3. Create a few search filters
  4. Search for imagery using those filters
  5. Activate an image for downloading
  6. Download an image

Requirements

You'll need Python with the requests library installed, and your Planet API key available in your environment as PL_API_KEY.

Define an Area of Interest

An Area of Interest (or AOI) is how we define the geographic "window" out of which we want to get data.

For the Data API, this could be a simple bounding box with four corners, or a more complex shape, as long as the definition is in GeoJSON format.

For this example, let's just use a simple box. To make it easy, I'll use geojson.io to quickly draw a shape & generate GeoJSON output for our box:

Drawing the Stockton, CA bounding box in geojson.io

We only need the "geometry" object for our Data API request:

In [ ]:
# Stockton, CA bounding box (created via geojson.io) 
geojson_geometry = {
  "type": "Polygon",
  "coordinates": [
    [ 
      [-121.59290313720705, 37.93444993515032],
      [-121.27017974853516, 37.93444993515032],
      [-121.27017974853516, 38.065932950547484],
      [-121.59290313720705, 38.065932950547484],
      [-121.59290313720705, 37.93444993515032]
    ]
  ]
}

Create Filters

Now let's set up some filters to further constrain our Data API search:

In [ ]:
# get images that overlap with our AOI 
geometry_filter = {
  "type": "GeometryFilter",
  "field_name": "geometry",
  "config": geojson_geometry
}

# get images acquired within a date range
date_range_filter = {
  "type": "DateRangeFilter",
  "field_name": "acquired",
  "config": {
    "gte": "2016-08-31T00:00:00.000Z",
    "lte": "2016-09-01T00:00:00.000Z"
  }
}

# only get images which have <50% cloud coverage
cloud_cover_filter = {
  "type": "RangeFilter",
  "field_name": "cloud_cover",
  "config": {
    "lte": 0.5
  }
}

# combine our geo, date, cloud filters
combined_filter = {
  "type": "AndFilter",
  "config": [geometry_filter, date_range_filter, cloud_cover_filter]
}

Searching: Items and Assets

Planet's products are categorized as items and assets: an item is a single picture taken by a satellite at a certain time. Items have multiple asset types including the image in different formats, along with supporting metadata files.

For this demonstration, let's get a satellite image that is best suited for visual applications; e.g., basemaps or visual analysis. Since we're not doing any spectral analysis outside of the visual range, we only need a 3-band (RGB) image. To get the image we want, we will specify an item type of PSScene3Band, and asset type visual.

You can learn more about item & asset types in Planet's Data API here.

Now let's search for all the items that match our filters:

In [ ]:
import os
import json
import requests
from requests.auth import HTTPBasicAuth

# API Key stored as an env variable
PLANET_API_KEY = os.getenv('PL_API_KEY')

item_type = "PSScene3Band"

# API request object
search_request = {
  "interval": "day",
  "item_types": [item_type], 
  "filter": combined_filter
}

# fire off the POST request
search_result = \
  requests.post(
    'https://api.planet.com/data/v1/quick-search',
    auth=HTTPBasicAuth(PLANET_API_KEY, ''),
    json=search_request)

print(json.dumps(search_result.json(), indent=1))

Our search returns metadata for all of the images within our AOI that match our date range and cloud coverage filters. It looks like there are multiple images here; let's extract a list of just those image IDs:

In [ ]:
# extract image IDs only
image_ids = [feature['id'] for feature in search_result.json()['features']]
print(image_ids)

Since we just want a single image, and this is only a demonstration, for our purposes here we can arbitrarily select the first image in that list. Let's do that, and get the asset list available for that image:

In [ ]:
# For demo purposes, just grab the first image ID
id0 = image_ids[0]
id0_url = 'https://api.planet.com/data/v1/item-types/{}/items/{}/assets'.format(item_type, id0)

# Returns JSON metadata for assets in this ID. Learn more: planet.com/docs/reference/data-api/items-assets/#asset
result = \
  requests.get(
    id0_url,
    auth=HTTPBasicAuth(PLANET_API_KEY, '')
  )

# List of asset types available for this particular satellite image
print(result.json().keys())

Activation and Downloading

The Data API does not pre-generate assets, so they are not always immediately available to download. In order to download an asset, we first have to activate it.

Remember, earlier we decided we wanted a color-corrected image best suited for visual applications. We can check the status of the visual asset we want to download like so:

In [ ]:
# This is "inactive" if the "visual" asset has not yet been activated; otherwise 'active'
print(result.json()['visual']['status'])

Let's now go ahead and activate that asset for download:

In [ ]:
# Parse out useful links
links = result.json()[u"visual"]["_links"]
self_link = links["_self"]
activation_link = links["activate"]

# Request activation of the 'visual' asset:
activate_result = \
  requests.get(
    activation_link,
    auth=HTTPBasicAuth(PLANET_API_KEY, '')
  )

At this point, we wait for the activation status for the asset we are requesting to change from inactive to active. We can monitor this by polling the "status" of the asset:

In [ ]:
activation_status_result = \
  requests.get(
    self_link,
    auth=HTTPBasicAuth(PLANET_API_KEY, '')
  )
    
print(activation_status_result.json()["status"])
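If you want the notebook to wait unattended, you can wrap this check in a simple polling loop. This is a minimal sketch reusing self_link, PLANET_API_KEY, and the imports from the cells above; the 10-second sleep interval is arbitrary:

import time

# Poll the asset's status until activation completes
while True:
    status = requests.get(
        self_link,
        auth=HTTPBasicAuth(PLANET_API_KEY, '')
    ).json()["status"]
    print(status)
    if status == "active":
        break
    time.sleep(10)  # wait between checks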

Once the asset has finished activating (status is "active"), we can download it.

Note: the download link on an active asset is temporary

In [ ]:
# Image can be downloaded by making a GET with your Planet API key, from here:
download_link = activation_status_result.json()["location"]
print(download_link)
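To finish the last step, you can fetch the image with the same credentials and stream it to disk. A minimal sketch, reusing download_link and id0 from above; the output filename is just an example:

# Download the asset and save it locally (example filename)
response = requests.get(
    download_link,
    auth=HTTPBasicAuth(PLANET_API_KEY, ''),
    stream=True
)

with open('{}_visual.tif'.format(id0), 'wb') as f:
    for chunk in response.iter_content(chunk_size=1024 * 1024):
        f.write(chunk)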

Thumbnail preview of the downloaded scene over Stockton, CA