### March 22 2021

Normalized Difference Vegetation Index (NDVI), developed by a NASA scientist named Compton Tucker in 1977, is commonly used to assess whether an area contains live green vegetation or not. It can show the difference between water and plants, bare soil and grass, whether plants are under stress, and what lifecycle stage a crop is in.

It compares how much more near-infrared light is reflected by chlorophyll versus visible red light, via a normalized ratio:

NDVI = (NIR − Red) / (NIR + Red)

In this guide we will perform a basic NDVI calculation in Python, using Planet's 4-band imagery.

You will need a Planet API key, exported as an environment variable:

export PLANET_API_KEY=a3a64774d30c4749826b6be445489d3b # (not a real key)


You can find NumPy installation instructions here.

To install the Python libraries, do:

pip install rasterio
pip install matplotlib
pip install retrying
pip install requests



A note about Planet Imagery Product Compatibility: this tutorial is compatible with only analytic and basic_analytic asset types of the following ItemTypes:

• PSOrthoTile
• REOrthoTile
• PSScene4Band

First, download a 4-band PlanetScope image of agricultural land in Toledo, Spain (id: 20161228_101647_0e26). You can do this using the Planet API, or with Planet Explorer, by filtering for '4 Band PlanetScope scene' (PSScene4Band) or 'PlanetScope ortho tile' (PSOrthoTile), and downloading an 'analytic' asset.

Using the Planet CLI:

planet data download --item-type PSScene4Band --asset-type analytic,analytic_xml --string-in id 20161228_101647_0e26


You now have a file called '20161228_101647_0e26_3B_AnalyticMS.tif' in your directory.

Thumbnail preview of the 4-band scene

## Extract the Visible Red and NIR bands

In a Python script, open the image and extract just the red and near-infrared bands, numbers 3 and 4.

import rasterio
import numpy

image_file = "20161228_101647_0e26_3B_AnalyticMS.tif"

# Load red and NIR bands - note all PlanetScope 4-band images have band order BGRN
with rasterio.open(image_file) as src:
    band_red = src.read(3)
    band_nir = src.read(4)

## Normalize to Top of Atmosphere Reflectance

Converting the pixel values to TOA Reflectance makes the analysis more accurate, and comparable with other scenes. Load the TOA Reflectance coefficients from the metadata XML asset.

from xml.dom import minidom

# Parse the metadata XML downloaded alongside the image
xmldoc = minidom.parse("20161228_101647_0e26_3B_AnalyticMS_metadata.xml")
nodes = xmldoc.getElementsByTagName("ps:bandSpecificMetadata")

# XML parser refers to bands by numbers 1-4
coeffs = {}
for node in nodes:
    bn = node.getElementsByTagName("ps:bandNumber")[0].firstChild.data
    if bn in ['1', '2', '3', '4']:
        i = int(bn)
        value = node.getElementsByTagName("ps:reflectanceCoefficient")[0].firstChild.data
        coeffs[i] = float(value)


Multiply the band values by the TOA Reflectance coefficients.

# Multiply by corresponding coefficients
band_red = band_red * coeffs[3]
band_nir = band_nir * coeffs[4]


## Perform the NDVI calculation

Next we perform the NDVI calculation through subtraction and division of the pixel values.

# Allow division by zero
numpy.seterr(divide='ignore', invalid='ignore')

# Calculate NDVI
ndvi = (band_nir.astype(float) - band_red.astype(float)) / (band_nir + band_red)


## Save the NDVI image

Finally we output these new pixel values to a new image file, making sure we mirror the GeoTIFF spatial metadata:

# Set spatial characteristics of the output object to mirror the input
kwargs = src.meta
kwargs.update(
    dtype=rasterio.float32,
    count=1)

# Create the file
with rasterio.open('ndvi.tif', 'w', **kwargs) as dst:
    dst.write_band(1, ndvi.astype(rasterio.float32))


## Apply a color map

Applying a color map can help to visually distinguish vegetation. For more color map options check out the docs for matplotlib.

import matplotlib.pyplot as plt
plt.imsave("ndvi_cmap.png", ndvi, cmap=plt.cm.summer)


# Getting started with the Data API

### Let's search & download some imagery of farmland near Stockton, CA. Here are the steps we'll follow:

1. Define an Area of Interest (AOI)
2. Save our AOI's coordinates to GeoJSON format
3. Create a few search filters
4. Search for imagery using those filters

## Define an Area of Interest

An Area of Interest (or AOI) is how we define the geographic "window" out of which we want to get data.

For the Data API, this could be a simple bounding box with four corners, or a more complex shape, as long as the definition is in GeoJSON format.

For this example, let's just use a simple box. To make it easy, I'll use geojson.io to quickly draw a shape & generate GeoJSON output for our box:

We only need the "geometry" object for our Data API request:

In [ ]:
# Stockton, CA bounding box (created via geojson.io)
geojson_geometry = {
  "type": "Polygon",
  "coordinates": [
    [
      [-121.59290313720705, 37.93444993515032],
      [-121.27017974853516, 37.93444993515032],
      [-121.27017974853516, 38.065932950547484],
      [-121.59290313720705, 38.065932950547484],
      [-121.59290313720705, 37.93444993515032]
    ]
  ]
}


## Create Filters

Now let's set up some filters to further constrain our Data API search:

In [ ]:
# get images that overlap with our AOI
geometry_filter = {
  "type": "GeometryFilter",
  "field_name": "geometry",
  "config": geojson_geometry
}

# get images acquired within a date range
date_range_filter = {
  "type": "DateRangeFilter",
  "field_name": "acquired",
  "config": {
    "gte": "2016-08-31T00:00:00.000Z",
    "lte": "2016-09-01T00:00:00.000Z"
  }
}

# only get images which have <50% cloud coverage
cloud_cover_filter = {
  "type": "RangeFilter",
  "field_name": "cloud_cover",
  "config": {
    "lte": 0.5
  }
}

# combine our geo, date, cloud filters
combined_filter = {
  "type": "AndFilter",
  "config": [geometry_filter, date_range_filter, cloud_cover_filter]
}


## Searching: Items and Assets

Planet's products are categorized as items and assets: an item is a single picture taken by a satellite at a certain time. Items have multiple asset types including the image in different formats, along with supporting metadata files.

For this demonstration, let's get a satellite image that is best suited for visual applications; e.g., basemaps or visual analysis. Since we're not doing any spectral analysis outside of the visual range, we only need a 3-band (RGB) image. To get the image we want, we will specify an item type of PSScene3Band, and asset type visual.

Now let's search for all the items that match our filters:

In [ ]:
import os
import json
import requests
from requests.auth import HTTPBasicAuth

# API Key stored as an env variable
PLANET_API_KEY = os.getenv('PL_API_KEY')

item_type = "PSScene3Band"

# API request object
search_request = {
  "item_types": [item_type],
  "filter": combined_filter
}

# fire off the POST request
search_result = \
  requests.post(
    'https://api.planet.com/data/v1/quick-search',
    auth=HTTPBasicAuth(PLANET_API_KEY, ''),
    json=search_request)

print(json.dumps(search_result.json(), indent=1))


Our search returns metadata for all of the images within our AOI that match our date range and cloud coverage filters. It looks like there are multiple images here; let's extract a list of just those image IDs:

In [ ]:
# extract image IDs only
image_ids = [feature['id'] for feature in search_result.json()['features']]
print(image_ids)


Since we just want a single image, and this is only a demonstration, for our purposes here we can arbitrarily select the first image in that list. Let's do that, and get the asset list available for that image:

In [ ]:
# For demo purposes, just grab the first image ID
id0 = image_ids[0]
id0_url = 'https://api.planet.com/data/v1/item-types/{}/items/{}/assets'.format(item_type, id0)

result = \
  requests.get(
    id0_url,
    auth=HTTPBasicAuth(PLANET_API_KEY, '')
  )

# List of asset types available for this particular satellite image
print(result.json().keys())


The Data API does not pre-generate assets, so they are not always immediately available to download. In order to download an asset, we first have to activate it.

Remember, earlier we decided we wanted a color-corrected image best suited for visual applications. We can check the status of the visual asset we want to download like so:

In [ ]:
# This is "inactive" if the "visual" asset has not yet been activated; otherwise 'active'
print(result.json()['visual']['status'])


In [ ]:
# Parse out useful links
links = result.json()["visual"]["_links"]
self_link = links["_self"]
activation_link = links["activate"]

# Request activation of the 'visual' asset:
activate_result = \
  requests.get(
    activation_link,
    auth=HTTPBasicAuth(PLANET_API_KEY, '')
  )


At this point, we wait for the activation status for the asset we are requesting to change from inactive to active. We can monitor this by polling the "status" of the asset:

In [ ]:
# Poll the asset's "_self" link to check its activation status
self_link = result.json()["visual"]["_links"]["_self"]

activation_status_result = \
  requests.get(
    self_link,
    auth=HTTPBasicAuth(PLANET_API_KEY, '')
  )

print(activation_status_result.json()["status"])


Once the asset has finished activating (status is "active"), we can download it.

In [ ]:
# Image can be downloaded by making a GET with your Planet API key, from here:
download_link = activation_status_result.json()["location"]
print(download_link)


# Converting PlanetScope Imagery from Radiance to Reflectance

Planet Labs' analytic data products (both RapidEye and Dove) are reported in units of radiance: $W*m^{-2}*sr^{-1}$. That means that every pixel in an analytic tiff has a natural language interpretation: "How much light was captured over this spot of ground?"

But over the course of a day or year, the number of photons that the sun shines on the scene rises and falls. If you naively compare the radiance values of two scenes over the same spot on Earth from the same satellite but a few weeks (or even just hours!) apart, you will likely find dramatic radiance differences even if nothing on the ground has changed!

In addition to this variation, each of Planet Labs' 137 satellites has small variations in its spectral filters, which yield slight differences in radiance measurements, even between two satellites taking pictures of the exact same place at the exact same moment!

To correct for all this radiance variation yourself, you would have to do a lot of math: use the exact location and local time of day to find the angle to the sun and the sun-earth distance, compute a solar irradiance model to estimate how many photons of each band are present in the image, and finally convolve that spectrum with the spectral response of the individual satellite to yield the number of photons of each band actually recorded by that sensor. Dividing by this number normalizes the measured brightness to the brightness of the sun at that time and place through that particular filter, yielding a much more comparable number: reflectance.

Top of Atmosphere Reflectance is extremely useful because it is an apples-to-apples comparable number from any satellite over any location that does not change with time of day or time of year unless the content of the scene changes. It is very commonly used in GIS applications which compute spectral indices such as NDVI or NDWI.

It is so broadly useful that Planet does this math for you and provides all the coefficients necessary to convert a radiance image into a reflectance image!

$$\begin{equation*} \mathbf{img}_{reflectance} = \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} \circ \mathbf{img}_{radiance} \end{equation*}$$

where $\circ$ denotes per-band (elementwise) multiplication.

The four coefficients $a, b, c, d$ are calculated and provided with every analytic image that Planet provides and can be used as simple scaling factors to convert from radiance to reflectance. Their values change with the image's local time of day and time of year, and do so uniquely per satellite.
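As a minimal sketch of that scaling (using made-up coefficient values and a tiny fake band array, not data from a real image), the conversion is just elementwise multiplication:

```python
import numpy as np

# Hypothetical per-band reflectance coefficients; real values come from
# the image's metadata XML and are of order 1e-5
coeffs = {1: 1.9e-5, 2: 2.0e-5, 3: 2.3e-5, 4: 3.4e-5}

# A fake 2x2 red-band radiance array standing in for a real scene
band_red_radiance = np.array([[4000, 8000], [12000, 16000]], dtype=np.uint16)

# Reflectance is the radiance array scaled by that band's coefficient
band_red_reflectance = band_red_radiance * coeffs[3]
print(band_red_reflectance)
```

The result is a float array of values between 0 and 1, where the original array held raw digital numbers.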

In this guide, you'll perform a basic Radiance to Reflectance calculation on PlanetScope imagery using just a few lines of Python. Here are the steps:

1. Download a PlanetScope image and its metadata
2. Extract data from each spectral band
3. Extract the coefficients
4. Convert radiance to reflectance
5. Save the reflectance image
6. Apply a color scheme to visualize reflectance on a map

### Requirements

First, you're going to download a 4-band PlanetScope satellite image of agricultural land in California's Central Valley, captured in late June 2017 (item-id: 20170623_180038_0f34). You can do this using Planet's Python client to interact with our Data API, or by browsing Planet Explorer, filtering for 4 Band PlanetScope scene (PSScene4Band) or PlanetScope ortho tile (PSOrthoTile), and downloading an analytic asset.

Before you download the full image, you can preview a thumbnail of the image via Planet's Data API. (The thumbnails are 256x256 by default, and can be scaled up by passing in a width parameter.)

In [2]:
from IPython.display import Image
Image(url="https://api.planet.com/data/v1/item-types/PSScene4Band/items/20170623_180038_0f34/thumb?width=512")

Out[2]:

Next, you'll use Planet's Python client to download the image. Note: when you run this command, you'll get a stream of messages in your Jupyter notebook as the Python client polls the Data API to determine if the image is activated and ready to download.

In [2]:
!planet data download --item-type PSScene4Band --asset-type analytic,analytic_xml --string-in id 20170623_180038_0f34

{"item": "20170623_180038_0f34", "asset": "analytic", "location": "/Users/matthew/Documents/git/notebooks/jupyter-notebooks/toar/20170623_180038_0f34_3B_AnalyticMS.tif"}
{"item": "20170623_180038_0f34", "asset": "analytic_xml", "location": "/Users/matthew/Documents/git/notebooks/jupyter-notebooks/toar/20170623_180038_0f34_3B_AnalyticMS_metadata.xml"}


Congratulations! You now have two files in your download directory: 20170623_180038_0f34_3B_AnalyticMS.tif and 20170623_180038_0f34_3B_AnalyticMS_metadata.xml. The first file is a GeoTIFF, the image you requested with spatial reference data embedded. The second file is a metadata file for that image that includes the data you'll need to convert from radiance to reflectance.

## Step 2. Extract the Data from Each Spectral Band

In this step, you'll use Rasterio, a Python library for reading and writing geospatial raster datasets, to open the raster image you downloaded (the .tif file). Then you'll extract the data from the red and near-infrared bands and load the band data into arrays that you can manipulate using Python's NumPy library. Note: in PlanetScope 4-band images, the band order is BGRN: (1) Blue, (2) Green, (3) Red, (4) Near-infrared.

In [3]:
import rasterio
import numpy as np

filename = "20170623_180038_0f34_3B_AnalyticMS.tif"

# Load the four band arrays - note all PlanetScope 4-band images have band order BGRN
with rasterio.open(filename) as src:
    band_blue_radiance = src.read(1)
    band_green_radiance = src.read(2)
    band_red_radiance = src.read(3)
    band_nir_radiance = src.read(4)


## Step 3. Extract the Coefficients

Before you can convert to reflectance, you must extract the conversion coefficients from the metadata file you downloaded (the .xml file).

In [17]:
from xml.dom import minidom

# Parse the metadata XML downloaded alongside the image
xmldoc = minidom.parse("20170623_180038_0f34_3B_AnalyticMS_metadata.xml")
nodes = xmldoc.getElementsByTagName("ps:bandSpecificMetadata")

# XML parser refers to bands by numbers 1-4
coeffs = {}
for node in nodes:
    bn = node.getElementsByTagName("ps:bandNumber")[0].firstChild.data
    if bn in ['1', '2', '3', '4']:
        i = int(bn)
        value = node.getElementsByTagName("ps:reflectanceCoefficient")[0].firstChild.data
        coeffs[i] = float(value)

print("Conversion coefficients: {}".format(coeffs))

Conversion coefficients: {1: 1.92992290459e-05, 2: 2.0401521894e-05, 3: 2.2723104298e-05, 4: 3.35095412863e-05}


Note that the coefficients are all of order 1e-5, and that the coefficient for NIR is significantly higher than the coefficient for blue. This is a big deal if your use case involves performing band math, because a pixel with a NIR/blue ratio of 1.0 in the radiance image will have a NIR/blue ratio of 3.35/1.93 ≈ 1.74 in the reflectance image!

Most spectral indices are defined in terms of reflectance, not radiance.
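To make that concrete, here is a small sketch with hypothetical coefficients and a single pixel whose red and NIR radiance happen to be equal: NDVI computed from raw radiance is zero, but NDVI computed from reflectance is positive.

```python
# Hypothetical red and NIR reflectance coefficients of the magnitudes above
coeff_red, coeff_nir = 2.27e-5, 3.35e-5

# A single pixel with identical radiance in both bands
red_radiance, nir_radiance = 5000.0, 5000.0

# NDVI from raw radiance: the bands are equal, so NDVI is zero
ndvi_radiance = (nir_radiance - red_radiance) / (nir_radiance + red_radiance)

# NDVI from reflectance: the NIR coefficient is larger, so NDVI becomes positive
red_reflectance = red_radiance * coeff_red
nir_reflectance = nir_radiance * coeff_nir
ndvi_reflectance = (nir_reflectance - red_reflectance) / (nir_reflectance + red_reflectance)

print(ndvi_radiance, ndvi_reflectance)
```

Same pixel, two noticeably different index values, which is why indices should be computed on reflectance.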

## Step 4. Convert Radiance to Reflectance

Radiance is measured in SI units: $W*m^{-2}*sr^{-1}$. Reflectance is a unitless ratio from 0 to 1. The conversion is performed as a simple per-band scalar multiplication:

In [26]:
# Multiply the Digital Number (DN) values in each band by the TOA reflectance coefficients
band_blue_reflectance = band_blue_radiance * coeffs[1]
band_green_reflectance = band_green_radiance * coeffs[2]
band_red_reflectance = band_red_radiance * coeffs[3]
band_nir_reflectance = band_nir_radiance * coeffs[4]

print("Red band radiance is from {} to {}".format(np.amin(band_red_radiance), np.amax(band_red_radiance)))
print("Red band reflectance is from {} to {}".format(np.amin(band_red_reflectance), np.amax(band_red_reflectance)))

Red band radiance is from 0 to 25040
Red band reflectance is from 0.0 to 0.568986531622


## Step 5. Save the Reflectance Image

Next, you're going to save the calculated reflectance values to a new image file, making sure the new image file has the same geospatial metadata as the original GeoTIFF we downloaded.

In [30]:
# Set spatial characteristics of the output object to mirror the input
kwargs = src.meta
kwargs.update(
    dtype=rasterio.uint16,
    count=4)

print("Before Scaling, red band reflectance is from {} to {}".format(np.amin(band_red_reflectance), np.amax(band_red_reflectance)))

# Here we include a fixed scaling factor. This is common practice.
scale = 10000
blue_ref_scaled = scale * band_blue_reflectance
green_ref_scaled = scale * band_green_reflectance
red_ref_scaled = scale * band_red_reflectance
nir_ref_scaled = scale * band_nir_reflectance

print("After Scaling, red band reflectance is from {} to {}".format(np.amin(red_ref_scaled), np.amax(red_ref_scaled)))

# Write the scaled band calculations to a new raster file
with rasterio.open('reflectance.tif', 'w', **kwargs) as dst:
    dst.write_band(1, blue_ref_scaled.astype(rasterio.uint16))
    dst.write_band(2, green_ref_scaled.astype(rasterio.uint16))
    dst.write_band(3, red_ref_scaled.astype(rasterio.uint16))
    dst.write_band(4, nir_ref_scaled.astype(rasterio.uint16))

Before Scaling, red band reflectance is from 0.0 to 0.568986531622
After Scaling, red band reflectance is from 0.0 to 5689.86531622


This warrants some explanation: reflectance is generally defined as a floating point number between 0 and 1, but image file formats are much more commonly stored as unsigned integers. A common practice in the industry is to multiply the reflectance value by 10,000, then save the result as a file with data type uint16.
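A quick sketch of that round trip (with made-up reflectance values, not data from a real scene): scale by 10,000 and cast to uint16 when writing, then divide by the same factor when reading the file back:

```python
import numpy as np

# Made-up reflectance values in the [0, 1] range
reflectance = np.array([0.0123, 0.2500, 0.5689])

# Write side: scale, round, and cast to unsigned 16-bit integers
scaled = np.round(reflectance * 10000).astype(np.uint16)
print(scaled)      # values now fit comfortably in uint16 (max 65535)

# Read side: divide by the same factor to recover reflectance
recovered = scaled / 10000.0
print(recovered)
```

Rounding before the cast avoids truncation error; the recovered values match the originals to within the 1/10,000 precision of the integer encoding.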

## Step 6. Apply a Color Scheme to Visualize the Reflectance Values on the Image

In the last step, you'll use Matplotlib to visualize the reflectance values you calculated for the PlanetScope scene.

In [20]:
import matplotlib.pyplot as plt
import matplotlib.colors as colors

"""
The reflectance values will range from 0 to 1. You want to use a diverging color scheme to visualize the data,
and you want to center the colorbar at a defined midpoint. The class below allows you to normalize the colorbar.
"""

class MidpointNormalize(colors.Normalize):
    """
    Normalise the colorbar so that diverging bars work their way either side from a prescribed midpoint value
    e.g. im=ax1.imshow(array, norm=MidpointNormalize(midpoint=0.,vmin=-100, vmax=100))
    Credit: Joe Kington, http://chris35wills.github.io/matplotlib_diverging_colorbar/
    """
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        self.midpoint = midpoint
        colors.Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        # Ignoring masked values and all kinds of edge cases to make a
        # simple example: interpolate linearly on either side of the midpoint
        x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
        return np.ma.masked_array(np.interp(value, x, y))


# Set min/max values from reflectance range for image (excluding NaN)
min = np.nanmin(band_nir_reflectance)
max = np.nanmax(band_nir_reflectance)
mid = 0.20

fig = plt.figure(figsize=(20, 10))
ax = fig.add_subplot(111)

# diverging color scheme chosen from https://matplotlib.org/users/colormaps.html
# note that appending '_r' to the color scheme name reverses it!
cmap = plt.cm.get_cmap('RdGy_r')

cax = ax.imshow(band_nir_reflectance, cmap=cmap, clim=(min, max),
                norm=MidpointNormalize(midpoint=mid, vmin=min, vmax=max))

ax.axis('off')
ax.set_title('NIR Reflectance', fontsize=18, fontweight='bold')

cbar = fig.colorbar(cax, orientation='horizontal', shrink=0.65)