
ICESat-2 and Landsat cloud access and data integration

This notebook (download <IS2_cloud_data_integration.ipynb>) builds on the icepyx IS2_cloud_data_access.ipynb and ICESat-2 Hackweek Data Integration 1 tutorials. It illustrates the use of icepyx for accessing ICESat-2 data currently available through the AWS (Amazon Web Services) us-west-2 s3 data bucket, as well as data integration with Landsat (cloud-optimized GeoTIFF) and ATM (downloaded csv) datasets.

Computing environment

We’ll be using the following open source Python libraries in this notebook:

# Suppress library deprecation warnings
import logging
logging.captureWarnings(True)
from ipyleaflet import Map, basemaps, basemap_to_tiles, Polyline
%matplotlib widget
import pystac_client
import geopandas as gpd
import h5py
import pandas as pd
import geoviews as gv
import intake_stac
import xarray as xr
import matplotlib.pyplot as plt
import boto3
from rasterio.session import AWSSession
from rasterio.plot import show
import os
import fiona

1. Identify and acquire the ICESat-2 product(s) of interest

Download ICESat-2 ATL06 data from desired region

We are going to use icepyx to download some ICESat-2 ATL06 data over our region of interest.

import icepyx as ipx
# Specifying the necessary icepyx parameters
short_name = 'ATL06'
parent_dir = os.path.join(os.getcwd(),'book/tutorials')
spatial_extent = os.path.join(parent_dir, 'hackweek_kml_jakobshavan.kml') # KML polygon centered on Sermeq Kujalleq
date_range = ['2019-04-01', '2019-04-30']
rgts = ['338'] # IS-2 RGT of interest

You may notice that we specified an RGT track. As seen below, a large number of ICESat-2 overpasses occur over Sermeq Kujalleq (formerly known as Jakobshavn Isbrae). In the interest of time (and computer memory), we are going to look at only one of these tracks.

# Open KML file for use
fiona.drvsupport.supported_drivers['LIBKML'] = 'rw' # enable KML support which is disabled by default
jk = gpd.read_file(spatial_extent)
# Setup the Query object
region = ipx.Query(short_name, spatial_extent, date_range, tracks=rgts)
# Visualize area of interest
region.visualize_spatial_extent()

Looks good! Now it’s time to acquire the data.

Get the granule s3 urls

You must specify cloud=True to get the needed s3 urls. This method returns a list containing two lists: the granule IDs and the corresponding s3 urls.

gran_ids = region.avail_granules(ids=True, cloud=True)
gran_ids

Select an s3 url and set up an s3 file system

icepyx enables users to read ICESat-2 data directly from the cloud into an Xarray dataset. However, here we show an alternative approach that instead sets up an s3 file system (essentially mocking the way your local file system works) to access an ICESat-2 granule. This latter option requires we do some manual handling of s3 credentials (this all happens behind the scenes with the Xarray approach). In both cases, you will be prompted for your Earthdata login if you do not have automatic authentication set up.
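For reference, the direct-to-Xarray route would look roughly like this (a sketch only: the method names follow icepyx's Read documentation, and the exact signature varies across icepyx versions):

# Sketch of the direct-to-Xarray approach (not run here; verify against your icepyx version)
reader = ipx.Read(gran_ids[1][0])  # point the reader at one of the s3 urls found above
reader.vars.append(beam_list=['gt2r', 'gt2l'], var_list=['h_li', 'latitude', 'longitude'])
ds = reader.load()  # returns an Xarray dataset with the requested variables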

import s3fs
# Authenticate using your NASA Earthdata login credentials; enter your user id and password when prompted
credentials = region.s3login_credentials
s3 = s3fs.S3FileSystem(key=credentials['accessKeyId'],
                       secret=credentials['secretAccessKey'],
                       token=credentials['sessionToken'])
# the first index, [1], gets us into the list of s3 urls
# the second index, [0], gets us the first entry in that list.
s3url = gran_ids[1][0]
# Open the file
%time f = h5py.File(s3.open(s3url,'rb'),'r')
# View its root-level groups
list(f.keys())

Reading the file with h5py allows us to open the entire file, but it is not very intuitive for later analysis. Let’s use h5py with pandas to open the data into DataFrames in a way that is more convenient for our analyses.

# Load the ICESat-2 data. We will just look at the central beams (GT2R/L)
# is2_file = 'processed_ATL06_20190420093051_03380303_005_01_full.h5'
with h5py.File(s3.open(s3url,'rb'), 'r') as f:
    is2_gt2r = pd.DataFrame(data={'lat': f['gt2r/land_ice_segments/latitude'][:],
                                  'lon': f['gt2r/land_ice_segments/longitude'][:],
                                  'elev': f['gt2r/land_ice_segments/h_li'][:]})
    is2_gt2l = pd.DataFrame(data={'lat': f['gt2l/land_ice_segments/latitude'][:],
                                  'lon': f['gt2l/land_ice_segments/longitude'][:],
                                  'elev': f['gt2l/land_ice_segments/h_li'][:]})
    
is2_gt2r.head()

We opened these data into a pandas DataFrame, which is a handy tool for Earth data exploration and analysis. The column names come from the dictionary keys we specified when building each DataFrame, and each row corresponds to an ICESat-2 measurement.

For a tutorial on how to use pandas on this data, check out the ICESat-2 Hackweek Data Integration I tutorial. You can learn more about pandas from this cookbook.
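As a quick example of pandas in action, we can screen out invalid ATL06 segments, which are flagged with a very large float32 fill value in h_li (a sketch; the exact fill value is documented in the ATL06 data dictionary):

# Drop segments whose elevation is the ATL06 fill value (~3.4e38)
is2_gt2r = is2_gt2r[is2_gt2r['elev'] < 1e38]
is2_gt2l = is2_gt2l[is2_gt2l['elev'] < 1e38]
is2_gt2r['elev'].describe()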

2. Acquire non-cloud data and open: ATM data access

Now we show how we access Airborne Topographic Mapper (non-AWS) lidar spot measurements to co-register with the ICESat-2 data.

An airborne campaign called Operation IceBridge was flown across Sermeq Kujalleq as validation for ICESat-2. Onboard was the ATM, a lidar that works at both 532 nm (like ICESat-2) and 1064 nm (near-infrared). More information about Operation IceBridge and ATM may be found here: https://nsidc.org/data/icebridge. Because both data sets are rather large, this can be computationally expensive, so we will only consider one flight track with the ATM 532 nm beam.

Operation IceBridge data is not available on the cloud, so this data was downloaded directly from NSIDC. If you are interested in using IceBridge data, NSIDC has a useful data portal here: https://nsidc.org/icebridge/portal/map

Co-register ICESat-2 with ATM data

# Load the ATM data into a DataFrame
atm_file = os.path.join(parent_dir, 'ILATM2_20190506_151600_smooth_nadir3seg_50pt.csv')
atm_l2 = pd.read_csv(atm_file)

atm_l2.head()

The ATM L2 file contains plenty of information, including surface height estimates and slope of the local topography. It also contains a track identifier - ATM takes measurements from multiple parts of the aircraft, namely starboard, port, and nadir. To keep things simple, we will filter the DataFrame to only look at the nadir track (Track_Identifier = 0).

atm_l2 = atm_l2[atm_l2['Track_Identifier']==0]

# Change the longitudes to be consistent with ICESat-2
atm_l2['Longitude(deg)'] -= 360

print(atm_l2.size)

Let’s take a quick look at where ATM is relative to ICESat-2...

# Subset the ICESat-2 data to the ATM latitudes
is2_gt2r = is2_gt2r[(is2_gt2r['lat']<atm_l2['Latitude(deg)'].max()) & (is2_gt2r['lat']>atm_l2['Latitude(deg)'].min())]
is2_gt2l = is2_gt2l[(is2_gt2l['lat']<atm_l2['Latitude(deg)'].max()) & (is2_gt2l['lat']>atm_l2['Latitude(deg)'].min())]


# Set up a map with the flight tracks as overlays
m = Map(
    basemap=basemap_to_tiles(basemaps.Esri.WorldImagery),
    center=(69.25, 310.35-360),
    zoom=10
)

gt2r_line = Polyline(
    locations=[
        [is2_gt2r['lat'].min(), is2_gt2r['lon'].max()],
        [is2_gt2r['lat'].max(), is2_gt2r['lon'].min()]
    ],
    color="green" ,
    fill=False
)
m.add(gt2r_line)

gt2l_line = Polyline(
    locations=[
        [is2_gt2l['lat'].min(), is2_gt2l['lon'].max()],
        [is2_gt2l['lat'].max(), is2_gt2l['lon'].min()]
    ],
    color="green" ,
    fill=False
)
m.add(gt2l_line)

atm_line = Polyline(
    locations=[
        [atm_l2['Latitude(deg)'].min(), atm_l2['Longitude(deg)'].max()],
        [atm_l2['Latitude(deg)'].max(), atm_l2['Longitude(deg)'].min()]
    ],
    color="orange" ,
    fill=False
)
m.add(atm_line)

m

Looks like ATM aligns very closely with the left beam (GT2L), so hopefully those two data sets will agree. The terrain over this region is quite rough, so we may expect some differences between ATM and GT2R. ICESat-2 also flew over Sermeq Kujalleq 16 days before ATM, so there might be slight differences due to ice movement.
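To make that comparison concrete, we can plot the two elevation profiles against latitude. This is a sketch: it assumes the ATM L2 height column is named 'WGS84_Ellipsoid_Height(m)', as in the ILATM2 icessn csv header; verify the name against your file.

# Compare elevation profiles along-track, using latitude as the shared axis
fig, ax = plt.subplots(figsize=(8,4))
ax.plot(is2_gt2l['lat'], is2_gt2l['elev'], '.', markersize=2, label='ICESat-2 GT2L')
ax.plot(atm_l2['Latitude(deg)'], atm_l2['WGS84_Ellipsoid_Height(m)'], '.', markersize=2, label='ATM nadir')  # column name assumed
ax.set_xlabel('Latitude (deg)')
ax.set_ylabel('Elevation (m)')
ax.legend()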

We have looked at how we can quickly access ICESat-2 and airborne lidar data, and process them using pandas.

3. Search and open (Landsat) raster imagery from the cloud

Let’s now look at a cloud-optimized approach that requires no downloading: we search for and access only the subsets of the data we want. Cloud-optimized formats (e.g., COG, zarr, parquet) can make reading data up to two orders of magnitude faster than non-optimized formats.

We will be working with Cloud Optimized GeoTIFF (COG). A COG is a GeoTIFF file with an internal organization that enables more efficient workflows and avoids having to download the entire image (see more at https://www.cogeo.org/).

Here is the User Manual for more information about accessing Landsat S3.

Search for Landsat imagery

To explore and access COGs easily we will use a SpatioTemporal Asset Catalog (STAC). STAC provides a common metadata format that makes it easier to index and query s3 buckets for geospatial data.

# Sets up AWS credentials for acquiring images through dask/xarray
os.environ["AWS_REQUEST_PAYER"] = "requester"

# Sets up proper AWS credentials for acquiring data through rasterio
aws_session = AWSSession(boto3.Session(), requester_pays=True)

The geometry bounds are extracted from the ICESat-2 KML file used above so that we can perform the Landsat spatial search.

# Extract geometry bounds
geom = jk.geometry[0]
print(geom.bounds)

We will search for imagery in the STAC catalog using the pystac_client search tool.

# Search the STAC API for Landsat images based on a bounding box, date, and other metadata if desired

bbox = (geom.bounds[0], geom.bounds[1], geom.bounds[2], geom.bounds[3]) # (west, south, east, north)

timeRange = '2019-05-06/2019-05-07'
url = 'https://landsatlook.usgs.gov/stac-server'
collection = 'landsat-c2l1' # Landsat Collection 2, Level 1

api = pystac_client.Client.open(url)

items = api.search(
            bbox = bbox,
            datetime = timeRange,
            limit = 400, # This line not required
            collections=collection
        ).item_collection()

print(f'{len(items)} items')

# Write a geojson file that records our search output
gjson_outfile = '/tmp/Landsat.geojson'
items.save_object(gjson_outfile)

We can also include property searches, such as path, row, and cloud cover, with the query argument in api.search, as sketched below.
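For instance, the same search restricted to mostly clear scenes might look like this (a sketch: pystac_client exposes property filtering through its query argument, but exact property support varies by STAC server):

# Repeat the search, keeping only scenes with less than 20% cloud cover
items_clear = api.search(
            bbox = bbox,
            datetime = timeRange,
            collections=collection,
            query={'eo:cloud_cover': {'lt': 20}},
        ).item_collection()
print(f'{len(items_clear)} items with <20% cloud cover')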

We are given a pystac ItemCollection of items (images):

items

Load the geojson file into geopandas and inspect the items we want to collect

# Load the geojson file
gf = gpd.read_file(gjson_outfile)
gf.head(2)

# Plot search area of interest and frames on a map using HoloViz libraries (more on these later)
import hvplot.pandas  # registers the .hvplot accessor on (Geo)DataFrames
cols = gf.loc[:,('id','landsat:wrs_path','landsat:wrs_row','geometry')]
footprints = cols.hvplot(geo=True, line_color='k', hover_cols=['landsat:wrs_path','landsat:wrs_row'], alpha=0.3, title='Landsat 8 T1', tiles='ESRI')
tiles = gv.tile_sources.CartoEco.options(width=700, height=500)
labels = gv.tile_sources.StamenLabels.options(level='annotation')
tiles * footprints * labels

Intake all scenes using the intake-STAC library

Intake-STAC facilitates discovering, exploring, and loading spatio-temporal datasets by providing Intake Drivers for STAC catalogs. This provides a simple toolkit for working with STAC catalogs and for loading STAC assets as xarray objects.

catalog = intake_stac.catalog.StacItemCollection(items)
list(catalog)

Let’s explore the metadata and keys for one of the scenes (here the second item in the catalog):

sceneids = list(catalog)
item3 = catalog[sceneids[1]]
# item3.metadata
for key in item3.keys():
    print(key)

We can explore the metadata for any of these:

item3['blue'].metadata
items[1]
# Either of these approaches provides the url needed to grab data from the s3 bucket using the intake-STAC catalog
print(item3.blue.metadata['alternate']['s3']['href']) # must use the item asset name (blue)
print(items[1].assets['blue'].extra_fields['alternate']['s3']['href'])

Open and visualize an image using rasterio

import rasterio as rio

# Retrieve a scene using rio (items[1] is the second item in the search results)
item_url = items[1].assets['blue'].extra_fields['alternate']['s3']['href']

# Read and plot with grid coordinates
with rio.Env(aws_session):
    with rio.open(item_url) as src:
        fig, ax = plt.subplots(figsize=(9,8))

        # Plot band 1 on the axes
        show((src, 1), ax=ax)

        # Open the data into a numpy array
        profile = src.profile
        arr = src.read(1)
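Because the file is a COG, we can also read just one tile rather than the whole band. A minimal sketch using the same item_url and aws_session:

from rasterio.windows import Window

# Read a single 512x512 tile from the COG, the partial-read pattern COGs are optimized for
with rio.Env(aws_session):
    with rio.open(item_url) as src:
        tile = src.read(1, window=Window(col_off=0, row_off=0, width=512, height=512))
print(tile.shape)  # (512, 512)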

We can open directly into xarray using rasterio...

Manipulating data in Xarray

Pandas and xarray have very similar structures and ways of manipulating data, but pandas excels with 2-D data while xarray is ideal for higher-dimension data. Xarray introduces labels in the form of dimensions, coordinates, and attributes on top of raw NumPy-like arrays.

We will only scratch the surface here on what xarray can do. To learn more, there are great xarray tutorials here: https://xarray-contrib.github.io/xarray-tutorial/online-tutorial-series/01_xarray_fundamentals.html
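As a tiny illustration of those labels (a self-contained sketch, not part of the Landsat workflow):

import numpy as np

# A DataArray is a NumPy array plus named dimensions, coordinate labels, and attributes
da_ex = xr.DataArray(np.arange(6).reshape(2, 3),
                     dims=('y', 'x'),
                     coords={'y': [10.0, 20.0], 'x': [100.0, 110.0, 120.0]},
                     attrs={'units': 'm'})
da_ex.sel(x=110.0)  # label-based selection instead of positional indexing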

RasterIO and RioXarray

We can use rasterIO to easily open into an xarray DataSet:

rastxr = xr.open_dataset(item_url,engine='rasterio')
rastxr

We can also open the data using rioxarray, which integrates rasterio and xarray and is generally the most convenient way of opening rasterio-backed data into xarray:

import rioxarray as rxr

rastrxr = rxr.open_rasterio(item_url)
rastrxr

We can see Attributes have been added to the xarray using the same url.

Beyond what xarray and rasterIO provide, rioxarray has these added benefits (plus others):

  • Supports multidimensional datasets such as netCDF
  • Loads in the CRS, transform, and nodata metadata in standard CF & GDAL locations
  • Supports masking and scaling data
  • Loads raster metadata into the attributes

For more info: https://corteva.github.io/rioxarray/stable/index.html
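For example, the georeferencing metadata is available directly through the .rio accessor (a quick sketch using the rastrxr object from above):

# Inspect the CRS, affine transform, and nodata value loaded by rioxarray
print(rastrxr.rio.crs)          # coordinate reference system, e.g. EPSG:32622
print(rastrxr.rio.transform())  # affine transform mapping pixel to map coordinates
print(rastrxr.rio.nodata)       # nodata value, if one is set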

Dask

Another convenient means for opening a lot of raster data into xarray is using dask. Xarray integrates with Dask to support parallel computations and streaming computation on datasets that don’t fit into memory. So this is perfect when you want to process a lot of data.

Dask divides arrays into many small pieces, called chunks, each of which is presumed to be small enough to fit into memory.

Unlike NumPy, which has eager evaluation, operations on dask arrays are lazy. Operations queue up a series of tasks mapped over blocks, and no computation is performed until you actually ask values to be computed (e.g., to print results to your screen or write to disk). At that point, data is loaded into memory and computation proceeds in a streaming fashion, block-by-block.
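A minimal sketch of this laziness with a plain dask array (assuming dask is installed, which it is wherever xarray's chunked reads work):

import dask.array as da

x = da.ones((4096, 4096), chunks=(512, 512))  # 64 blocks of 512x512; nothing allocated yet
y = (x + 1).mean()   # queues tasks over each block; still no computation
print(y.compute())   # streams block-by-block and returns 2.0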

To expand our xarray toolbox for working with larger data sets that we don’t necessarily want entirely in memory, we will start by reading in 3 bands of a Landsat scene to xarray using dask.

sceneid = items[1]
print(sceneid.id)

band_names = ['red','green','blue']

bands = []

# Construct an xarray dataset for the scene
# Chunk size (x,y): Landsat COGs are natively tiled in 512x512 blocks, so use this or a multiple
for band_name in band_names:
    asset = sceneid.assets[band_name]
    href = asset.extra_fields['alternate']['s3']['href']
    band = xr.open_dataset(href, engine='rasterio', chunks=dict(band=1, x=512, y=512))
    band['band'] = [band_name]
    bands.append(band)
scene = xr.concat(bands, dim='band')
scene

Typically, it’s best to align dask chunks with the way image chunks (typically called “tiles”) are stored on disk or in cloud storage buckets. The Landsat data is stored on AWS s3 in a tiled GeoTIFF format where tiles are 512x512, so we should pick some multiple of that, typically aiming for chunk sizes of ~100 MB (although this is subjective).
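As a rough worked example: one 512x512 float32 chunk is 512 x 512 x 4 bytes, about 1 MiB, so reaching ~100 MB per chunk means scaling up to roughly 5120x5120 pixels (a 10x multiple of the native tile size in each dimension).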

In a way that is similar to pandas, we can easily explore variables in xarray. We will first work with coordinates (the equivalent of indices in pandas). Here x would often be the longitude (and it can in fact be renamed to that):

scene.x

We can also keep track of arbitrary metadata (called attributes) in the form of a Python dictionary, including the crs (projection). The crs here is epsg:32622.

We can apply operations over dimensions by name. Here, if we want to slice the data to only have the blue band (data is found under band_data):

scene['band_data'].sel(band='blue')

Instead of loc (from pandas) we use sel, but they function analogously.

Mathematical operations (e.g., x - y) vectorize across multiple dimensions (array broadcasting) based on dimension names. Let’s determine the mean reflectance for the blue band:

scene['band_data'].sel(band='blue').mean().values

And you can easily use the split-apply-combine paradigm with groupby, which we won’t show here.

Advanced multi-dimensional read-ins, manipulation and visualization

Now let’s open all the bands and multiple days together into an xarray. This will be a more complex xarray dataset with an extra 'time' dimension. We have two images in the catalog that we can include together.

sceneids = list(catalog)
sceneids

Let’s create the time variable first for the xarray time dimension. This finds the desired scene IDs in the geopandas dataframe we have above and extracts their ‘datetime’ information. These datetimes get recorded into an xarray variable object for ‘time’.

# Create time variable for time dim
time_var = xr.Variable('time',gf.loc[gf.id.isin(sceneids)]['datetime'])
time_var

Now we will search and collect band names for grabbing each desired band. We will just grab the bands that have 30 m pixels. This provides an example of how you can search data in the STAC catalog.

band_names = []

# Get band names for the bands with 30 m resolution from the second scene in our sceneids
sceneid = catalog[sceneids[1]]
for k in sceneid.keys():
    M = getattr(sceneid, k).metadata
    if 'eo:bands' in M:
        resol = M['eo:bands'][0]['gsd']
        print(k, resol)
        if resol == 30: 
            band_names.append(k)
            
# Add qa band
band_names.append('qa_pixel')

And now open all of it...

# Import to xarray
# In the xarray dataset, NaNs appear where concatenating multiple scenes has expanded the grid (i.e., different path/rows)
scenes = []

for sceneid in items:
    print(sceneid.id)

    bands = []

    # Construct an xarray dataset for each scene: open each band and concatenate them into a scene,
    # then append each scene and concatenate along time to create the full dataset
    for band_name in band_names:
        asset = sceneid.assets[band_name]
        href = asset.extra_fields['alternate']['s3']['href']
        band = xr.open_dataset(href, engine='rasterio', chunks=dict(band=1, x=512, y=512))
        band['band'] = [band_name]
        bands.append(band)
    scene = xr.concat(bands, dim='band')
    scenes.append(scene)

# Concatenate scenes with time variable
ls_scenes = xr.concat(scenes, dim=time_var)

ls_scenes

We now have 2 Landsat scenes, with all of the bands we are interested in, stored in an xarray dataset. You can imagine this exact code scaling to years’ worth of data and bands.

From here, we can easily subset one image at a time or the entire xarray dataset:

sbands = ['blue', 'nir08', 'swir16']

# Select a datetime (here the second of our two scenes)
t = ls_scenes.time.values[1]
print(t)

# Upper left and lower right coordinates for subsetting to Sermeq Kujalleq area
ulx = 300000
uly = 7695000
lrx = 330000
lry = 7670000

# Subset xarray to specific time, bands, and x/y locations
image = ls_scenes['band_data'].sel(time=t,band=sbands,y=slice(lry,uly),x=slice(ulx,lrx))
# Plot only the blue band
fig, ax = plt.subplots(figsize=(8,6))
image.sel(band='blue').plot()

Since this data is in UTM 22N, we can reproject to the standard lat/lon coordinate system (WGS-84) and map with the ICESat-2 and ATM lines. We must first assign the correct projection to the data using rioxarray write_crs. Then we reproject to WGS-84. We manually add the max/min longitudes for the altimetry data.

image.rio.write_crs("epsg:32622", inplace=True) # inplace=True updates the variable in place, without reassigning it
image = image.rio.reproject(4326)

ISlats = [is2_gt2r['lat'].min(), is2_gt2r['lat'].max()]
# ISlons = (is2_gt2r['lon'].max(), is2_gt2r['lon'].min())
ISlons = [-55.624,-55.646]

ATMlats = [atm_l2['Latitude(deg)'].min(), atm_l2['Latitude(deg)'].max()]
# ATMlons = [atm_l2['Longitude(deg)'].max(), atm_l2['Longitude(deg)'].min()]
ATMlons = [-55.624,-55.646]

fig, ax = plt.subplots(figsize=(8,6))
image.sel(band='blue').plot()
plt.plot(ISlons,ISlats,color = 'green')
plt.plot(ATMlons,ATMlats,color = 'orange')

4. Summary

Congratulations! You’ve completed the tutorial. In this tutorial you have gained the skills to:

  • Search for both optimized and non-optimized cloud data
  • Open data into pandas and xarray dataframes/arrays, and
  • Manipulate, visualize, and explore the data

We have concluded by mapping the three data sets together.

Credits

  • notebook by: Jessica Scheick, Tasha Snow, Zach Fair, Ian Joughin
  • source material: is2-nsidc-cloud.py by Brad Lipovsky