
Google Earth Engine Exercise

This exercise provides a hands-on application of Google Earth Engine (GEE) with reproducible code that you can adapt to your own case studies. The online GEE platform relies on the JavaScript programming language, which we don't have time to introduce here. The following examples are hopefully self-explanatory. Things to remember are:

  • Any new variable is defined using var (e.g., var myName = ...)
  • Each command is usually terminated by a semicolon ;
  • All GEE-specific functions have been linked to their respective documentation page. There is no shortcut: the documentation page is the place to understand what a function does and how it works.
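As a minimal illustration of the first two rules, here is a complete snippet in ordinary JavaScript (the variable names are arbitrary; in the GEE Code Editor you would typically use print instead of console.log to write to the Console tab):

```javascript
// Declare a new variable with `var` and end each statement with a semicolon.
var myName = 'Mauna Loa';
// Strings concatenate with `+`, as in any JavaScript environment.
var greeting = 'Hello, ' + myName + '!';
console.log(greeting); // prints "Hello, Mauna Loa!"
```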

The image below shows the main web-based interface. Go through the exercise below, copy the code snippets to the code part of the interface and click the Run button to run it. Feel free to fiddle with the parameters!

(Screenshot: the GEE Code Editor interface)

Part 1: DEM

In this part, we will download a tile of NASA's 30 m SRTM digital elevation model (DEM) using GEE. Visit the catalogue page for this dataset for more information. Specifically, we will see how to:

  1. Request and visualise the dataset on GEE
  2. Clip it to a region of interest (ROI)
  3. Create a hillshade for use as a basemap on your own maps
  4. Export both

This whole part focuses on the Big Island of Hawaii.

Load the DEM

Let's start by defining the dataset (srtm) and the polygon that will be used to define your ROI (roi):

  • srtm is an ee.Image → i.e. a raster dataset. The dataset description specifies that it has only one band, called elevation.
  • roi is an ee.Geometry.Polygon → i.e. a vector dataset. It is defined here as a rectangle, with each pair of coordinates defining a node of the polygon.
// Load the SRTM DEM data
var srtm = ee.Image('USGS/SRTMGL1_003').select('elevation');

// Define the region of interest (ROI) as a polygon
var roi = ee.Geometry.Polygon([
    [
        [-156.9, 18.9],  // Bottom left
        [-154.7, 18.9],  // Bottom right
        [-154.7, 20.3],  // Top right
        [-156.9, 20.3],  // Top left
        [-156.9, 18.9]   // Closing point (same as bottom left)
    ]
]);

Coordinates

In GEE, coordinates are often defined as pairs of lon/lat (→ x,y) rather than pairs of lat/lon (→ y,x).
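If your coordinates come from a source that lists them as lat/lon (as most maps and GPS devices do), a small helper can swap them into the lon/lat order GEE expects. This is a plain-JavaScript sketch (a hypothetical convenience function, not part of the GEE API), using an approximate position for Hilo, Hawaii as an example:

```javascript
// Swap a [lat, lon] pair into the [lon, lat] (x, y) order used by ee.Geometry.
function latLonToLonLat(pair) {
    return [pair[1], pair[0]];
}

// Hilo, Hawaii, as commonly written in lat/lon:
var hilo = latLonToLonLat([19.7, -155.1]);
console.log(hilo); // [-155.1, 19.7]
```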

Draw a polygon

You can also draw a polygon directly in the map view. Make sure you rename the geometry accordingly.

(Screenshot: drawing a geometry in the map view)

Visualise the DEM

Here we look at some useful parameters for data visualisation. We define two variables:

  • srtm_palette: A list of colours in Hex colour code used to define a colormap. This one is based on the terrain colormap defined in Matplotlib. These colours will be mapped over the range of elevations.
  • srtm_vis: A JavaScript dictionary used by GEE to specify the dataset's display properties. We use srtm_palette as the colormap and stretch it between elevations of -100 and 2000 m asl.
// Define the custom terrain colormap
var srtm_palette = ["#333399","#1470D6","#00A8D0","#15D06A","#75E37D","#D1F690","#E8E28D","#BAA774","#8A695A","#A38984","#D1C4C1","#FFFFFF"];

// Define visualization parameters for the SRTM DEM
var srtm_vis = {
    min: -100.0,
    max: 2000.0,
    palette: srtm_palette
};
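To see what "stretching" the palette means, here is a plain-JavaScript sketch of the idea (not how GEE implements its rendering internally): an elevation is rescaled linearly between min and max, clamped, and mapped onto an index of srtm_palette:

```javascript
var srtm_palette = ["#333399","#1470D6","#00A8D0","#15D06A","#75E37D","#D1F690","#E8E28D","#BAA774","#8A695A","#A38984","#D1C4C1","#FFFFFF"];

// Map a value onto a palette colour by linear min/max stretch; values outside
// the range are clamped to the first/last colour.
function stretchToPalette(value, min, max, palette) {
    var t = (value - min) / (max - min);           // normalise to [0, 1]
    t = Math.max(0, Math.min(1, t));               // clamp
    var idx = Math.round(t * (palette.length - 1));
    return palette[idx];
}

console.log(stretchToPalette(-100, -100, 2000, srtm_palette)); // "#333399" (lowest colour)
console.log(stretchToPalette(2000, -100, 2000, srtm_palette)); // "#FFFFFF" (highest colour)
```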

We can now add the SRTM dataset to the map display. This is done in two steps:

  1. Set the map extent using Map.setCenter, which takes as arguments a pair of lon/lat coordinates used to center the map and a zoom level.
  2. Add the data using Map.addLayer, which takes 3 arguments:
    1. The data → srtm
    2. A visualisation dictionary → srtm_vis
    3. A name → A string of your own choice
Map.setCenter(-155.5, 19.5, 9);
Map.addLayer(srtm, srtm_vis, 'SRTM DEM');

Compute and visualise the hillshade

The DEM is interesting but doesn't really allow for an intuitive visualisation of the terrain. For basemaps, we commonly use a hillshade rather than a raw DEM: an artificial light source mimicking the sun is projected onto the terrain to reveal its relief. In GEE, this can be achieved using the ee.Terrain.hillshade function, which takes an image (→ srtm) as input. In order to accentuate the intensity of the hillshade, we can also multiply the raw DEM by a coefficient prior to computing the hillshade.

// Set coefficient to accentuate the hillshade
var exaggeration = 3;
// Compute the hillshade
var hillshade = ee.Terrain.hillshade(srtm.multiply(exaggeration));
// Add to the map
Map.addLayer(hillshade, null, 'SRTM Hillshade');
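For reference, the standard Lambertian hillshade formula (the general idea behind hillshading; ee.Terrain.hillshade's exact implementation may differ in details) computes a grey value from the terrain's slope and aspect and the light source's azimuth and zenith. This plain-JavaScript sketch also shows why exaggerating the DEM works: steeper slopes produce stronger contrast:

```javascript
// intensity = 255 * (cos(z)*cos(s) + sin(z)*sin(s)*cos(a - asp))
// where z = sun zenith, s = slope, a = sun azimuth, asp = aspect (degrees here).
function hillshade(slopeDeg, aspectDeg, azimuthDeg, zenithDeg) {
    var d = Math.PI / 180; // degrees → radians
    var v = Math.cos(zenithDeg * d) * Math.cos(slopeDeg * d) +
            Math.sin(zenithDeg * d) * Math.sin(slopeDeg * d) *
            Math.cos((azimuthDeg - aspectDeg) * d);
    return Math.round(255 * Math.max(0, v)); // clamp shadowed slopes to 0
}

// A flat surface lit from 45° above the horizon:
console.log(hillshade(0, 0, 315, 45)); // 180 (255 * cos(45°) ≈ 180)
// A 30° slope facing the light source is much brighter:
console.log(hillshade(30, 315, 315, 45)); // 246
```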

Visualisation

Whereas a DEM stores an elevation value at each pixel, a hillshade stores no physical quantity. Instead, it stores a grey-scale intensity ranging from 1 (black) to 255 (white). Therefore, we don't really need a visualisation dictionary in this case, and we just pass null to the addLayer function.

Export the data

This step exports the data as a GeoTIFF to your Google Drive, so that you can later open it in QGIS, using the Export.image.toDrive function. Note the scale argument: it defines the resolution at which the dataset will be exported.

  • In the case of the SRTM DEM, we can use a scale of 30 to retain the original 30 m resolution of the dataset.
  • You can also specify the EPSG code of the coordinate system you want to use.
// Export the DEM
Export.image.toDrive({
    image: srtm, // The image to export (ee.Image)
    description: 'SRTM_DEM', // Task name (string)
    folder: 'EarthEngineExports', // Subfolder in your Google Drive (string)
    fileNamePrefix: 'SRTM_DEM', // Name of the exported file (string)
    region: roi, // The region to export (ee.Geometry)
    scale: 30, // The resolution in meters (integer)
    crs: 'EPSG:4326'
});

// Export the hillshade
Export.image.toDrive({
    image: hillshade, // The image to export (ee.Image)
    description: 'SRTM_hillshade', // Task name (string)
    folder: 'EarthEngineExports', // Subfolder in your Google Drive (string)
    fileNamePrefix: 'SRTM_hillshade', // Name of the exported file (string)
    region: roi, // The region to export (ee.Geometry)
    scale: 30, // The resolution in meters (integer)
    crs: 'EPSG:4326'
});
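The scale argument directly controls the size of the exported raster. As a rough back-of-the-envelope check (a plain-JavaScript sketch assuming ~111,320 m per degree, which ignores the cos(latitude) shrinking of longitude degrees), you can estimate the pixel dimensions of an export before launching the task:

```javascript
// Rough pixel dimensions of an export, assuming ~111,320 m per degree.
function exportSizePx(lonSpanDeg, latSpanDeg, scaleMeters) {
    var mPerDeg = 111320;
    return {
        width: Math.round(lonSpanDeg * mPerDeg / scaleMeters),
        height: Math.round(latSpanDeg * mPerDeg / scaleMeters)
    };
}

// The Big Island ROI spans 2.2° of longitude and 1.4° of latitude:
console.log(exportSizePx(2.2, 1.4, 30)); // roughly 8163 x 5195 pixels at 30 m
```

Doubling the scale would quarter the pixel count, which is worth keeping in mind for large ROIs.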

You can now start the tasks from the Task panel in the editor. When completed, the exported file will appear in your Google Drive.

(Screenshot: the Tasks panel)

Part 2: Sentinel 2 imagery

The second part introduces some common manipulations for working with multispectral satellite imagery. We will use this Sentinel 2 dataset. There are a few notable differences compared to the SRTM dataset:

  • It is stored as an ee.ImageCollection, meaning that it contains multiple images along a dimension, here time (i.e., multiple images are acquired over time and regularly added to the collection) → this implies that we can filter the image collection through time.
  • Whereas the SRTM dataset contained a single band, the Sentinel 2 dataset contains multiple bands → this implies that we can choose which bands to use in the analysis.

Sentinel 2 is a passive remote sensing sensor, meaning that it uses sunlight as its primary signal. This means that we face the issue of cloud cover. Sentinel 2 is also multispectral, meaning that the sensor captures images in different bands of the electromagnetic spectrum, reflected in bands 1-12 of the band description.

Using La Palma in the Canary Islands as an example, we will:

  1. Explore how to access cloud-free imagery
  2. Investigate how bands captured in different parts of the electromagnetic spectrum can be used for different purposes
  3. See how we can create a cloud-free composite from multiple images in the collection

Generating single images

As before, we start by defining:

  1. A ROI for La Palma
  2. The dataset, here the COPERNICUS/S2_SR_HARMONIZED image collection, which we will filter spatially and temporally.
    • filterBounds keeps the images that intersect the la_palma_roi polygon;
    • filterDate filters images acquired within a time range.
  3. We print the information about the image collection, which will appear in the Console tab on the right of the window. By exploring the results, you can see that the collection contains 862 images, and that each image contains 26 bands. If you further inspect the properties of a single image, you will find a metadata property called system:time_start. Keep this in mind for later.

While we're at it, let's also center the map.

// Center the map 
Map.setCenter(-17.87, 28.682, 11);

// Define the region of interest (ROI) for La Palma, Canary Islands
var la_palma_roi = ee.Geometry.Polygon([
    [
        [-18.005, 28.465],  // Southwest
        [-17.715, 28.465],  // Southeast
        [-17.715, 28.900],  // Northeast
        [-18.005, 28.900],  // Northwest
        [-18.005, 28.465]   // Closing point
    ]
]);

// Define the Sentinel 2 collection
var S2_col = ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED");

// Filter by ROI and time
var S2 = S2_col
    .filterBounds(la_palma_roi)
    .filterDate('2022-05-01', '2024-08-31');

// Print the information of the image collection 
print(S2);

Setting up the visualisation

Let's now set up the visualisation dictionary. The visualisation is slightly different here. For the SRTM dataset, we used a colormap mapped on the elevation band. Here we want to use any of the bands in the Sentinel 2 collection and assign them to the RGB channels of the displayed image. We will start by using bands captured in the visible portion of the electromagnetic spectrum:

Sentinel 2 band    Image channel
B4                 Red
B3                 Green
B2                 Blue

We also use the gamma parameter to slightly increase the brightness of the image.

// Define the visualisation dictionary for RGB
var S2_vis_RGB = {
    bands: ['B4', 'B3', 'B2'], // RGB channels
    min: 0,         // You can lower 'min' to brighten shadows/dark areas
    max: 3000,      // 'max' controls the upper brightness
    gamma: 1.2      // 'gamma' can be used to brighten the image (values >1 brighten)
};
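As a sketch of what min, max and gamma do to a single reflectance value (a plain-JavaScript approximation of the idea, not GEE's exact rendering code): the value is first stretched linearly to [0, 1], clamped, then raised to the power 1/gamma, so gamma > 1 brightens mid-tones:

```javascript
// Map a reflectance value to a display value in [0, 255] using min/max + gamma.
function displayValue(v, min, max, gamma) {
    var t = (v - min) / (max - min);                 // linear stretch to [0, 1]
    t = Math.max(0, Math.min(1, t));                 // clamp out-of-range values
    return Math.round(255 * Math.pow(t, 1 / gamma)); // gamma > 1 brightens
}

// A mid-range reflectance of 1500 with the S2_vis_RGB settings:
console.log(displayValue(1500, 0, 3000, 1.2)); // 143, vs 128 with gamma = 1
```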

Accessing the most recent image

Let's now get the latest image in the collection and visualise it.

  • The first part of the code filters the collection spatially using filterBounds to filter the images that overlap with la_palma_roi.
  • The second part of the code filters the collection temporally: it uses sort to order the collection according to the metadata system:time_start in descending order (i.e., latest acquisition time first) and takes the first image using first.

We then print the resulting variable, which is now an ee.Image (and not an ee.ImageCollection), and add it to the map canvas.

// Filter the collection
var S2_latest = S2.filterBounds(la_palma_roi).sort('system:time_start', false).first();

// Print the image's properties 
print(S2_latest);

// Display the image 
Map.addLayer(S2_latest, S2_vis_RGB, 'S2_latest');

This is cool but there are two problems:

  1. Well... ☁
  2. The image does not cover the island entirely. This is due to the swath covered by the satellite pass.

So all in all, more to do!

Finding cloud-free scenes

The first option is to find cloud-free scenes, which can be done using various explorers, such as this one for Sentinel. We search for one image before and one after the 2021 eruption, and identify two suitable dates: 2018-12-20 and 2024-12-18. Let's retrieve them using GEE.

// Before image
var S2_before = S2_col
  .filterBounds(la_palma_roi)
  .filterDate('2018-12-19', '2018-12-21');

// After image
var S2_after = S2_col
  .filterBounds(la_palma_roi)
  .filterDate('2024-12-17', '2024-12-19');

// Display the image 
Map.addLayer(S2_before, S2_vis_RGB, 'S2_before');
Map.addLayer(S2_after, S2_vis_RGB, 'S2_after');

Alright, we can now visualise the two scenes and get a first-order estimate of the direct impact of the lava flow. You will observe some differences in shading in the valley, which are due to varying atmospheric conditions at acquisition.

Cloud-free composite

It can be tricky to find cloud-free scenes in some regions of the world, in which case it becomes necessary to create a cloud-free composite. We start again from the S2_col collection and apply the following steps:

  1. Filter the collection spatially and temporally with filterBounds and filterDate;
  2. Filter the collection by month to keep images acquired around the same time of year over multiple years, using ee.Filter.calendarRange;
  3. Filter the collection to keep only images below a maximum cloud cover (here 10%), using ee.Filter.lt (i.e., less than) on the metadata 'CLOUDY_PIXEL_PERCENTAGE';
  4. Define a custom function maskS2clouds to mask the clouds in each image of the collection using map;
  5. Take the median of each pixel across the collection, which returns a ee.Image.

To understand the maskS2clouds function, we again need to look at the band description of the Sentinel 2 dataset, where we find an SCL (Scene Classification Map) band. Looking at the Sentinel 2 manual, we see that this band classifies each pixel in a scene. maskS2clouds builds a mask for each image following these rules:

  • scl.eq(3): flags pixels classified as cloud shadows (class 3)
  • .or(scl.gte(7).and(scl.lte(10))): also flags classes 7-10, which include pixels classified as Unclassified, Cloud medium probability, Cloud high probability and Thin cirrus
  • image.updateMask(mask.not()): hides the flagged pixels from the image and returns the masked image
// Function to mask clouds based on the SCL band
function maskS2clouds(image) {
    var scl = image.select('SCL'); // Select the Scene Classification Layer (SCL)
    var mask = scl.eq(3)
        .or(scl.gte(7).and(scl.lte(10))); // Create a mask based on SCL values
    return image.updateMask(mask.not()); // Apply the mask to the image
}
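To make the masking rule concrete, here is the same logic written against a plain array of SCL class values (hypothetical pixel classes, evaluated in ordinary JavaScript rather than on ee.Image objects):

```javascript
// true = pixel should be masked out (cloud shadow, unclassified, cloud, cirrus).
function isMasked(scl) {
    return scl === 3 || (scl >= 7 && scl <= 10);
}

// A row of pixels: vegetation (4), cloud shadow (3), cloud high prob. (9), water (6)
var sclRow = [4, 3, 9, 6];
var keep = sclRow.map(function (scl) { return !isMasked(scl); });
console.log(keep); // [true, false, false, true]
```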


// Create a cloud-free composite by filtering, masking, and compositing the images
var S2_composite = S2_col
    .filterBounds(la_palma_roi) // Spatial filtering
    .filterDate('2022-05-01', '2024-08-31') // Temporal filtering
    .filter(ee.Filter.calendarRange(6, 8, 'month')) // Seasonal filtering (June to August)
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10)) // Cloud cover filtering
    .map(maskS2clouds) // Apply cloud masking function
    .median(); // Composite by taking the median value for each pixel across the collection

// Add the composite to the map
Map.addLayer(S2_composite, S2_vis_RGB, 'S2_composite');
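The idea behind the median composite can be sketched in plain JavaScript: for one pixel location, collect the values from all images in the collection, ignore the observations masked by maskS2clouds, and take the median of what remains (the values below are hypothetical, not real Sentinel 2 data):

```javascript
// Median of the unmasked observations at one pixel (null = masked as cloudy).
function medianComposite(values) {
    var valid = values
        .filter(function (v) { return v !== null; })
        .sort(function (a, b) { return a - b; }); // numeric sort
    var n = valid.length;
    if (n === 0) return null; // pixel masked in every image
    return n % 2 === 1 ? valid[(n - 1) / 2] : (valid[n / 2 - 1] + valid[n / 2]) / 2;
}

// Five acquisitions of one pixel; two are cloudy, one is a bright cloud edge (5000):
console.log(medianComposite([1200, null, 900, 5000, null])); // 1200 — the outlier is ignored
```

This is why the median is preferred over the mean: a few residual bright (cloud) or dark (shadow) outliers barely affect it.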

The resulting image shows some artefacts, but it is often the best we can do.

Exploring band combinations

Until now, we have used bands captured in the visible part of the electromagnetic spectrum and mapped them to the RGB channels of the displayed image. Let's try to use different band combinations, not necessarily captured in the visible spectrum.

Geology

Geological applications tend to use the shortwave infrared (SWIR) bands to reveal geological features (e.g., faults, lithology, and geological formations). Sentinel 2 captures two bands in the SWIR part of the spectrum, B11 and B12. We complement them with the B2 blue band. For this, we only need to modify the visualisation dictionary.

Sentinel 2 band    Image channel
B12                Red
B11                Green
B2                 Blue

// Define the visualisation 
var S2_vis_geol = {
    bands: ['B12', 'B11', 'B2'],
    min: 0,
    max: 3000,
    gamma: 1.2 
};

Map.addLayer(S2_after, S2_vis_geol, 'S2_after_geol');

This reveals pretty well the youngest lava flows along the Cumbre Vieja ridge to the south, which is the currently active part of the island!

NDVI

Another common application of multispectral imagery is spectral indices, the most frequently used of which is the Normalised Difference Vegetation Index, or NDVI. The idea is the following:

  • Vegetation absorbs red (and, intuitively, reflects green!)
  • Vegetation also reflects infrared, which is not visible to the human eye.
  • Defining an index using the red and infrared bands will result in a proxy for vegetation "health".

The NDVI is defined as:

\[ \mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}} \]

where \(\mathrm{NIR}\) is the reflectance in the near-infrared band (B8 in Sentinel 2) and \(\mathrm{Red}\) is the reflectance in the red band (B4 in Sentinel 2). GEE has the function normalizedDifference to compute normalised differences between two bands.
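The formula itself is simple arithmetic; here it is in plain JavaScript, with hypothetical reflectance values for illustration (normalizedDifference performs the equivalent computation per pixel on ee.Image bands):

```javascript
// NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].
function ndvi(nir, red) {
    return (nir - red) / (nir + red);
}

// Healthy vegetation reflects much more NIR than red:
console.log(ndvi(3000, 600)); // ≈ 0.67 → dense vegetation
// Fresh lava reflects similarly (and weakly) in both bands:
console.log(ndvi(800, 700));  // ≈ 0.07 → little or no vegetation
```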

Here, just as for the SRTM, the resulting image contains a single band. We define a visualisation dictionary that maps a colormap blue→white→green in the range [-0.9, 0.9].

var S2_vis_ndvi = {
  bands: ['NDVI'],
  min: -0.9,
  max: 0.9,
  palette: ['blue', 'white', 'green']
};

// Mosaic the collection into a single image and compute the NDVI
var S2_after_NDVI = S2_after.mosaic().normalizedDifference(['B8', 'B4']).rename('NDVI');

// Add the NDVI layer to the map
Map.addLayer(S2_after_NDVI, S2_vis_ndvi, 'S2_after_NDVI');

GEE resources