Rotating/Shifting and Exporting Raster Data

Matthias Sammer
5 min read · Dec 23, 2022


Recently I ran into problems with rotated and shifted raster data when exporting it to .tif files, despite using the correct coordinate reference system (CRS) and affine transformation. The problem occurred with both rasterio and rioxarray (Python libraries).

I started with a series of clipped images from Google Dynamic World:

Image 1: Google Dynamic World data

I analyzed how pixel values changed and translated this information into change maps. But when I exported the data to a .tif file, I always ended up with rotated and mirrored images (image 2 below).

Image 2: Change map

Background and Setup

I queried some data from Google Dynamic World and clipped an AOI (image 1 above) using a GeoJSON and rasterio’s clipping method. I was interested in whether and how label classes changed over time (each pixel of an image is associated with a class). For simplicity, I focused only on one class, tracking changes over four distinct time periods.

  • First, I created a 3-dimensional array with the z-axis representing the time.
  • Second, I extracted all 1-dimensional arrays along the z-axis, such that each element of the array represents pixel values across time.
  • Third, I analyzed all arrays, looking for relevant changes.
  • Fourth and lastly, I stored all results in a new data array with the same shape as the initial time series.
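The four steps above can be sketched roughly as follows. This is a minimal, vectorized version of the per-pixel analysis described above; the array shape, the tracked class ID, and the definition of a "relevant change" (any class switch between consecutive time steps) are all placeholder assumptions:

```python
import numpy as np

# Hypothetical stack of four label rasters (time, y, x); values are class IDs.
stack = np.random.randint(0, 9, size=(4, 100, 100))

# Track only one class (here class 1): 1 where the pixel has that class, else 0.
target = (stack == 1).astype(np.int8)

# Result array with the same shape as the time series: layer t marks pixels
# whose membership in the tracked class changed between step t-1 and step t.
output = np.zeros_like(target)
output[1:] = np.abs(np.diff(target, axis=0))
```

In practice the real analysis may look at each 1-dimensional pixel series individually, but the result has the same shape either way.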

Using rasterio’s built-in show function yields a very twisted image of the data array:

from rasterio.plot import show
show(output)

I thought simply transposing and flipping the array would give me the desired result.

import numpy as np

a = output.transpose(0, 2, 1)
b = np.flip(a, axis=1)
show(b)

I used the following method to export the array to a .tif file.

import rasterio
import rioxarray
import xarray as xr

# Load source image for reference
src_data = rasterio.open("data_\\source_tif.tif", "r")

# Get CRS from your source image
crs = src_data.crs

# Get Affine form source image
transform = src_data.meta['transform']

# Transform numpy array to xarray
output = xr.DataArray(output)

# Affiliate CRS with xarray
output = output.rio.write_crs(crs)

# Renaming dimensions
output = output.rename({'dim_0':'z','dim_1': 'y','dim_2': 'x'})

# Export data to tif file
output.rio.to_raster('output.tif')

I thought this would work, but what I ended up with was this:

Image 3: Stacked data — Dynamic World and change map

So, what is going on?

I would have expected both images to overlap perfectly, with the grey image covering the Google Dynamic World image almost entirely. I suspected that the reference systems somehow got mixed up during the export process.

Affine Transformation

After several hours of tedious research, bug fixing and blunt trial and error, I figured out the problem: the affine transformation.

An affine transformation is a geometric transformation that scales, rotates, skews, and/or translates images or coordinates between any two Euclidean spaces. It is used to define the relationship between pixels and an image coordinate system for interior orientation or other geometric transformations.

Remember that we derived both the CRS and the affine transformation parameters from the Google Dynamic World reference data.

Calling transform returns six parameters: a, b, c, d, e, and f.

transform
Affine(10.0, 0.0, 543350.0, 0.0, 10.0, 5337340.0)

a and e represent the resolution of the underlying data, 10 meters per pixel. Since Google Dynamic World is based on Sentinel-2 data, this makes sense.

c and f define the coordinates of the raster origin; in this case, 543350.0 and 5337340.0 define the lower left corner. b and d are the rotation/shear terms, both zero here.

So each pixel of the image is mapped onto the coordinate reference system, starting at the lower left corner in steps of 10 (meters per pixel).
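Written out in plain Python, applying the six parameters to a pixel index (col, row) mirrors what multiplying an Affine object by a coordinate pair does:

```python
# Affine(a, b, c, d, e, f) maps a pixel index (col, row) to map coordinates:
#   x = a*col + b*row + c
#   y = d*col + e*row + f
a, b, c, d, e, f = 10.0, 0.0, 543350.0, 0.0, 10.0, 5337340.0

def pixel_to_coords(col, row):
    return (a * col + b * row + c, d * col + e * row + f)

print(pixel_to_coords(0, 0))  # (543350.0, 5337340.0) -> the origin
print(pixel_to_coords(1, 1))  # (543360.0, 5337350.0) -> one 10 m step along each axis
```

With both a and e positive, every step along a row or column index moves the mapped coordinate in the positive x and y direction — which is exactly why the exported image ends up mirrored relative to the usual "origin at the top left, y decreasing downward" raster convention.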

Let’s illustrate this by first defining three additional transforms (note that the only difference between transform and transform_2 is the negative sign at position e):

from affine import Affine

transform_2 = Affine(10.0, 0.0, 543350.0, 0.0, -10.0, 5337340.0)
transform_3 = Affine(-10.0, 0.0, 543350.0, 0.0, -10.0, 5337340.0)
transform_4 = Affine(-10.0, 0.0, 543350.0, 0.0, 10.0, 5337340.0)

Plotting all outputs on top of the initial Google Dynamic World data yields:

Image 4: Rotated change maps

Whoops, what happened?

Let’s go through all the new images with transforms 2–4:

transform_2: Image pixels are mapped from the same lower left corner as the initial output, but the resolution value at position e is negative. Thus, pixels along the y-axis are mapped in the negative direction, resulting in a vertically mirrored image.

transform_3: Again the same mapping origin, but with both resolution values negative. Thus, pixels along both the x-axis and the y-axis are mapped in the negative direction.

transform_4: The same mapping origin again, but this time only the resolution value a is negative.
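The mirroring caused by transform_2 can be seen directly from the parameter arithmetic: with e = -10, increasing row indices map to decreasing y coordinates.

```python
# transform_2 parameters: identical to transform except for the sign of e
a, b, c, d, e, f = 10.0, 0.0, 543350.0, 0.0, -10.0, 5337340.0

# Row 1 now maps 10 m *below* the origin instead of above it,
# which mirrors the image along the y-axis.
y_row0 = d * 0 + e * 0 + f  # 5337340.0
y_row1 = d * 1 + e * 1 + f  # 5337330.0
print(y_row0, y_row1)
```

Since the origin is still the lower left corner, the whole image now hangs below the AOI — which is why transform_2 alone is not enough and the origin has to move as well.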

Rotate and Shift

Having a basic understanding of the affine transformation, the question is: how do we shift transform_2 so that it overlaps with the Google Dynamic World data?

We simply shift the origin of transform_2 up by the image height, factoring in the underlying resolution of 10 meters per pixel.

transform_5 = Affine(10.0, 0.0, 543350.0, 0.0, -10.0, 5337340.0 + src_data.height * 10)

And this yields…

…perfectly aligned images!

Image 5: Rotated and shifted change map

Note that the grey area does not completely cover the Google Dynamic World data, simply because no changes occurred in the uncovered regions.

I would like to hear your feedback. Drop me a message in the comment section below, or via LinkedIn.

Stay tuned,

this is Matthias
