Collection of validation data in the context of remote sensing based forest monitoring

Tutorial for the EON Summer School 2025

Author

Paul Magdon[University of Applied Sciences and Arts (HAWK), paul.magdon@hawk.de]

Published

August 26, 2025

rm(list=ls())
library(sf)
Linking to GEOS 3.13.1, GDAL 3.11.0, PROJ 9.6.0; sf_use_s2() is TRUE
library(terra)
terra 1.8.60
library(ggplot2)
library(rprojroot)
library(patchwork)

Attaching package: 'patchwork'
The following object is masked from 'package:terra':

    area
data_dir <- paste0(find_rstudio_root_file(), "/block1_magdon/data/")

Introduction

In this tutorial we will explore the principles of design-based sampling. The simulation part is based on a presentation by Gerard Heuvelink from Wageningen University, given at the OpenGeoHub Summer School[https://opengeohub.org/summer-school/ogh-summer-school-2021/]. The learning objectives are:

  1. Learn how to draw a spatial random sample
  2. Learn how to draw a systematic grid for a given area of interest
  3. Run a simulation for design-based sampling

Data sets

For demonstration purposes we will work with a map of forest above-ground biomass (AGB) produced by the European Commission's Joint Research Centre (JRC) (Joint Research Centre (2020), http://data.europa.eu/89h/d1fdf7aa-df33-49af-b7d5-40d226ec0da3).

To provide a synthetic example, we will assume that this map (agb_pop) is an error-free representation of the population. Additionally, we use a second map (agb_model), compiled with a random forest (RF) machine-learning model, which also depicts the AGB distribution.

Before you start, download the following files:

  1. AGB_pop
  2. AGB_model
  3. National park boundary

Place all files in the subfolder 'data' of this tutorial.

# Import the boundary of the national park and reproject it to UTM zone 32N (EPSG:25832)
np_boundary <- st_transform(st_read(paste0(data_dir, "nlp-harz_aussengrenze.gpkg")), 25832)
Reading layer `nlp-harz_aussengrenze' from data source 
  `D:\EON2025\block1_magdon\data\nlp-harz_aussengrenze.gpkg' 
  using driver `GPKG'
Simple feature collection with 1 feature and 3 fields
Geometry type: MULTIPOLYGON
Dimension:     XY
Bounding box:  xmin: 591196.6 ymin: 5725081 xmax: 619212.6 ymax: 5751232
Projected CRS: WGS 84 / UTM zone 32N
agb_pop <- terra::rast(paste0(data_dir,"agb_np_harz_truth.tif"))

agb_model <-terra::rast(paste0(data_dir,"agb_np_harz_model.tif"))

If we assume \(z(x_i)\) (agb_pop) to be an exact representation of the population, we can calculate the Root Mean Square Error (RMSE) from the differences between the model predictions \(\hat{z}(x_i)\) and the population map:

\[ RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(z(x_{i})-\hat{z}(x_{i})\right)^2} \]

RMSE_pop = as.numeric(sqrt(terra::global((agb_pop-agb_model)^2,fun='mean',na.rm=TRUE)))

Comparing the model map with the "true" AGB map, we get a true RMSE of 41.23 t/ha.
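As a quick sanity check of the formula, here is the same computation on small made-up vectors (not the tutorial data):

```r
# Toy example of the RMSE formula with made-up values (not the tutorial data)
z     <- c(100, 150, 200)   # "true" AGB values (t/ha)
z_hat <- c(110, 140, 220)   # model predictions (t/ha)
rmse  <- sqrt(mean((z - z_hat)^2))
round(rmse, 2)              # 14.14
```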

Collect a random sample

Since we know the true RMSE, we can test whether a random sample yields a similar estimate. We start with a random sample of \(n=100\) points.

n=100

p1 = sf::st_sample(np_boundary,size=n)
ggplot()+geom_sf(data=np_boundary,fill=NA)+
  geom_sf(data=p1)

We can now extract the population values and the model values at the sample locations and calculate the RMSE for all sample points.

sample <- terra::extract((agb_pop - agb_model), vect(p1))
names(sample) <- c('ID', 'Diff')
RMSE_est <- sqrt(mean((sample$Diff)^2, na.rm = TRUE))

The random sample estimates the RMSE as 37.91.

But is this an unbiased estimate?

Simulation of many random samples

To check whether our sample-based estimates are unbiased, we will repeat the sampling \(k\) times.
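The code chunk for this simulation is not shown in the rendered document. A minimal sketch, assuming \(k=100\) repetitions with \(n=50\) points per run (consistent with the 'Random, n=50' label and the `df` data frame used further below):

```r
# Sketch (assumed, not the author's original chunk): repeat random sampling
# k times and store the RMSE of each run.
k <- 100
n <- 50
RMSE <- rep(0, k)

# difference raster between the "true" and the modelled AGB map
dif <- agb_pop - agb_model
names(dif) <- 'dif'

for (i in 1:k) {
  p1 <- sf::st_sample(np_boundary, size = n)
  error <- terra::extract(dif, vect(p1))
  RMSE[i] <- sqrt(mean((error$dif)^2, na.rm = TRUE))
}

# collect the results for plotting
df <- data.frame(x = RMSE, y = rep('a', k))
mean(RMSE)
```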


We see that the true RMSE and the mean of the \(k\) simulation runs are almost equal. Thus, the sample-based estimator of the RMSE appears to be unbiased.

But how does the sample size \(n\) affect the accuracy?
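Again the loop itself is not shown in the rendered document. A sketch under the assumption that the design is the same but with \(n=100\), appending the results to the `df` data frame from the previous simulation:

```r
# Sketch (assumed): repeat the simulation with n = 100 points and
# append the results to df from the n = 50 run.
k <- 100
n <- 100
RMSE_2 <- rep(0, k)

dif <- agb_pop - agb_model   # difference raster, as before
names(dif) <- 'dif'

for (i in 1:k) {
  p1 <- sf::st_sample(np_boundary, size = n)
  error <- terra::extract(dif, vect(p1))
  RMSE_2[i] <- sqrt(mean((error$dif)^2, na.rm = TRUE))
}

df_2 <- data.frame(x = RMSE_2, y = rep('b', k))
df <- rbind(df, df_2)
```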


We see that the precision of the estimates increases. By how much does the uncertainty decrease when we increase the sample size from \(n=50\) to \(n=100\)?

sd(RMSE_2)/sd(RMSE)
[1] 0.8971176
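As a rough back-of-the-envelope check (an addition, assuming the familiar \(1/\sqrt{n}\) scaling of standard errors under simple random sampling):

```r
# If the spread of the RMSE estimate scaled exactly with 1/sqrt(n),
# doubling n from 50 to 100 should shrink it by about sqrt(50/100).
expected_ratio <- sqrt(50 / 100)
round(expected_ratio, 2)   # 0.71
```

The observed ratio of about 0.90 is larger; with only \(k=100\) repetitions this may simply reflect simulation noise, and the spread of the RMSE does not follow the \(1/\sqrt{n}\) rule exactly.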

Systematic sampling

Instead of random sampling, systematic designs are more common in forest inventories for the following reasons:

  • Easy to establish and to document
  • Ensures a balanced spatial coverage

p1 <- sf::st_sample(np_boundary, size = n, type = 'regular')

ggplot()+geom_sf(data=np_boundary,fill=NA)+
  geom_sf(data=p1)

k <- 100
n <- 100
RMSE_3 <- rep(0, k)

# difference raster between the "true" and the modelled AGB map
dif <- agb_pop - agb_model
names(dif) <- 'dif'

for (i in 1:k) {
  p1 <- sf::st_sample(np_boundary, size = n, type = 'regular')
  error <- terra::extract(dif, vect(p1))
  RMSE_3[i] <- sqrt(mean((error$dif)^2, na.rm = TRUE))
}
df_3<- data.frame(x=RMSE_3, y=rep('c',k))
df<-rbind(df,df_3)

ggplot(data=df,aes(x=x, fill=y))+
  geom_density(alpha=0.5)+
  scale_fill_discrete(labels=c('Random, n=50', 'Random, n=100','Systematic, n=100'))+
  xlab('RMSE (t/ha)')+geom_vline(xintercept=RMSE_pop,linewidth=1.5,
                          color ='black', linetype='longdash')+
  geom_vline(xintercept=mean(df$x),linewidth=1.5,
                       color ='black')

Evaluating the AGB-Model

Systematic sample to collect reference data for map validation

To validate the map we use a systematic sample grid. In a real-world application we do not know the true population values, so field work would be needed to collect reference data at the selected sample points. In this workshop we assume that the agb_pop map represents the true values without error. Thus, we do not need to go to the field; we can instead sample the data by extracting the true values from the map at the sample locations.

# we will use n=100 sample plots
n=100
p1 = sf::st_sample(np_boundary,size=n,type='regular')

ggplot()+geom_sf(data=np_boundary,fill=NA)+
  geom_sf(data=p1)

At each sample point we extract the predicted and observed AGB value.

obs <- terra::extract(agb_pop,vect(p1))
names(obs)<-c('ID','obs')

pred <- terra::extract(agb_model,vect(p1))
names(pred)<-c('ID','pred')
validation<-data.frame(observed=obs$obs, predicted=pred$pred)

# We need to remove the NA values from this data frame. In real-world
# applications such NA values can occur, for example at inaccessible field plots.

validation<-validation[complete.cases(validation),]

Assessment of the AGB model performance

ggplot(data=validation,aes(x=observed, y=predicted))+
  geom_point(alpha=0.5)+
  xlab('Observed AGB t/ha')+ylab('Predicted AGB t/ha')

Sample RMSE

Again we can use the RMSE to express the typical deviation between observed and predicted AGB.

RMSE_sample = sqrt(sum((validation$observed-validation$predicted)^2)/nrow(validation))

The sample RMSE is 38.89 t/ha. To better compare values between different target variables and models, it can also be expressed as a proportion of the mean predicted value (relative RMSE):

rRMSE = RMSE_sample/mean(validation$predicted)

On average we expect that the AGB estimate of our model has an error of 24.1 %.

Error distribution

But is this RMSE valid for the entire range of the observed values, or do we expect higher errors for higher AGB values?

To see how the model performs across the target value range, we can use the following diagnostic plots.

validation$resid<-validation$observed-validation$predicted

p1<-ggplot(data=validation,aes(x=observed, y=predicted))+
  geom_point(alpha=0.5)+
  xlab('Observed AGB t/ha')+ylab('Predicted AGB t/ha')+
  xlim(0,250)+ylim(0,250)+
  geom_abline(slope=1,intercept = 0)+
  stat_summary(fun.data = mean_cl_normal) +  # requires the Hmisc package
  geom_smooth(method='lm')



p2<-ggplot(data=validation,aes(x=observed, y=resid))+
  geom_point(alpha=0.5)+
  xlab('Observed AGB t/ha')+ylab('Residuals')+
  xlim(0,250)+ylim(-50,+50)+
  geom_hline(yintercept = 0)

p3<-ggplot(data=validation,aes(x=resid))+
  geom_histogram(aes(y = after_stat(density)), fill='grey', binwidth=10)+
  xlab('Residuals (t/ha)')+ylab('Density')+
  xlim(-150,150)+
  stat_function(fun = dnorm, geom="polygon",args = list(mean = mean(validation$resid), sd = sd(validation$resid)),color='blue',alpha=0.4,fill='blue')+
  geom_vline(xintercept=0,color='blue')+
  geom_vline(xintercept=mean(validation$resid),color='red')
p1+p2+p3+plot_layout(ncol=3)