Ground Truth Verification
Definition
Ground truth verification is the process of validating remotely sensed or modeled information by comparing it with observations collected directly in the field. In spatial science it closes the loop between pixels, predictions, and reality. Surveyors record positions, attributes, and photographs at representative sites, then use those reference points to quantify accuracy, bias, and uncertainty in maps or machine learning outputs. Good campaigns are stratified by land cover, elevation, season, and accessibility so that validation reflects the diversity of conditions present in the study area.

Documentation matters as much as measurement. Field protocols define how to label ambiguous sites, how to handle mixed pixels, and how to capture metadata such as time, sky conditions, or sensor settings. The outcome is more than a pass-or-fail score: it is a set of diagnostics that explain where a product performs well, where it struggles, and how future collections could improve.
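As an illustration of that loop, the Python sketch below pairs field labels with map labels at surveyed reference sites and reports overall agreement, the simplest of those diagnostics. The names and data are illustrative, not a specific library API.

    # Minimal sketch of the core comparison step: pairing field
    # observations with map labels and summarizing agreement.
    # All names here are illustrative assumptions.

    def agreement_summary(pairs):
        """pairs: list of (field_label, map_label) tuples collected
        at surveyed reference sites."""
        total = len(pairs)
        correct = sum(1 for field, mapped in pairs if field == mapped)
        return {"n_sites": total, "overall_agreement": correct / total}

    observations = [("forest", "forest"), ("water", "forest"), ("urban", "urban")]
    print(agreement_summary(observations))  # agreement of 2 out of 3 sites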
Application
Agencies verify land cover classifications before publishing national statistics; farmers validate crop maps produced from satellite time series; disaster teams confirm damage assessments derived from aerial imagery; telecoms confirm predicted signal strength along drive routes. In AI workflows, ground truth feeds training and test sets, supports active learning to prioritize new samples, and enables continuous monitoring of model drift after deployment. When done well, it prevents costly decisions based on attractive but misleading maps.
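A hedged sketch of the drift-monitoring idea follows: the function, data structure, and threshold are illustrative assumptions rather than a standard API, but they show how fresh ground truth collected after deployment can flag periods where agreement falls below an expected baseline.

    # Hypothetical drift check: track agreement between new field data
    # and model output per collection period, and flag drops below a
    # baseline minus a tolerance. Values are illustrative.

    def flag_drift(period_accuracies, baseline, tolerance=0.05):
        """period_accuracies: dict mapping period -> accuracy on new field data."""
        return [period for period, acc in period_accuracies.items()
                if acc < baseline - tolerance]

    history = {"2025-Q1": 0.91, "2025-Q2": 0.89, "2025-Q3": 0.82}
    print(flag_drift(history, baseline=0.90))  # ['2025-Q3']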
FAQ
How should sample locations be chosen for an unbiased survey?
Use probability sampling, ideally stratified by relevant gradients such as land cover type and accessibility, then randomize points within strata. This reduces selection bias that occurs when crews only visit easy places along roads.
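A minimal sketch of this approach, assuming candidate locations have already been grouped by stratum (for example, by land cover class); the data and function names are illustrative:

    # Stratified random sampling: draw a fixed number of random points
    # from each stratum so no class or terrain type is skipped.
    import random

    def stratified_sample(candidates_by_stratum, n_per_stratum, seed=42):
        """candidates_by_stratum: dict mapping stratum -> list of (x, y) points."""
        rng = random.Random(seed)  # seeded for a repeatable design
        sample = {}
        for stratum, points in candidates_by_stratum.items():
            k = min(n_per_stratum, len(points))
            sample[stratum] = rng.sample(points, k)  # random within each stratum
        return sample

    strata = {"forest": [(1, 2), (3, 4), (5, 6)], "urban": [(7, 8), (9, 10)]}
    print(stratified_sample(strata, n_per_stratum=2))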
What accuracy metrics are most informative for classification products?
Confusion matrices with user’s and producer’s accuracy show both commission and omission errors. Kappa and F1 provide single-number summaries, while class-specific precision and recall point to targeted improvements.
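The sketch below builds these per-class figures from paired labels using only the Python standard library; the names and sample data are illustrative:

    # Confusion matrix with user's accuracy (1 - commission error) and
    # producer's accuracy (1 - omission error) per class.
    from collections import Counter

    def confusion_metrics(reference, predicted):
        classes = sorted(set(reference) | set(predicted))
        matrix = Counter(zip(reference, predicted))  # (ref, pred) -> count
        metrics = {}
        for c in classes:
            correct = matrix[(c, c)]
            row = sum(matrix[(c, p)] for p in classes)  # total reference c
            col = sum(matrix[(r, c)] for r in classes)  # total predicted c
            metrics[c] = {
                "producers_accuracy": correct / row if row else 0.0,
                "users_accuracy": correct / col if col else 0.0,
            }
        return metrics

    ref = ["forest", "forest", "water", "urban"]
    pred = ["forest", "water", "water", "urban"]
    print(confusion_metrics(ref, pred))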
How do we validate continuous surfaces rather than classes?
Hold out part of the field measurements, compute residuals, and map them. Report mean error, RMSE, and the spatial autocorrelation of residuals to reveal systematic bias such as elevation-dependent drift.
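A sketch of those diagnostics, assuming each held-out site carries coordinates plus observed and predicted values; the inverse-distance weighting used here for Moran's I is one simple choice among many:

    # Residual diagnostics for a continuous surface: mean error, RMSE,
    # and Moran's I of residuals (positive values suggest spatially
    # clustered errors). Data and weighting scheme are illustrative.
    import math

    def residual_diagnostics(points):
        """points: list of (x, y, observed, predicted) tuples from held-out sites."""
        resid = [obs - pred for _, _, obs, pred in points]
        n = len(resid)
        mean_error = sum(resid) / n                      # systematic bias
        rmse = math.sqrt(sum(r * r for r in resid) / n)  # overall magnitude
        num, weight_sum = 0.0, 0.0
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                pi, pj = points[i][:2], points[j][:2]
                w = 1.0 / math.dist(pi, pj)              # inverse-distance weight
                num += w * (resid[i] - mean_error) * (resid[j] - mean_error)
                weight_sum += w
        denom = sum((r - mean_error) ** 2 for r in resid)
        morans_i = (n / weight_sum) * (num / denom)
        return {"mean_error": mean_error, "rmse": rmse, "morans_i": morans_i}

    sites = [(0, 0, 10.2, 9.8), (0, 1, 11.0, 10.1), (5, 5, 8.9, 9.4)]
    print(residual_diagnostics(sites))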
What practical steps keep field truth data reliable over time?
Use repeatable forms, capture photos and GPS tracks, store provenance and versions, and audit entries with peer review. Treat ground truth as a governed dataset, not disposable notes.
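One way to make that governance concrete is a record schema in which every entry carries provenance, versioning, and review state. The field names in this sketch are assumptions for illustration, not a prescribed standard:

    # Hypothetical record schema treating ground truth as a governed
    # dataset: each entry carries provenance, version, and review state.
    from dataclasses import dataclass, field

    @dataclass
    class GroundTruthRecord:
        site_id: str
        label: str
        lat: float
        lon: float
        observed_at: str          # ISO 8601 timestamp
        observer: str             # who collected the entry
        photos: list = field(default_factory=list)  # photo file references
        gps_track: str = ""       # path to the recorded track
        version: int = 1          # incremented on each correction
        reviewed_by: str = ""     # empty until peer review signs off

    record = GroundTruthRecord(
        site_id="S-014", label="wetland", lat=52.1, lon=21.0,
        observed_at="2025-06-03T09:40:00Z", observer="crew-2",
    )
    print(record)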