Probabilistic Mapping and Spatial Pattern Analysis of Grazing Lawns in Southern African Savannahs Using WorldView-3 Imagery and Machine Learning Techniques

Kwame T. Awuah, Paul Aplin, Christopher G. Marston, Ian Powell, Izak P. J. Smit

Research output: Contribution to journal › Article (journal) › peer-review



Savannah grazing lawns are a key food resource for large herbivores such as blue
wildebeest (Connochaetes taurinus), hippopotamus (Hippopotamus amphibius) and white rhino
(Ceratotherium simum), and impact herbivore densities, movement and recruitment rates. They also
exert a strong influence on fire behaviour including frequency, intensity and spread. Thus, variation
in grazing lawn cover can have a profound impact on broader savannah ecosystem dynamics.
However, knowledge of their present cover and distribution is limited. Importantly, we lack a
robust, broad-scale approach for detecting and monitoring grazing lawns, which is critical to
enhancing understanding of the ecology of these vital grassland systems. We selected two sites in
the Lower Sabie and Satara regions of Kruger National Park, South Africa, with mesic and semiarid
conditions, respectively. Using spectral and texture features derived from WorldView-3 imagery,
we (i) parameterised and assessed the quality of Random Forest (RF), Support Vector Machines (SVM),
Classification and Regression Trees (CART) and Multilayer Perceptron (MLP) models for general
discrimination of plant functional types (PFTs) within a sub-area of the Lower Sabie landscape, and (ii)
compared model performance for probabilistic mapping of grazing lawns in the broader Lower Sabie
and Satara landscapes. Further, we used spatial metrics to analyse spatial patterns in grazing lawn
distribution in both landscapes along a gradient of distance from waterbodies. All machine learning
models achieved high F-scores (F1) and overall accuracy (OA) scores in general savannah PFT
classification, with RF (F1 = 95.73 ± 0.004%, OA = 94.16 ± 0.004%), SVM (F1 = 95.64 ± 0.002%,
OA = 94.02 ± 0.002%) and MLP (F1 = 95.71 ± 0.003%, OA = 94.27 ± 0.003%) forming a cluster
of the better performing models and marginally outperforming CART (F1 = 92.74 ± 0.006%,
OA = 90.93 ± 0.003%). Grazing lawn detection accuracy followed a similar trend within the Lower
Sabie landscape, with RF, SVM, MLP and CART achieving F-scores of 0.89, 0.93, 0.94 and 0.81,
respectively. Transferring models to the Satara landscape, however, resulted in lower, though
still high, grazing lawn detection accuracies across models (RF = 0.87, SVM = 0.88, MLP = 0.85
and CART = 0.75). Results from spatial pattern analysis revealed a relatively higher proportion of
grazing lawn cover under semiarid savannah conditions (Satara) compared to the mesic savannah
landscape (Lower Sabie). Additionally, the results show a strong negative correlation between grazing
lawn spatial structure (fractional cover, patch size and connectivity) and distance from waterbodies,
with larger, more contiguous grazing lawn patches occurring in close proximity to waterbodies in both
landscapes. The proposed machine learning approach provides a novel and robust workflow for
accurate and consistent landscape-scale monitoring of grazing lawns, while our findings and research
outputs provide timely information critical for understanding habitat heterogeneity in southern
African savannahs.
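The model-comparison step described above (fitting RF, SVM, CART and MLP classifiers and scoring them by F1 and overall accuracy) can be sketched as follows. This is a minimal illustration using scikit-learn, not the authors' actual pipeline: the feature matrix here is synthetic, standing in for the WorldView-3 spectral and texture features, and the four classes stand in for plant functional types (PFTs); all parameter values are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score, accuracy_score

# Synthetic "pixels": 16 features standing in for spectral bands plus
# texture metrics, 4 classes standing in for PFTs. Real inputs would be
# per-pixel features extracted from WorldView-3 imagery.
X, y = make_classification(n_samples=2000, n_features=16, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# The four model families compared in the study; hyperparameters here are
# placeholder choices, not the paper's tuned settings.
models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                         random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Weighted F1 and overall accuracy, the two metrics reported above.
    scores[name] = (f1_score(y_test, pred, average="weighted"),
                    accuracy_score(y_test, pred))

for name, (f1, oa) in scores.items():
    print(f"{name}: F1 = {f1:.3f}, OA = {oa:.3f}")
```

Probabilistic mapping, as in the study's second step, would then use each fitted model's class-membership probabilities (e.g. `model.predict_proba(X_test)`) for the grazing lawn class rather than hard labels.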
Original language: English
Article number: 3357
Pages (from-to): 1-37
Number of pages: 37
Journal: Remote Sensing
Issue number: 20
Early online date: 15 Oct 2020
Publication status: Published - 15 Oct 2020


  • African savannah
  • grazing lawns
  • machine learning
  • WorldView-3
  • Support Vector Machines
  • Random Forest
  • Multilayer Perceptron
  • decision trees
  • spatial analysis

