On Lack of Robustness in Hydrological Model Development Due to Absence of Guidelines for Selecting Calibration and Evaluation Data: Demonstration for Data-Driven Models

Feifei Zheng, Holger R. Maier, Wenyan Wu, Graeme C. Dandy, Hoshin V. Gupta, Tuqiao Zhang

Research output: Contribution to journal › Article › peer-review

74 Scopus citations

Abstract

Hydrological models are used for a wide variety of engineering purposes, including streamflow forecasting and flood-risk estimation. To develop such models, it is common to allocate the available data to calibration and evaluation data subsets. Surprisingly, the issue of how this allocation can affect model evaluation performance has been largely ignored in the research literature. This paper discusses the evaluation performance bias that can arise from how available data are allocated to calibration and evaluation subsets. As a first step to assessing this issue in a statistically rigorous fashion, we present a comprehensive investigation of the influence of data allocation on the development of data-driven artificial neural network (ANN) models of streamflow. Four well-known formal data splitting methods are applied to 754 catchments from Australia and the U.S. to develop 902,483 ANN models. Results clearly show that the choice of the method used for data allocation has a significant impact on model performance, particularly for runoff data that are more highly skewed, highlighting the importance of considering the impact of data splitting when developing hydrological models. The statistical behavior of the data splitting methods investigated is discussed and guidance is offered on the selection of the most appropriate data splitting methods to achieve representative evaluation performance for streamflow data with different statistical properties. Although our results are obtained for data-driven models, they highlight the fact that this issue is likely to have a significant impact on all types of hydrological models, especially conceptual rainfall-runoff models.
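The abstract's central point, that the way available data are allocated to calibration and evaluation subsets can bias evaluation performance, can be illustrated with a small sketch. The example below is not one of the four formal data splitting methods assessed in the paper; it is a minimal, hypothetical illustration assuming a synthetic, positively skewed runoff record, comparing a naive chronological split against a simple magnitude-stratified split.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic skewed "runoff" record with a drier first part and a wetter later
# part, standing in for a real streamflow series (purely illustrative data,
# not from the study catchments).
dry = rng.lognormal(mean=0.0, sigma=1.0, size=600)
wet = rng.lognormal(mean=0.8, sigma=1.2, size=400)
runoff = np.concatenate([dry, wet])

def chronological_split(y, frac_cal=0.6):
    """Allocate the first frac_cal of the record to calibration, the rest to evaluation."""
    n_cal = int(len(y) * frac_cal)
    return y[:n_cal], y[n_cal:]

def stratified_split(y, frac_cal=0.6, n_strata=10):
    """Sort values into magnitude strata and sample proportionally from each,
    so calibration and evaluation subsets have similar distributions."""
    order = np.argsort(y)
    cal_idx, eval_idx = [], []
    for stratum in np.array_split(order, n_strata):
        n_cal = int(round(len(stratum) * frac_cal))
        shuffled = rng.permutation(stratum)
        cal_idx.extend(shuffled[:n_cal])
        eval_idx.extend(shuffled[n_cal:])
    return y[np.array(cal_idx)], y[np.array(eval_idx)]

def describe(name, cal, ev):
    print(f"{name:14s} cal mean={cal.mean():6.2f}  eval mean={ev.mean():6.2f}  "
          f"cal max={cal.max():7.2f}  eval max={ev.max():7.2f}")

describe("chronological", *chronological_split(runoff))
describe("stratified", *stratified_split(runoff))
```

Run as written, the chronological split typically produces calibration and evaluation subsets with noticeably different means and maxima, while the stratified split keeps the two subsets statistically similar. This is the kind of allocation effect that, per the abstract, the paper quantifies at scale across 754 catchments and several formal splitting methods.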

Original language: English (US)
Pages (from-to): 1013-1030
Number of pages: 18
Journal: Water Resources Research
Volume: 54
Issue number: 2
DOIs
State: Published - Feb 2018
Externally published: Yes

Keywords

  • artificial neural networks (ANNs)
  • calibration and evaluation
  • data allocation
  • data splitting
  • hydrological models
  • model evaluation bias

ASJC Scopus subject areas

  • Water Science and Technology
