Model inter-comparison study

From Hydromodel SA Wiki
Revision as of 12:15, 24 November 2023 by Julia Glenday

This wiki was initially created as part of the Water Research Commission (WRC) project, Critical catchment model inter-comparison and model use guidance development (K5-2927), which ran from 2019 to 2021. The project team members are listed here.

This project:

  • compared the model structure options and capabilities of different commonly used modelling software tools
  • modelled four different case study catchments, including scenarios of change, using this set of tools to explore practical implications of their differences


Project Abstract

Catchment hydrological modelling has become a central component of water resources management in South Africa. As such, there is a need for continuous research and capacity building to enable the water sector to take advantage of, and make wise use of, the diversity of modelling strategies and tools available. To support this, the Critical catchment hydrological model inter-comparison and model use guidance development (K5-2927) project explored the structural options across several commonly used tools: WRSM-Pitman, SPATSIM-Pitman, ACRU4, SWAT2012, and MIKE-SHE. This included an initial assessment of the potential impacts of tool differences in common use cases, done through applying all of them to a variety of case studies. A survey of the modelling community confirmed which tools are being used and examined user experiences. The project highlighted issues needing attention, particularly the use of uncertainty and realism assessments regardless of the tools used.

The tools showed high-level similarities in basic capabilities; however, there were many notable differences in model spatial units and subsurface layers, flows between layers and units, the scale at which climate inputs are specified, the algorithms that calculate flows, and the time steps used. These result in differences in what can be explicitly modelled in a given tool. Tool interfaces also differ significantly, influencing set-up efficiency and transparency, the ease with which the range of reasonable parameter values can be explored for calibration and uncertainty analyses, and the ease with which different water balance outputs can be obtained to assess model realism. Each tool had some advantage over the others. To assist users in weighing what is important for their use cases, tool capabilities and options were summarised side-by-side on a modelling guidance ‘wiki’ website created in the project. A key finding of the case study modelling exercise was that, while models reaching acceptable performance against observed streamflow data could be built with any of the tools, these models could be predicting quite different balances of processes. This was evident when extracting detailed water balance outputs. Differences in baseline process representation resulted in differing predicted yield changes when the models were applied to scenarios (by up to 20% in the most extreme example). This highlights the need to assess model water balances and to find ways to address the challenges in doing so: some tool interfaces make this very time-consuming, and there is often a lack of auxiliary data for comparison.

Download full report

Follow-on project on modelling uncertainty and the "model-a-thon" activity

The model inter-comparison study highlighted that there can be many justifiable ways of setting up a model of a given catchment with a given dataset; even within a single modelling tool, the modeller must make many subjective decisions. The case study modelling also demonstrated that models with different structures can be applied to the same catchment area and achieve satisfactory calibration (i.e. recreate observed streamflow reasonably well), but then produce very different predictions of the hydrological impacts of a change scenario. These differences arose because the various models were representing the mix of internal catchment processes, storages and fluxes differently (e.g. canopy interception evaporation, evapotranspiration from soil and groundwater, surface runoff generation, soil water storage and interflow, groundwater storage and outflow). If the speeds of various flow pathways and their connections differ across models, different mixtures of these flows can produce similar baseline streamflow predictions in calibration. They can reach the same answer for different reasons. However, these differences become salient when the models are applied to an alternative scenario, such as a change in land cover or climate, and the predictions across these differently structured models diverge. If we cannot tell which of these model representations of the catchment is more realistic, then the range of predictions across all of these models is an indication of the uncertainty in the prediction of the scenario impact. This component of the uncertainty arises from not knowing the correct model structure. There will also be uncertainty in predictions due to uncertainties in the input data and parameter values used.
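The effect described above can be sketched with a deliberately simplified toy example (this is an illustration only; it is not one of the project's modelling tools, and the numbers and function names are hypothetical). Two annual water-balance "models" partition effective rainfall differently between a fast surface pathway and a slow groundwater pathway, yet produce identical baseline streamflow; a land-cover scenario that suppresses surface runoff generation then yields divergent impact predictions:

```python
def annual_streamflow(precip_mm, et_frac, quick_frac, quick_scale=1.0):
    """Toy annual water balance: streamflow = quickflow + baseflow.

    et_frac:     fraction of rainfall lost to evapotranspiration
    quick_frac:  fraction of effective rainfall routed as fast surface flow
    quick_scale: scenario multiplier on surface runoff generation
                 (e.g. < 1.0 mimics increased infiltration after a
                 land-cover change)
    """
    effective = precip_mm * (1.0 - et_frac)
    quickflow = effective * quick_frac * quick_scale
    baseflow = effective * (1.0 - quick_frac)
    return quickflow + baseflow

precip = 1000.0

# Two structures "calibrated" to the same baseline streamflow:
baseline_a = annual_streamflow(precip, 0.6, 0.75)  # quickflow-dominated
baseline_b = annual_streamflow(precip, 0.6, 0.25)  # baseflow-dominated
assert abs(baseline_a - baseline_b) < 1e-9  # indistinguishable in calibration

# Scenario: land-cover change cuts surface runoff generation by 40%.
scenario_a = annual_streamflow(precip, 0.6, 0.75, quick_scale=0.6)
scenario_b = annual_streamflow(precip, 0.6, 0.25, quick_scale=0.6)
# The quickflow-dominated structure predicts a much larger yield reduction
# than the baseflow-dominated one, despite the identical baselines.
```

Streamflow observations alone cannot distinguish the two structures here; only auxiliary information about the internal flow partition (e.g. baseflow separation or groundwater data) could.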

This led the project team to undertake a follow-on project focused specifically on modelling uncertainty and modeller decision making: "Modelling uncertainty and reliability for water resource assessment in South Africa" (WRC project C2022/2023-00967, 2022-2023). A key activity in this project was a 'model-a-thon', in which participants modelled the same catchment, using the same input data and the same calibration streamflow data, with a modelling tool of their choice. They then modelled this catchment under a given land cover scenario. The activity clearly illustrated that different users of the same tool can make different choices in the modelling process, leading to different predictions. It also demonstrated that applying multiple criteria for model acceptance (targeting both high and low flow matching, plus reality-checks on model process representation) to narrow down the set of satisfactory baseline models notably decreases the uncertainty in scenario impact predictions. This requires additional data about internal catchment processes, such as evapotranspiration or groundwater contributions, that can be used to refine model calibration and selection.
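The screening logic behind this finding can be illustrated with a minimal sketch. All numbers, thresholds, and field names below are hypothetical, invented for illustration; they are not results from the model-a-thon. Each candidate model carries a fit score against observed streamflow (NSE), a low-flow bias, an evapotranspiration share used as a process realism check, and its predicted scenario impact:

```python
# Hypothetical ensemble of calibrated baseline models (illustrative numbers):
#   nse            - Nash-Sutcliffe efficiency vs observed streamflow
#   low_flow_bias  - % bias in low-flow periods
#   et_frac        - modelled evapotranspiration as a fraction of rainfall
#   impact         - predicted % change in yield under the scenario
candidates = [
    {"nse": 0.78, "low_flow_bias": 5,   "et_frac": 0.62, "impact": -12.0},
    {"nse": 0.75, "low_flow_bias": 40,  "et_frac": 0.80, "impact": -28.0},
    {"nse": 0.72, "low_flow_bias": -10, "et_frac": 0.58, "impact": -10.0},
    {"nse": 0.80, "low_flow_bias": 30,  "et_frac": 0.50, "impact": -2.0},
    {"nse": 0.65, "low_flow_bias": 8,   "et_frac": 0.60, "impact": -15.0},
]

def spread(models):
    """Range of scenario impact predictions across an ensemble (% points)."""
    impacts = [m["impact"] for m in models]
    return max(impacts) - min(impacts)

# Stage 1: a single acceptance criterion, overall fit to observed streamflow.
stage1 = [m for m in candidates if m["nse"] >= 0.7]

# Stage 2: add a low-flow criterion and a realism check on the ET share
# (a plausible range of 0.55-0.70 is assumed here purely for illustration).
stage2 = [m for m in stage1
          if abs(m["low_flow_bias"]) <= 15 and 0.55 <= m["et_frac"] <= 0.70]

# The surviving models agree much more closely on the scenario impact:
# spread(stage1) is far wider than spread(stage2).
```

The point of the sketch is the last comparison: models that fit total streamflow equally well can disagree widely on the scenario impact, and the extra criteria, which require auxiliary data such as ET estimates, are what eliminate the structurally implausible ones.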