Model inter-comparison study
This wiki was initially created as part of the Water Research Commission (WRC) project, Critical catchment model inter-comparison and model use guidance development (K5-2927), which ran from 2019 to 2021. The project compared the model structure options and capabilities of several commonly used modelling software tools, and also applied these tools to four case study catchments.
Abstract
Catchment hydrological modelling has become a central component of water resources management in South Africa. As such, there is a need for continuous research and capacity building to enable the water sector to take advantage of, and make wise use of, the diversity of modelling strategies and tools available. To support this, the Critical catchment hydrological model inter-comparison and model use guidance development (K5-2927) project explored the structural options across several commonly used tools: WRSM-Pitman, SPATSIM-Pitman, ACRU4, SWAT2012, and MIKE-SHE. This included an initial assessment of the potential impacts of tool differences in common use cases, carried out by applying all of the tools to a variety of case studies. A survey of the modelling community confirmed which tools are being used and examined user experiences. The project highlighted issues needing attention, particularly the use of uncertainty and realism assessments regardless of the tools used.
The tools showed high-level similarities in basic capabilities; however, there were many notable differences in model spatial units and subsurface layers, flows between layers and units, the scale at which climate inputs are specified, the algorithms that calculate flows, and the time steps used. These differences determine what can be explicitly modelled in a given tool. Tool interfaces also differ significantly, influencing set-up efficiency and transparency, the ease with which the range of reasonable parameter values can be explored for calibration and uncertainty analyses, and the ease with which different water balance outputs can be obtained to assess model realism. Each tool had some advantage over the others. To assist users in weighing what is important for their use cases, tool capabilities and options were summarised side-by-side on a modelling guidance ‘wiki’ website created in the project.

A key finding of the case study modelling exercise was that, while models reaching acceptable performance against observed streamflow data could be built with any of the tools, these models could be predicting quite different balances of processes. This was evident when extracting detailed water balance outputs. Differences in baseline process representation resulted in differing predicted yield changes when the models were applied to scenarios (by up to 20% in the most extreme example). This highlights the need to assess model water balances, and to find ways to address the challenges in doing so: some tool interfaces make this very time-consuming, and there is often a lack of auxiliary data for comparison.
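As an illustration of what such a water balance assessment can involve, the sketch below compares annual water balance components, expressed as fractions of precipitation, across two calibrated models of the same catchment. This is a minimal example only: the file names and column names are hypothetical stand-ins for each tool's exported outputs, not the project's actual workflow or any tool's native format.

```python
# A minimal sketch (not project code) of comparing annual water balance
# components across models. The CSV files and column names below are
# hypothetical; real tools export these components in differing formats.
import pandas as pd

COMPONENTS = ["streamflow", "evapotranspiration", "recharge", "storage_change"]

def balance_fractions(csv_path):
    """Read a model's annual water balance output and express each
    component as a fraction of total precipitation."""
    df = pd.read_csv(csv_path)
    total_precip = df["precipitation"].sum()
    return {c: df[c].sum() / total_precip for c in COMPONENTS}

# Two calibrated models of the same catchment can match observed
# streamflow equally well yet partition the remaining water quite
# differently, e.g. between evapotranspiration and recharge.
for model_output in ["acru4_balance.csv", "swat2012_balance.csv"]:
    print(model_output, balance_fractions(model_output))
```

Comparing components as fractions of precipitation makes models with different units or catchment areas directly comparable, and makes divergent process partitioning visible even when simulated streamflow totals agree.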
Follow-on project on modelling uncertainty and the "model-a-thon" activity
The case study modelling helped the project team gain a better operational understanding of the tools and their differences. It also highlighted that there can be many ways of setting up a model of a given catchment: even within a single modelling tool, and with a given set of input and calibration data, the modeller must make many subjective decisions. The case studies also demonstrated that models achieving comparable performance against observed streamflow can nonetheless represent the underlying processes quite differently.