Modelling tool user interfaces
The tables below compare the main user-interface features of the selected modelling tools that relate to their ease of use. These include approximate comparisons of typical model run times and the computing power needed to achieve them, as well as how easy it is to export and view various model outputs and to test different parameter values for sensitivity analyses and/or calibration.
The combination of how long it takes to set up a model (including preparing input data in the required format, setting up the structure, and entering parameter values), how long the model takes to run, how long it takes to access the model outputs of interest, and how long it takes to test and refine the model determines what can be achieved in the time available for a modelling project. Some modelling tools run very quickly but take a relatively long time to set up and lack an efficient way to change and test multiple parameter value options, which makes calibration a time-consuming, manual process. Other tools may take a long time to run, but can be set up to do a number of parameter-testing runs, and even scenario runs, at once, allowing the modeller to attend to other work in the meantime (although they may have to do so on another computer if the model requires a lot of computing power!).
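For tools that lack built-in batch-run or auto-calibration facilities, this kind of parameter testing can sometimes be scripted externally around a command-line model run. The sketch below is a minimal, hypothetical illustration only: the model executable, file names, parameter keyword, and output layout are placeholders, not taken from any of the tools compared here. It simply loops over candidate values of one parameter, runs the model for each, and records a goodness-of-fit score against observed flows.

```python
# Minimal sketch of an external batch parameter-testing loop.
# All names (model executable, file paths, parameter keyword) are
# hypothetical placeholders, not specific to any tool in these tables.
import subprocess
from pathlib import Path

import pandas as pd


def nash_sutcliffe(sim: pd.Series, obs: pd.Series) -> float:
    """Nash-Sutcliffe efficiency of simulated vs observed flows."""
    return 1 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()


obs = pd.read_csv("observed_flow.csv", index_col=0, parse_dates=True)["flow"]

results = []
for k in [0.2, 0.4, 0.6, 0.8]:  # candidate values of a (hypothetical) storage coefficient
    # Write a small parameter file that the hypothetical model reads at start-up.
    Path("params.txt").write_text(f"STORAGE_COEFF {k}\n")
    subprocess.run(["./my_model", "--params", "params.txt", "--out", "sim_flow.csv"],
                   check=True)
    sim = pd.read_csv("sim_flow.csv", index_col=0, parse_dates=True)["flow"]
    results.append({"storage_coeff": k, "NSE": nash_sutcliffe(sim, obs)})

print(pd.DataFrame(results).sort_values("NSE", ascending=False))
```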
Interface comparison overview
Interface characteristic | WRSM-Pitman | SPATSIM-Pitman | ACRU4 | SWAT2012 | MIKE-SHE |
---|---|---|---|---|---|
Graphical user interface (vs code prompt) | yes | yes | yes | yes | yes |
Catchment map display (visualise linkages) | no | yes | no | yes | yes |
Model run times | | | | | |
Estimated model run time for a 30 year run, ~300 km^2 catchment (Note: will depend on model set-up complexity & computing power!) | seconds to minutes | seconds to minutes | seconds to minutes | tens of minutes | hours |
Computing resources needed | | | | | |
Comparative rating of computing power needed to achieve workable run times | light | light | light | medium | intensive (need good GPU) |
Model set-up ease & efficiency | | | | | |
Automated creation of model units & connections from map inputs (vs fully manual creation) | no | no | no | yes | yes |
Enter and change parameter values for batches of model units (e.g., all HRUs of a cover type) | (limited) | yes | no | yes | yes |
In-built database of suggested parameter values (e.g., for common vegetation types, soil types, etc.) | no | no | yes | yes | no |
User can build own parameter databases for use across multiple models | no | (limited) | no | yes | yes |
Model set-up transparency (i.e., is it very obvious what the model is doing/assuming?) | | | | | |
Interface makes the user interact with every component & parameter entry option during model set-up (vs having default parameter values pre-entered & not forcing the user to view them) | yes | yes | yes | no | yes |
Tool checks for connection errors | (limited) | yes | (limited) | yes | yes |
Batch runs & calibration tools | | | | | |
Facility for batch runs, parameter sensitivity analyses, uncertainty analyses & auto-calibration | no | yes | no | yes | yes |
Accessing model output | | | | | |
Output viewer tool for streamflow | yes | yes | yes | yes | yes |
Output viewer tool for water balance fluxes and stores | (limited) | yes | no | (limited) | yes |
All water balance components calculated by the model can be exported | no | no | yes | yes | yes |
Batch export of water balance fluxes for the model's basic spatial units | no | yes | yes | yes | yes |
Automated extraction of water balance fluxes for different spatial scales (e.g., by cover class area, by subcatchment, full catchment) | no | no | no | (limited) | yes |
Formats of input and output data
The table below gives some basic information about the file formats used for model inputs and outputs across the different modelling tools, to give a general impression of what is required to work with them. This is a very rough overview: one has to work with user manuals, tutorials, and/or pre-existing demonstration models and data to understand the various formatting requirements and file types used across the inputs and outputs of a specific software tool.
For large or complex model set-ups that will have many different inputs (e.g., different input rainfall timeseries for several points across the modelled area), it is highly recommended to use coding tools like R or Python to prepare the input files, as it is time-consuming to get many files into the specific format required by the modelling software, and most tools do not have in-built conversion utilities (a minimal scripting sketch is given after the table).
Data type | WRSM-Pitman | SPATSIM-Pitman | ACRU4 | SWAT2012 | MIKE-SHE |
---|---|---|---|---|---|
Timeseries data | Specially formatted text files (special file extensions) | Specially formatted text files (.txt) | Specially formatted ASCII text files (.txt) & .DBF files | Specially formatted text files (.txt) & Access database files | Software-specific .dfs0 file format |
Spatial data | N/A (spatial data is not directly input into the software) | N/A (spatial data is not directly input into the software) | N/A (spatial data is not directly input into the software) | Standard GIS shapefiles and grid/raster files (e.g., GeoTIFF, grid) | Software-specific .dfs2 and .dfs3 file formats in general; a few inputs allow standard shapefiles |
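As a rough illustration of what such scripting involves, the sketch below uses Python to reformat a daily rainfall timeseries from a plain CSV into a fixed-width text layout. The input file, column names, and output layout are assumptions made for the example; the exact format required by each tool must be taken from its user manual or example files.

```python
# Sketch: reformat a daily rainfall CSV into a fixed-width text file.
# The input columns and output layout are hypothetical examples only;
# check the target tool's manual for its actual format rules.
import pandas as pd

# Expect a CSV with columns: date (YYYY-MM-DD), rain_mm
rain = pd.read_csv("station_A_rainfall.csv", parse_dates=["date"])

with open("station_A_rainfall.txt", "w") as f:
    f.write("STATION_A  DAILY RAINFALL (mm)\n")  # header line
    for _, row in rain.iterrows():
        # e.g. "19900101    12.40" -> 8-character date, right-aligned value
        f.write(f"{row['date']:%Y%m%d} {row['rain_mm']:>8.2f}\n")
```

The same approach (read once, loop, write with a format string) can be repeated over many station files, which is where scripting saves the most time compared with manual reformatting.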
User impressions of 'ease-of-use' (modeller survey)
A brief survey for hydrological modellers was distributed via the South African Hydrology Society (SAHS) as part of the model intercomparison project (2019-2021). Participants were asked to rate the ease-of-use of the software interface for any modelling tools they were familiar with on a scale of 1-5, where 1 = poor, 3 = satisfactory, and 5 = excellent.
There was a very wide range of scores assigned for each tool across the respondents, showing that different people experience the tools differently!
Both the average and the range of scores assigned are presented below.
Survey data | WRSM-Pitman | SPATSIM-Pitman | ACRU4 | SWAT2012 | MIKE-SHE |
---|---|---|---|---|---|
Number of users answering survey | 13 | 14 | 19 | 9 | 8 |
Average ease-of-use score (1 = poor to 5 = excellent) | 3.8 | 3.4 | 3.4 | 3.9 | 3.0 |
Range of scores assigned (1 = poor to 5 = excellent) | 2 - 5 | 2 - 5 | 1 - 5 | 3 - 5 | 1 - 5 |
User ratings across tools
In 2021, we surveyed the South African hydrological modelling community about their modelling background and experience level, which tools they used, and their perceptions of those tools. Specifically, we asked them to rate the ease-of-use of the user interface, the documentation, and the support for each modelling tool on a scale of 1-5, where 1 is poor, 3 is satisfactory, and 5 is excellent. By 31 May 2021 we had received 45 responses, and we summarise here the results for any modelling tool that was reviewed by more than two people (i.e., sample size greater than 2). If you are choosing a modelling tool for your project, this table, together with those on capabilities and specific use cases, may help you decide which to select.
Modelling tool | Interface | Documentation | Support | Sample Size |
---|---|---|---|---|
ACRU | 3.4 | 3.6 | 3.9 | 19 |
WRSM-Pitman | 3.6 | 3.5 | 3.5 | 14 |
SPATSIM-Pitman | 3.3 | 3.3 | 3.5 | 11 |
SWAT | 3.6 | 3.9 | 3.8 | 9 |
MIKE-SHE | 3.0 | 2.1 | 2.3 | 7 |
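A summary like the one above can be reproduced from the raw survey responses with a few lines of scripting. The sketch below is a hypothetical example (the file name and column names are assumptions, not the actual survey data structure) of averaging the 1-5 ratings per tool and dropping tools with a sample size of two or fewer.

```python
# Sketch: summarise 1-5 ratings per tool from raw survey responses.
# File name and column names are assumed for illustration.
import pandas as pd

# One row per respondent-tool pair: tool, interface, documentation, support
responses = pd.read_csv("survey_responses.csv")

summary = (
    responses.groupby("tool")
    .agg(Interface=("interface", "mean"),
         Documentation=("documentation", "mean"),
         Support=("support", "mean"),
         Sample_Size=("tool", "size"))
    .round(1)
)

# Keep only tools reviewed by more than two respondents
print(summary[summary["Sample_Size"] > 2])
```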