Modelling tool user interfaces
Latest revision as of 09:44, 4 December 2023
The tables below compare some main features of the user interfaces of the selected modelling tools that relate to their ease of use. These include approximate comparisons of typical model run times and the computing power needed to run them, as well as how easy it is to export and view various model outputs and test different parameter value options for sensitivity analyses and/or calibration. (More specifics about what outputs each tool can produce are covered here.)
What can be achieved in the time available for a modelling project is influenced by the combination of how long it takes to set up a model (including preparing input data in the needed format, setting up the structure, and entering the parameter values), how long the model takes to run, how long it takes to access model outputs of interest, and how long it takes to test and refine the model. Some modelling tools run very quickly but take a relatively long time to set up, and lack an efficient way to change and test multiple parameter value options, which makes calibration a time-consuming, laborious, manual process. Other tools may take a long time to run, but can be set up to do a number of parameter-testing runs and even scenario runs at once, allowing the modeller to attend to other work in the meantime (though they may have to do so on another computer if the model requires a lot of computing power!).
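The manual test-and-refine loop described above can be sketched as a simple batch parameter search. The one-parameter "model", the candidate values, and the observed flows below are all invented for illustration; a real calibration would invoke the actual modelling tool for each run and compare against gauged streamflow.

```python
# Sketch of a manual parameter-testing (calibration) loop. The toy model
# and "observed" series are placeholders, not output from any of the
# tools compared here.

def toy_model(rainfall, runoff_coeff):
    """Stand-in for a model run: simulated flow = coefficient * rainfall."""
    return [runoff_coeff * r for r in rainfall]

def sum_sq_error(sim, obs):
    """Simple objective function: sum of squared differences."""
    return sum((s - o) ** 2 for s, o in zip(sim, obs))

rainfall = [10.0, 0.0, 25.0, 5.0]   # invented forcing data
observed = [4.0, 0.0, 10.0, 2.0]    # invented "gauged" flows

# Run a batch of candidate parameter values and keep the best one.
candidates = [0.1, 0.2, 0.3, 0.4, 0.5]
errors = {c: sum_sq_error(toy_model(rainfall, c), observed) for c in candidates}
best = min(errors, key=errors.get)
print(best)   # 0.4 reproduces the observed series exactly in this toy case
```

Tools with built-in batch-run or auto-calibration facilities (see the table below) automate exactly this kind of loop; for tools without them, a wrapper script along these lines is the usual workaround.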
Ease and efficiency of software use is also influenced by the documentation and support available for the tool. How accessible and user-friendly these are can determine whether users get stuck troubleshooting for long periods or inadvertently model things incorrectly. User impressions of the ease of use of tool interfaces, documentation, and support are provided below. Links to online documentation and support resources for these tools are provided here.
Interface comparison
| Interface characteristic | WRSM-Pitman | SPATSIM-Pitman | ACRU4 | SWAT2012 | MIKE-SHE |
| --- | --- | --- | --- | --- | --- |
| Graphical user interface (vs code prompt) | yes | yes | yes | yes | yes |
| Catchment map display (visualise linkages) | no | yes | no | yes | yes |
| **Model run times** | | | | | |
| Estimated model run time for a 30-year run, ~300 km² catchment (Note: will depend on model set-up complexity & computing power!) | seconds to minutes | seconds to minutes | seconds to minutes | tens of minutes | hours |
| **Computing resources needed** | | | | | |
| Comparative rating of computing power needed to achieve workable run times | light | light | light | medium | intensive (need good GPU) |
| **Model set-up ease & efficiency** | | | | | |
| Automated creation of model units & connections from map inputs (vs fully manual creation) | no | no | no | yes | yes |
| Input parameter values and change values for batches of model units (e.g., all HRUs of a cover type) | (limited) | yes | no | yes | yes |
| In-built database of suggested parameter values (e.g., for common vegetation types, soil types, etc.) | no | no | yes | yes | no |
| User can build own parameter databases for use across multiple models | no | (limited) | no | yes | yes |
| **Model set-up transparency** (i.e., is it very obvious what the model is doing/assuming?) | | | | | |
| Interface makes the user interact with every component & parameter entry option during model set-up (vs having default parameter values pre-entered & not forcing user to view them) | yes | yes | yes | no | yes |
| Tool checks connection errors | (limited) | yes | (limited) | yes | yes |
| **Batch runs & calibration tools** | | | | | |
| Facility for batch runs, parameter sensitivity analyses, uncertainty analyses & auto-calibration | no | yes | no | yes | yes |
| **Accessing model output** | | | | | |
| Output viewer tool for streamflow | yes | yes | yes | yes | yes |
| Output viewer tool for water balance fluxes and stores | (limited) | yes | no | (limited) | yes |
| All water balance components that are calculated by the model can be exported | no | no | yes | yes | yes |
| Batch export of water balance fluxes for model's basic spatial units | no | yes | yes | yes | yes |
| Automated extraction of water balance fluxes for different spatial scales (e.g., by cover class area, by subcatchment, full catchment) | no | no | no | (limited) | yes |
Formats of input and output data
The table below gives some basic information about the file formats used for model inputs and outputs across the different modelling tools, to give a general impression of what is required to work with them. This is a very rough overview: one has to work with user manuals, tutorials, and/or pre-existing demonstration models and data to understand the various formatting requirements and file types used across the inputs and outputs of a specific software tool.
For large or complex model set-ups that will have many different inputs (e.g., different input rainfall timeseries for several different points across the modelled area), it is highly recommended to use coding tools like R or Python to prepare the input files, as it is time-consuming to get many files into the specific format required by the modelling software, and most tools do not have in-built conversion tools.
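As a minimal sketch of this kind of scripted input preparation, the snippet below converts a CSV rainfall timeseries into a fixed-width text layout. The column widths and layout here are invented for illustration only; a real script must follow the exact format specified in the target tool's user manual.

```python
import csv
import io

# Convert CSV rows of (date, rainfall) into fixed-width text lines.
# The widths (10-char date column, 8-char value column, 2 decimals)
# are hypothetical, not the format of any specific modelling tool.
def csv_to_fixed_width(csv_text, date_width=10, value_width=8):
    out_lines = []
    for date, value in csv.reader(io.StringIO(csv_text)):
        out_lines.append(f"{date:<{date_width}}{float(value):>{value_width}.2f}")
    return "\n".join(out_lines)

sample = "1990-01-01,12.5\n1990-01-02,0.0\n"
print(csv_to_fixed_width(sample))
```

Looping such a function over a folder of station files is what makes scripted preparation pay off once a set-up has more than a handful of inputs.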
| Data type | WRSM-Pitman | SPATSIM-Pitman | ACRU4 | SWAT2012 | MIKE-SHE |
| --- | --- | --- | --- | --- | --- |
| Timeseries data | Specially formatted text files (special file extensions) | Specially formatted text files (.txt) | Specially formatted ASCII text files (.txt) & .DBF files | Specially formatted text files (.txt) & Access database files | Software-specific .dfs0 file format |
| Spatial data | N/A (spatial data is not directly input into the software) | N/A (spatial data is not directly input into the software) | N/A (spatial data is not directly input into the software) | Standard GIS shapefile and grid/raster files (geotif, grid) | Software-specific .dfs2 and .dfs3 file formats in general; a few inputs allow standard shapefiles |
User impressions of 'ease-of-use' (modeller survey)
A brief survey for hydrological modellers was distributed via the South African Hydrology Society (SAHS) as part of the model intercomparison project (2019-2021). Participants were asked to rank the ease-of-use of the software user interface, its documentation, and the available support for any modelling tools they were familiar with on a scale of 1-5, in which: 1 = poor, 3 = satisfactory, 5 = excellent
There was a very wide range of scores assigned for each tool across the respondents, with some assigning a poor score while others assigned an excellent score to the same tool, showing that different people experience the tools differently!
Both the average and the range of scores assigned are presented below.
(Links to online documentation and support resources for these tools are provided here)
It should be noted that for the local tools (WRSM, SPATSIM, ACRU), users were generally trained by those developing the tool, or by those closely involved with it, and are often based at institutions with experts in that tool. Support for the local tools usually takes the form of personal interaction. For the international tools (SWAT, MIKE-SHE), some users may have taught themselves with online resources and/or been trained by an expert based elsewhere. As a result, there is likely more direct engagement with, and reliance on, written documentation and support provided through global online help forums and emails to more distant expert users when working with these tools.
| Aspect of tool | | WRSM-Pitman | SPATSIM-Pitman | ACRU4 | SWAT2012 | MIKE-SHE |
| --- | --- | --- | --- | --- | --- | --- |
| Number of users answering survey | | 13 | 14 | 19 | 9 | 8 |
| **Interface** (1 = poor to 5 = excellent) | Average score | 3.8 | 3.4 | 3.4 | 3.9 | 3.0 |
| | Range of scores | 2 - 5 | 2 - 5 | 1 - 5 | 3 - 5 | 1 - 5 |
| **Documentation** (1 = poor to 5 = excellent) | Average score | 3.7 | 3.6 | 3.6 | 3.6 | 2.3 |
| | Range of scores | 2 - 5 | 2 - 5 | 1 - 5 | 3 - 5 | 1 - 5 |
| **Support** (1 = poor to 5 = excellent) | Average score | 3.7 | 3.7 | 3.9 | 3.8 | 2.4 |
| | Range of scores | 1 - 5 | 2 - 5 | 1 - 5 | 2 - 5 | 1 - 5 |
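The averages and ranges in the table summarise sets of individual 1-5 ratings. As a sketch (with made-up responses, not the actual survey data), the two statistics reduce a dispersed set of ratings like so:

```python
from statistics import mean

# Made-up example responses (NOT the actual survey data), illustrating
# how the average and range in the table above summarise 1-5 ratings.
responses = [2, 3, 5, 4, 3, 5, 2, 4]

average = round(mean(responses), 1)
score_range = (min(responses), max(responses))
print(average, score_range)
```

A middling average with a wide range, as seen for several tools above, indicates divided opinion rather than uniformly "satisfactory" experiences.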