Verification
Two types of errors may be distinguished: modeling errors and numerical errors. Modeling errors arise from not solving the right equations. Numerical errors result from not solving the equations right. The assessment of modeling errors is validation, whereas the assessment of numerical errors is called verification… Validation makes sense only after verification, otherwise agreement between measured and computed results may well be fortuitous.
P. Wesseling (2001), Principles of Computational Fluid Dynamics, pp. 560–561 [130]
Verification is the essentially mathematical task of checking that the predictions of the numerical code are close to the predictions of a continuum model, the one which the numerical code claims to approximate. It is a crucial task for a code as complicated as PISM. In verification there is no comparison between model output and observations of nature. Instead, one compares exact solutions of the continuum model, in circumstances in which they are available, to their numerical approximations.
Reference [131] gives a broad discussion of verification and validation in computational fluid dynamics. See [58] and [16] for discussion of verification issues for the isothermal and thermomechanically coupled shallow ice approximation (SIA), respectively, and for exact solutions to these models, and [10], [21] for verification using an exact solution to the SSA equations for ice streams.
In PISM there is a separate executable pismv which is used for SIA-related verification, and there are additional scripts for SSA-related verification. The source codes which are verified by pismv are, however, exactly the same source files as are run by the normal PISM executable pismr. In technical terms, pismv runs a derived class of the PISM base class.
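For example, a coarse-grid verification run of Test G looks like the following; this is the same command as the first run of the refinement study later in this section, repeated here only to show the general shape of a pismv invocation:

pismv -test G -max_dt 10.0 -y 25000 -Mx 61 -My 61 -Mz 61 -z_spacing equal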
Table 33. Exact solutions currently available in PISM for verification.

Test | Continuum model tested | Reference | Comments
---|---|---|---
A | isothermal SIA, steady, flat bed, constant accumulation | [58] |
B | isothermal SIA, flat bed, zero accumulation | | similarity solution
C | isothermal SIA, flat bed, growing accumulation | [58] | similarity solution
D | isothermal SIA, flat bed, oscillating accumulation | [58] | uses compensatory accumulation
E | isothermal SIA; as A, but with sliding in a sector | [58] | uses compensatory accumulation
F | thermomechanically coupled SIA (mass and energy conservation), steady, flat bed | | uses compensatory accumulation and heating
G | thermomechanically coupled SIA; as F but with oscillating accumulation | | ditto
H | bed deformation coupled with isothermal SIA | [32] | joined similarity solution
I | stream velocity computation using SSA (plastic till) | |
J | shelf velocity computation using SSA | (source code) |
K | pure conduction in ice and bedrock | [132] |
L | isothermal SIA, steady, non-flat bed | (source code) | numerical ODE solution
Table 34. Example invocations, running each verification test on a coarse grid.

Test | Example invocation
---|---
A |
B |
C |
D |
E |
F |
G |
H |
I |
J |
K |
L |
Table 35. Verification-related options of the pismv executable.

Option | Description
---|---
-test | Choose verification test by single character name; see Table 33.
 | Do not report errors at the end of a verification run.
 | Only evaluate the exact solution; no numerical approximation at all.
 | Save error report to a netCDF file.
 | Append to a report file.
Table 33 summarizes the many exact solutions currently available in PISM. Most of these exact solutions are solutions of free boundary problems for partial differential equations; only Tests A, E, J, K are fixed boundary value problems.
Table 34 shows how to run each of them on a coarse grid. Note that tests I and J require the special executables ssa_testi and ssa_testj, which are built when the configuration flag Pism_BUILD_EXTRA_EXECS is set to ON. Table 35 gives the special verification-related options of the pismv executable.
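With a CMake-based build of PISM, that flag can be set at configure time. The commands below are a sketch only: the out-of-source build directory and the Makefile generator are assumptions, and only the flag name Pism_BUILD_EXTRA_EXECS comes from this manual.

cmake -DPism_BUILD_EXTRA_EXECS=ON ..   # ".." points at the PISM source tree (assumed layout)
make                                   # rebuild so that ssa_testi and ssa_testj are built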
Numerical errors are not, however, the dominant reasons why ice sheet models give imperfect results. The largest sources of errors include those from using the wrong (e.g. over-simplified or incorrectly-parameterized) continuum model, and from observational or pre-processing errors present in input data. Our focus here on numerical errors has a model-maintenance goal. It is easier to maintain code by quantitatively confirming that it produces small errors in cases where those can be measured, rather than “eyeballing” results to see that they are “right” according to human judgment.
The goal of verification is not generally to see that the error is zero at any particular resolution, or even to show that the error is small in a predetermined absolute sense. Rather the goals are

- to see that the error is decreasing,
- to measure the rate at which it decreases, and
- to develop a sense of the magnitude of numerical error before doing realistic ice sheet model runs.
Knowing the error decay rate may give a prediction of how fine a grid is necessary to achieve a desired smallness for the numerical error.
Therefore one must “go down” a grid refinement “path” and measure the numerical error for each grid [131]. The refinement path is defined by a sequence of spatial grid cell sizes which decrease toward the refinement limit of zero size [74]. In PISM the timestep \(\Delta t\) is determined adaptively by a stability criterion (see section Understanding adaptive time-stepping). In PISM one specifies the number of grid points, and thus the grid cell sizes, because the overall dimensions of the computational box are normally fixed; see section Computational box. By “measuring the error for each grid” we mean computing a norm (or norms) of the difference between the numerical solution and the exact solution.
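For example, if the measured error norm behaves like \(E(h) \approx C\,h^{q}\) as the grid cell size \(h\) decreases along the refinement path, then errors from two grids give an estimate of the rate \(q\), and that rate in turn predicts the spacing needed to reach a target error \(E_{*}\). This is a generic back-of-the-envelope estimate; the symbols \(E\), \(C\), \(h\), \(q\) are generic notation, not PISM quantities:

\[
q \approx \frac{\ln\bigl(E(h_1)/E(h_2)\bigr)}{\ln\bigl(h_1/h_2\bigr)},
\qquad
h_{*} \approx h_1 \left(\frac{E_{*}}{E(h_1)}\right)^{1/q}.
\]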
As an example of a grid refinement path, in tests of the thermomechanically-coupled SIA model one refines in three dimensions; the following runs produced Figures 13, 14, and 15 of [16]:
pismv -test G -max_dt 10.0 -y 25000 -Mx 61 -My 61 -Mz 61 -z_spacing equal
pismv -test G -max_dt 10.0 -y 25000 -Mx 91 -My 91 -Mz 91 -z_spacing equal
pismv -test G -max_dt 10.0 -y 25000 -Mx 121 -My 121 -Mz 121 -z_spacing equal
pismv -test G -max_dt 10.0 -y 25000 -Mx 181 -My 181 -Mz 181 -z_spacing equal
pismv -test G -max_dt 10.0 -y 25000 -Mx 241 -My 241 -Mz 241 -z_spacing equal
pismv -test G -max_dt 10.0 -y 25000 -Mx 361 -My 361 -Mz 361 -z_spacing equal
The last two runs require a supercomputer! In fact the \(361\times 361\times 361\) run involves more than \(100\) million unknowns, updated at each of millions of time steps. Appropriate use of parallelism (mpiexec -n NN pismv) and of the -skip modification to adaptive time-stepping accelerates such fine-grid runs; see section Understanding adaptive time-stepping.
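For instance, one of the intermediate runs above could be distributed over four processes as follows; the process count is an arbitrary illustration, while the grid and other options are copied verbatim from the run list above:

mpiexec -n 4 pismv -test G -max_dt 10.0 -y 25000 -Mx 121 -My 121 -Mz 121 -z_spacing equal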
Fig. 35 through Fig. 39 in section Sample convergence plots show a sampling of the results of verifying PISM using the tests described above. These figures were produced automatically using the Python scripts test/vfnow.py and test/vnreport.py. See section Utility and test scripts.
These figures do not show outstanding rates of convergence, relative to textbook partial differential equation examples. For the errors in tests B and G, see the discussion of free margin shape in [58]. For the errors in test I, the exact continuum solution is not very smooth at the free boundary [21].