Blatter’s model
Unlike the rest of PISM, the Blatter solver uses a geometry-following vertical grid (see Fig. 19) to approximate horizontal components of ice velocity. The number of vertical “levels” in this grid is controlled by stress_balance.blatter.Mz.
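For example, a run using 9 vertical levels might look like this (a sketch: like other PISM configuration parameters, stress_balance.blatter.Mz can be set using a command-line option of the same name):

pism -stress_balance blatter \
-stress_balance.blatter.Mz 9 \
[other options]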
The non-linear system resulting from the discretization of PDEs corresponding to Blatter’s stress balance model is much harder to solve than the one corresponding to the SSA system ([56], [72]) and (at this point) experimentation with preconditioner choices seems inevitable. We use PETSc’s command-line options to control these choices.
Note
The Blatter solver uses the -bp_
command-line option prefix.
Run PISM like this
pism -stress_balance blatter \
[other options] -help | grep "-bp_"
to see the complete list of PETSc options controlling this solver.
The multigrid (MG) preconditioner using semi-coarsening in the vertical direction followed by further (horizontal) coarsening using algebraic multigrid methods appears to be effective [72]. The option combination
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_mg_levels_ksp_type gmres \
-bp_mg_coarse_pc_type gamg
roughly corresponds to this approach (see Practical preconditioner choices for more).
Unlike [72], who used a purely algebraic approach, these options select a combination of geometric and algebraic multigrid preconditioners.
To use a multigrid preconditioner the user has to specify

- the number of MG levels using -bp_pc_mg_levels N,
- the coarsening factor by setting stress_balance.blatter.coarsening_factor, and
- the vertical grid size (stress_balance.blatter.Mz).
The values of these parameters have to be compatible. Specifically, \(M_z\) has to have the form

\[M_z = A\cdot C^{N - 1} + 1 \qquad (7)\]

for some positive integer \(A\), where \(C\) is the coarsening factor and \(N\) is the number of multigrid levels.
Note
PISM stops with an error message if (7) is not satisfied.
To set up a multigrid preconditioner PISM needs to build a hierarchy of vertical grids[1] with \(N\) levels: the coarsest grid has \(A + 1\) points and each subsequent (finer) grid is obtained by multiplying the number of grid spaces by \(C\). Overall, the number of points in the \(k\)-th grid of the hierarchy (counting from the coarsest) is

\[M_z^k = A\cdot C^{k - 1} + 1.\]

This process explains the compatibility condition (7): the number of spaces in every vertical grid in the hierarchy except for the coarsest one has to be divisible by \(C\).
| Coarsening factor \(C\) | Possible sizes of vertical grids in a hierarchy |
|---|---|
| 2 | 2, 3, 5, 9, 17, 33, 65, 129, 257, 513, 1025, … |
| 3 | 2, 4, 10, 28, 82, 244, 730, … |
| 4 | 2, 5, 17, 65, 257, 1025, … |
| 5 | 2, 6, 26, 126, 626, 3126, … |
| 6 | 2, 7, 37, 217, 1297, … |
| 7 | 2, 8, 50, 344, 2402, … |
| 8 | 2, 9, 65, 513, 4097, … |
By default \(C = 2\) and \(M_z = 5\) (see the list of parameters below). For example, we can set up a solver using a vertical grid with 17 levels (\(M_z = 17\)) and 5 multigrid levels: \(17 = 1\cdot 2^{5-1} + 1\), so condition (7) is satisfied and the resulting hierarchy consists of grids with 17, 9, 5, 3, and 2 points. One possible set of options for this setup is sketched below.
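A minimal sketch of this setup (again relying on the convention that configuration parameters can be set using command-line options of the same name):

pism -stress_balance blatter \
-stress_balance.blatter.Mz 17 \
-stress_balance.blatter.coarsening_factor 2 \
-bp_pc_type mg \
-bp_pc_mg_levels 5 \
[other options]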
The coarsest grid in a hierarchy should be as small as possible (corresponding to \(A = 1\), i.e. a two-point grid).
Surface gradient computation
Some synthetic geometry experiments with grounded margins show “checkerboard” artifacts in computed ice velocity near steep margins. A similar issue and an attempt to address it are described in [73].
This implementation takes a different approach: instead of using an “upwinded” finite difference approximation of the surface gradient it supports the \(\eta\) transformation of ice thickness, which improves the accuracy of the surface gradient approximation near grounded margins (see [58] for details). Set stress_balance.blatter.use_eta_transform to enable it.
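For example (a sketch using the command-line equivalent of this configuration parameter):

pism -stress_balance blatter \
-stress_balance.blatter.use_eta_transform true \
[other options]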
Adaptive time stepping
PISM’s explicit in time mass continuity code is conditionally stable. When used with the SSA + SIA hybrid, the maximum allowed time step is computed using a combination of the CFL criterion [74] and the maximum diffusivity of the SIA flow [10]. This time step restriction does not disappear when the same mass continuity code is used with a stress balance model that does not explicitly compute “advective” and “diffusive” parts of the flow. We need a work-around.
Note
Very little is known about stability of explicit time stepping methods of the mass continuity equation coupled to a “generic” stress balance model.
We don’t have a rigorous justification for the approach described below.
When this BP solver is coupled to PISM, the vertically-averaged ice velocity is used in place of the “advective” (“sliding”) velocity from the SSA. As a result, the CFL-based time step restriction is applied by existing PISM code.
However, it is almost always the case that the diffusivity-driven time step restriction is more severe and so we need a replacement: CFL alone does not appear to be sufficient for stability.
We compute an estimate of the “SIA-like” maximum diffusivity by observing that for the SIA the vertically-averaged ice flux \(Q\) satisfies

\[Q = \bar{\boldsymbol{u}}\, H = -D\, \nabla s.\]

We solve this for the diffusivity \(D\),

\[D = \frac{H\, |\bar{\boldsymbol{u}}|}{|\nabla s|},\]

and use the maximum of this quantity to determine the maximum allowed time step using (5).
Note
Other models supporting this stress balance model and using an explicit in time geometry evolution method ([73], [75]) report that the CFL condition appears to be sufficient in practice.
Given the lack of a theory describing the maximum time step necessary for stability it may make sense to experiment with increasing time_stepping.adaptive_ratio. Setting it to a very large value would completely disable the diffusivity-based time step restriction.
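For example, the following sketch sets the ratio to an arbitrarily large value (1e6 is an illustration, not a recommendation):

pism -stress_balance blatter \
-time_stepping.adaptive_ratio 1e6 \
[other options]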
Note
The “time step skipping mechanism” enabled using time_stepping.skip.enabled (see Understanding adaptive time-stepping) has a different effect when the Blatter stress balance model is used: the full 3D ice velocity is updated during every sub-step and only the energy balance and age models take the “long” time step.
Since the Blatter solver is likely to dominate the computational cost, setting time_stepping.skip.enabled to “true” is not likely to be beneficial.
Practical preconditioner choices
The option combination
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_mg_levels_ksp_type gmres \
-bp_mg_coarse_pc_type gamg
sets up a multigrid preconditioner that is known to be effective, but it is not the only option and most likely not the best one.
Our experiments suggest that
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_snes_ksp_ew \
-bp_snes_ksp_ew_version 3 \
-bp_mg_levels_ksp_type richardson \
-bp_mg_levels_pc_type sor \
-bp_mg_coarse_ksp_type gmres \
-bp_mg_coarse_pc_type hypre \
-bp_mg_coarse_pc_hypre_type boomeramg
may work better[2], but requires PETSc built with hypre.
Here -bp_snes_ksp_ew -bp_snes_ksp_ew_version 3
enables Luis Chacón’s variant of the
Eisenstat-Walker [76] method of adjusting linear solver tolerances to
avoid oversolving and -bp_mg_coarse_pc_type hypre -bp_mg_coarse_pc_hypre_type
boomeramg
selects the BoomerAMG algebraic MG preconditioner from hypre for the coarse
MG level.
Note
The Eisenstat-Walker adjustment of linear solver tolerances saves time when a low-accuracy estimate of the Newton step is sufficient but may lead to solver failures, especially when the initial guess is of poor quality. In an attempt to reduce computational costs while maintaining robustness PISM disables -bp_snes_ksp_ew if the initial guess is zero (beginning of a simulation) or if the solver fails with -bp_snes_ksp_ew.
Some simulations may benefit from using a direct solver on the coarse MG level. For example, the following would use MUMPS on the coarse grid:
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_snes_ksp_ew \
-bp_snes_ksp_ew_version 3 \
-bp_mg_levels_ksp_type richardson \
-bp_mg_levels_pc_type sor \
-bp_mg_coarse_ksp_type preonly \
-bp_mg_coarse_pc_type lu
if PETSc is built with MUMPS.
Note
Parallel direct solvers such as MUMPS really benefit from using optimized BLAS and LAPACK libraries.
Please see section 3.5.3 of [1] for instructions. At the time of writing
--download-f2cblaslapack --download-blis
is recommended as a portable high-performance option. However, it makes sense to try other freely-available libraries (Intel MKL, OpenBLAS) as well.
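For example, a PETSc configure invocation combining these flags with the packages mentioned elsewhere in this section might look like this (a sketch: only --download-f2cblaslapack --download-blis comes from the recommendation above; the hypre, MUMPS, and ScaLAPACK flags are standard PETSc configure options included for illustration):

./configure \
--download-f2cblaslapack --download-blis \
--download-hypre \
--download-mumps --download-scalapack \
[other configure options]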
Note, though, that the multigrid preconditioner, even if it is effective in terms of reducing the number of Krylov iterations, may not be the cheapest one [77]: there is a trade-off between the number of iterations and the cost of a single iteration. Other preconditioner options may be worth considering as well.
In some cases node ordering and the way the domain is split among processes in a parallel run may affect solver performance (see [56], [77], [72]). These references mention staggering the unknowns so that the two horizontal velocity components at a node correspond to adjacent equations in the system.
In addition to this, [77] mention that parallel domain distribution partitioning ice columns among multiple processes sometimes leads to convergence issues. Following this advice, PISM does not partition the domain in the vertical direction.
Run PISM as follows to try a one-dimensional (strip) domain decomposition:
mpiexec -n M pism -Nx 1 -Ny M ...
This forces PISM to split the domain into M parts in the y direction (and one in the x direction) instead of the default, roughly square, decomposition.
Please see Blatter stress balance solver: technical details for more.
Parameters
Below is the complete list of configuration parameters controlling this solver (prefix: stress_balance.blatter.):
- Glen_exponent (3) Glen exponent in ice flow law for the Blatter stress balance solver
- Mz (5) Number of vertical grid levels in the ice
- coarsening_factor (2) Coarsening factor in the vertical direction
- enhancement_factor (1) Flow enhancement factor for the Blatter stress balance flow law
- flow_law (gpbld) The flow law used by the Blatter-Pattyn stress balance model
- use_eta_transform (no) Use the \(\eta\) transform to improve the accuracy of the surface gradient approximation near grounded margins (see [58] for details).
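For example, several of these parameters can be set in one run using command-line options of the same names (a sketch; values are illustrative):

pism -stress_balance blatter \
-stress_balance.blatter.Mz 17 \
-stress_balance.blatter.coarsening_factor 2 \
-stress_balance.blatter.enhancement_factor 1.0 \
-stress_balance.blatter.use_eta_transform true \
[other options]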
Footnotes