Blatter’s model¶
Unlike the rest of PISM, the Blatter solver uses a geometry-following vertical grid (see Fig. 19) to approximate horizontal components of ice velocity. The number of vertical “levels” in this grid is controlled by stress_balance.blatter.Mz.
The non-linear system resulting from the discretization of the PDEs of Blatter’s stress balance model is much harder to solve than the SSA system ([56], [72]), and (at this point) experimentation with preconditioner choices seems inevitable. We use PETSc’s command-line options to control these choices.
Note
The Blatter solver uses the -bp_
command-line option prefix.
Run PISM like this
pismr -stress_balance blatter \
[other options] -help | grep "-bp_"
to see the complete list of PETSc options controlling this solver.
A multigrid (MG) preconditioner that uses semi-coarsening in the vertical direction, followed by further (horizontal) coarsening with algebraic multigrid methods, appears to be effective [72]. The option combination
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_mg_levels_ksp_type gmres \
-bp_mg_coarse_pc_type gamg
roughly corresponds to this approach (see Practical preconditioners choices for more).
Unlike [72], who used a purely algebraic approach, these options select a combination of geometric and algebraic multigrid preconditioners.
To use a multigrid preconditioner the user has to specify

- the number of MG levels \(N\) using -bp_pc_mg_levels N,
- the coarsening factor \(C\) by setting stress_balance.blatter.coarsening_factor, and
- the vertical grid size \(M_z\) (stress_balance.blatter.Mz).
The values of these parameters have to be compatible. Specifically, \(M_z\) has to have the form

\[M_z = A\cdot C^{N - 1} + 1 \qquad (6)\]

for some positive integer \(A\).
Note
PISM stops with an error message if (6) is not satisfied.
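For example, with the default coarsening factor \(C = 2\) and \(N = 4\) MG levels, (6) allows

\[M_z = A\cdot 2^{3} + 1 = 9, 17, 25, 33, \dots \qquad (A = 1, 2, 3, \dots),\]

so a vertical grid with, say, \(M_z = 13\) would be rejected for this combination of \(C\) and \(N\).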
To set up a multigrid preconditioner PISM needs to build a hierarchy of vertical grids with \(M_z\) points on the finest grid. Starting with this grid, PISM creates the next one by dividing the number of vertical spaces by the coarsening factor \(C\). The newly-created grid is then coarsened in the same way, and this process continues until the desired number \(N\) of grids (MG levels) is reached.
Overall, the number of points \(M_{z}^{k}\) in vertical grid number \(k\) of the hierarchy is

\[M_{z}^{k} = A\cdot C^{k - 1} + 1, \qquad k = 1, \dots, N,\]

where \(k = 1\) corresponds to the coarsest grid and \(k = N\) to the finest.
This process explains the compatibility condition (6): the number of spaces in all vertical grids in the hierarchy except for the coarsest one has to be divisible by \(C\).
Table 12. Possible sizes of vertical grids in a hierarchy for several values of the coarsening factor \(C\).

| Coarsening factor \(C\) | Possible sizes of vertical grids in a hierarchy |
|---|---|
| \(2\) | 2, 3, 5, 9, 17, 33, 65, 129, 257, 513, 1025, \(\dots\) |
| \(3\) | 2, 4, 10, 28, 82, 244, 730, \(\dots\) |
| \(4\) | 2, 5, 17, 65, 257, 1025, \(\dots\) |
| \(5\) | 2, 6, 26, 126, 626, 3126, \(\dots\) |
| \(6\) | 2, 7, 37, 217, 1297, \(\dots\) |
| \(7\) | 2, 8, 50, 344, 2402, \(\dots\) |
| \(8\) | 2, 9, 65, 513, 4097, \(\dots\) |
By default \(C = 2\), but aggressive coarsening (i.e. larger values of \(C\), up to around \(8\)) has been observed to work. As highlighted in Table 12, the same vertical grid size can sometimes be reached using more than one combination of the coarsening factor and the number of MG levels.

For example, we can set up a solver using \(65\) vertical levels and \(3\) MG levels with a coarsening factor of \(8\), or \(4\) MG levels and a factor of \(4\), or \(7\) MG levels and a coarsening factor of \(2\). In general, the computational cost of one MG preconditioner application increases with the number of MG levels, so the first hierarchy (\(2, 9, 65\), \(C = 8\)) may be the best choice. However, coarsening that is too aggressive may make the preconditioner less effective, requiring more Krylov iterations and increasing the overall computational cost. Again, one may have to experiment to find settings that work best in a particular setup.
The coarsest grid in a hierarchy should be as small as possible (corresponding to \(A = 1\) in (6)); two levels is the minimum achievable in the context of the finite element method used to discretize the system (this corresponds to a mesh that is just one element thick).
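As an illustration, the first hierarchy mentioned above (\(2, 9, 65\) with \(C = 8\)) could be requested as follows. This is only a sketch: it assumes PISM’s usual mechanism of setting configuration parameters via identically-named command-line options and reuses the multigrid options listed earlier.

pismr -stress_balance blatter \
-stress_balance.blatter.Mz 65 \
-stress_balance.blatter.coarsening_factor 8 \
-bp_pc_type mg \
-bp_pc_mg_levels 3 \
-bp_mg_levels_ksp_type gmres \
-bp_mg_coarse_pc_type gamg \
[other options]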
Surface gradient computation¶
Some synthetic geometry experiments with grounded margins show “checkerboard” artifacts in computed ice velocity near steep margins. A similar issue and an attempt to address it are described in [73].
This implementation takes a different approach: instead of using an “upwinded” finite difference approximation of the surface gradient, we allow using the \(\eta\) transformation described in Surface gradient method. Set stress_balance.blatter.use_eta_transform to enable it.
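For example (again assuming the command-line form of PISM’s configuration parameters; the parameter can equally be set in a configuration file):

pismr -stress_balance blatter \
-stress_balance.blatter.use_eta_transform \
[other options]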
Adaptive time stepping¶
PISM’s explicit-in-time mass continuity code is conditionally stable. When used with the SSA + SIA hybrid, the maximum allowed time step is computed using a combination of the CFL criterion [74] and the maximum diffusivity of the SIA flow [10]. This time step restriction does not disappear when the same mass continuity code is used with a stress balance model that does not explicitly compute “advective” and “diffusive” parts of the flow, so we need a work-around.
Note
Very little is known about the stability of explicit time-stepping methods for the mass continuity equation coupled to a “generic” stress balance model.
We don’t have a rigorous justification for the approach described below.
When this BP solver is coupled to PISM, the vertically-averaged ice velocity is used in place of the “advective” (“sliding”) velocity from the SSA. As a result, the CFL-based time step restriction is applied by existing PISM code.
However, the diffusivity-driven time step restriction is almost always more severe, so we need a replacement: the CFL condition alone does not appear to be sufficient for stability.
We compute an estimate of the “SIA-like” maximum diffusivity by observing that for the SIA the vertically-averaged ice flux \(Q\) satisfies

\[Q = -D\, \nabla s.\]

We solve this for the diffusivity \(D\):

\[D = \frac{|Q|}{|\nabla s|}\]

and use the maximum of this quantity to determine the maximum allowed time step using (4).
Note
Other models that support this stress balance and use an explicit-in-time geometry evolution method ([73], [75]) report that the CFL condition appears to be sufficient in practice.
Given the lack of a theory describing the maximum time step necessary for stability, it may make sense to experiment with increasing time_stepping.adaptive_ratio. Setting it to a very large value would completely disable the diffusivity-based time step restriction.
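For example (the value below is purely illustrative and assumes the command-line form of PISM’s configuration parameters):

pismr -stress_balance blatter \
-time_stepping.adaptive_ratio 100 \
[other options]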
Note
The “time step skipping mechanism” enabled using time_stepping.skip.enabled (see Understanding adaptive time-stepping) has a different effect when the Blatter stress balance model is used: the full 3D ice velocity is updated during every sub-step and only the energy balance and age models take the “long” time step.
Since the Blatter solver is likely to dominate the computational cost, setting time_stepping.skip.enabled to “true” is not likely to be beneficial.
Practical preconditioners choices¶
The option combination
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_mg_levels_ksp_type gmres \
-bp_mg_coarse_pc_type gamg
sets up the kind of multigrid preconditioner known to be effective, but it is not the only choice and most likely not the best one.
Our experiments suggest that
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_snes_ksp_ew \
-bp_snes_ksp_ew_version 3 \
-bp_mg_levels_ksp_type richardson \
-bp_mg_levels_pc_type sor \
-bp_mg_coarse_ksp_type gmres \
-bp_mg_coarse_pc_type hypre \
-bp_mg_coarse_pc_hypre_type boomeramg
may work better, but requires PETSc built with hypre.
Here -bp_snes_ksp_ew -bp_snes_ksp_ew_version 3
enables Luis Chacón’s variant of the
Eisenstat-Walker [76] method of adjusting linear solver tolerances to
avoid oversolving, and -bp_mg_coarse_pc_type hypre -bp_mg_coarse_pc_hypre_type
boomeramg
selects the BoomerAMG algebraic MG preconditioner from hypre for the coarse
MG level.
Note
The Eisenstat-Walker adjustment of linear solver tolerances saves time when a
low-accuracy estimate of the Newton step is sufficient but may lead to solver failures,
especially when the initial guess is of poor quality. In an attempt to reduce
computational costs while maintaining robustness, PISM disables -bp_snes_ksp_ew if
the initial guess is zero (beginning of a simulation) or if the solver fails with
-bp_snes_ksp_ew
.
Some simulations may benefit from using a direct solver on the coarse MG level. For example, the following would use MUMPS on the coarse grid:
-bp_pc_type mg \
-bp_pc_mg_levels N \
-bp_snes_ksp_ew \
-bp_snes_ksp_ew_version 3 \
-bp_mg_levels_ksp_type richardson \
-bp_mg_levels_pc_type sor \
-bp_mg_coarse_ksp_type preonly \
-bp_mg_coarse_pc_type lu
if PETSc is built with MUMPS.
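If PETSc was built with several direct solver packages, the package used for this factorization can be selected explicitly with PETSc’s solver-type option; the exact option name depends on the PETSc version, and the form below corresponds to recent releases:

-bp_mg_coarse_pc_factor_mat_solver_type mumps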
Note
Parallel direct solvers such as MUMPS benefit significantly from optimized BLAS and LAPACK libraries.
Please see section 3.5.3 of [1] for instructions. At the time of writing
--download-f2cblaslapack --download-blis
is recommended as a portable high-performance option. However, it makes sense to try other freely-available libraries (Intel MKL, OpenBLAS) as well.
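For reference, a PETSc configuration along the following lines would provide the BLAS/LAPACK option above together with hypre and MUMPS. This is only a sketch: the exact set of --download-* flags (and whether they are needed at all) depends on the system and the PETSc version.

./configure \
--download-f2cblaslapack --download-blis \
--download-hypre \
--download-mumps --download-scalapack \
[other configure options]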
Note, though, that the multigrid preconditioner, even if it is effective in terms of reducing the number of Krylov iterations, may not be the cheapest one [77]: there is a trade-off between the number of iterations and the cost of a single iteration. Other preconditioner options may be worth considering as well.
In some cases node ordering and the way the domain is split among processes in a parallel run may affect solver performance (see [56], [77], [72]). These references mention staggering the unknowns so that \(u\) and \(v\) components at the same node correspond to adjacent equations in the system and using contiguous ordering of unknowns in the same ice column. This allows the solver to capture vertical coupling locally using incomplete factorization.
In addition to this, [77] mention that a parallel domain decomposition that splits ice columns among multiple processes sometimes leads to convergence issues. Following this advice, PISM does not partition the domain in the \(z\) direction. Still, some of our experiments show that if the solver struggles, switching to a one-dimensional domain decomposition along the \(y\) direction may help (see Parallel domain distribution).
Run PISM as follows to give this a try:
mpiexec -n M pismr -Nx 1 -Ny M ...
This forces PISM to split the domain into \(M\) parts in the \(y\) direction instead of the default (approximately \(\sqrt{M}\) in both \(x\) and \(y\)).
Please see Blatter stress balance solver: technical details for more.
Parameters¶
Below is the complete list of configuration parameters controlling this solver (prefix: stress_balance.blatter.):

- Glen_exponent (3) Glen exponent in ice flow law for the Blatter stress balance solver
- Mz (5) Number of vertical grid levels in the ice
- coarsening_factor (2) Coarsening factor in the \(z\) direction
- enhancement_factor (1) Flow enhancement factor for the Blatter stress balance flow law
- flow_law (gpbld) The flow law used by the Blatter-Pattyn stress balance model
- use_eta_transform (no) Use the \(\eta\) transform to improve the accuracy of the surface gradient approximation near grounded margins (see [58] for details).