Ice Sheet Evolution Experiments
Now that we've (1) convinced ourselves that the higher-order model is "working" (at least as well as other higher-order models out there) and (2) added some code so that we can evolve the ice sheet geometry, we can do some simple experiments looking at the combination of the two. By "simple", I mean coarse-resolution, idealized domains with idealized boundary conditions. These may actually be somewhat nonsensical with respect to real ice sheets (and in that sense, the title of this page is somewhat misleading), but they are still useful exercises for testing and comparing algorithms and for pointing out some other important issues.
The two experiments we will attempt are:
- Allow a parabolic-shaped mound of ice to spread out under its own weight. The nonsensical part is that we will assume (i) zero surface mass balance (no accumulation or ablation) and (ii) a fixed lateral boundary. That is, the ice sheet margin won't be allowed to move past its original position (think of an ice sheet surrounded by a very tall (and very strong) fence - the ice sheet will eventually take on the shape of a disk with uniform thickness).
- Evolve a confined ice shelf of uniform thickness and temperature. The nonsensical part here is that, again, we assume no accumulation or ablation. Thus, the only thing that can happen is that ice flows out of the shelf front. Eventually, the shelf should thin to zero thickness (but how and when that happens will vary depending on a number of things).
Note that we will be using the exact same dynamics solver to obtain solutions for these very different end-member flow regimes: flow dominated by internal deformation with no sliding, and flow dominated by horizontal stretching with no internal deformation. Arguably, this is a significant advance over having to stitch together different models with different physics in different parts of the ice sheet. Transitions between the two flow regimes take place solely as a function of where, and how much, basal sliding takes place.
Experiment 1: evolve the blob
Change your current directory to tests/ho-other/ (i.e. higher-order other). There will be a model configuration file there called blob.config and a related python script blob.py (both courtesy of Tim Bocek). As in previous experiments, we first need to build the netCDF input files
python blob.py blob.config
Then, we can run the model with the input files with
echo blob.config | ./simple_glide
if simple_glide is in your current directory (if simple_glide is in your path, then the "./" out front is not needed). With the default values in blob.config, the code will evolve a simple parabolic-shaped ice sheet for 5000 yrs at a time step of 50 yrs using the first-order upwinding scheme we just built. This option is set in the configuration file
[options]
flow_law = 2
evolution = 4
temperature = 1
... etc ...
As the model runs, you'll see ~100 packets of iterations fly by on the screen, but at the low resolution here this should not take too long. The netCDF file blob.out.nc will record the output and you can use NCVIEW and the "movie" controls at the top to view the time series of any variable while the model is running or at the end of the run. Once the run has finished, rename your output file
mv blob.out.nc blob.out.fo.nc
where the "fo" stands for "first-order advection scheme". Now edit the [options] section of blob.config and change evolution = 4 to evolution = 3 . This will tell the code to use a more sophisticated advection scheme - incremental remapping - to evolve the thickness. Repeat the steps above, executing the python script and simple_glide. When you've got another netCDF output file at the end of the run, rename it
mv blob.out.nc blob.out.remap.nc
Because NCVIEW is not necessarily ideal for comparing the output from two similar files with slightly different input parameters, we'll use a simple matlab script to compare the output at the same time slices from the two different evolution schemes. Make sure to change your matlab directory to your tests/ho-other/ directory before running the script (or copy the relevant netCDF files into your matlab directory).
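If you prefer Python to matlab, the heart of the comparison can be sketched with numpy as below. The file names follow the renaming steps above, but the use of the netCDF4 package (and the assumption that both runs were saved on identical grids and time axes) is mine, not part of the tutorial's script:

```python
import numpy as np

def max_profile_difference(thk_a, thk_b):
    """Maximum absolute thickness difference between two runs,
    reported per saved time slice. Inputs are (time, y, x) arrays."""
    if thk_a.shape != thk_b.shape:
        raise ValueError("runs were saved on different grids or time axes")
    return np.max(np.abs(thk_a - thk_b), axis=(1, 2))

# Reading the two output files would look something like this
# (requires the netCDF4 package; the variable name 'thk' follows the tutorial):
#
#   from netCDF4 import Dataset
#   thk_fo = Dataset("blob.out.fo.nc").variables["thk"][:]
#   thk_remap = Dataset("blob.out.remap.nc").variables["thk"][:]
#   print(max_profile_difference(thk_fo, thk_remap))
```

A small (or zero) difference at every time slice would tell you the two advection schemes agree at this resolution.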
- How does the output for the two evolution schemes compare? Are the elevation profiles produced from the different schemes at the same time slices significantly different? (the answer actually surprised me)
- Use NCVIEW and one of the netCDF files to look at the initial ice sheet profile (click on thk variable to get a map view, then click at the ice sheet center to get a profile, and choose X1 or Y1 from the lower-left corner under X-axis). Notice that it is very "rounded". Now advance the time step forward to ~500 years (under Current near the bottom of the NCVIEW panel, or using the movie controls at the top) and get another profile. How has the profile changed, particularly near the ice divide (the center)? You should notice that it is now very "pointed", rather than rounded. Why is this? Is it "real" or is there something wrong with the model? The figures below show this in more detail, along with a similar set of profiles for an ice sheet with a Newtonian (n=1) viscosity. Click HERE for the answer.
Figure 1: Close up of the initial elevation profile near the ice divide (red) and after some years of evolution (white, blue). The left and middle panels are for ice with a power law exponent of 3 (middle panel has 2x the grid resolution of the left panel - the feature is evidently not related to grid resolution). The right panel is for a fluid with a flow law exponent of 1 (i.e. a Newtonian fluid).
Experiment 2: evolve the shelf
Change your current directory to tests/shelf/. There will be a model configuration file there called confined-shelf-ss.config and a related python script confined-shelf-ss.py (again, both courtesy of Tim Bocek). The default setup for this configuration file will give you a diagnostic (i.e. instantaneous) solution for a setup that is very close to that for the unpublished EISMINT ice shelf experiments 3 and 4, which you explored a bit earlier in one of the COMSOL exercises. To get that solution, build the necessary netCDF input files,
python confined-shelf-ss.py confined-shelf-ss.config
and then run the model
echo confined-shelf-ss.config | ./simple_glide
(again, if simple_glide is in your current directory, use the above or, if it is in your path already, omit the "./"). Examine the converged solution using NCVIEW. It should look very similar to the first set of figures shown here. Note that, for now, this is a very coarse resolution solution both in the vertical and horizontal (it will run much faster that way, and the output under grid refinement is more-or-less the same).
Now we'll march this initial condition forward in time by 250 yrs using our first-order advection scheme. Edit confined-shelf-ss.config so that the [time] and [options] sections match the settings below
[time]
niso = 1.
tend = 200250.
ntem = 1.
tstart = 200000.
dt = 10.
nvel = 1.
[options]
flow_law = 2
evolution = 4
temperature = 1
...
diagnostic_run = 0
basal_water = 0
...
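A quick sanity check on those [time] settings is pure arithmetic (the values below are copied from the configuration section above):

```python
# Values from the [time] section of confined-shelf-ss.config
tstart, tend, dt = 200000.0, 200250.0, 10.0

run_length = tend - tstart   # years of evolution
nsteps = run_length / dt     # number of time steps the model will take

print(run_length, nsteps)    # 250 years of evolution in 25 steps
```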
Rebuild the input netCDF files and run the model as before. You can watch the shelf evolution in "real time" (as the fields get saved to the netCDF file) by plotting the thickness field with NCVIEW and using the movie controls at the top. Once the model has stopped running and you have a netCDF file for the whole run, use NCVIEW to make a map of thk (thickness field), then click somewhere near the shelf front (bottom of the figure). You should get a figure that looks something like that in Figure 2 below, which is the history of the thickness over time at that point. You can see what looks like an exponential decay, suggesting that, even though there is no accumulation, and even though the shelf thinned very rapidly at first, it will take some time for the shelf to thin to ~0 under these conditions.
Figure 2: History of thickness over time near the ice shelf front.
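If the thinning really is close to exponential, H(t) ≈ H0·exp(-t/τ), you can estimate the e-folding time τ by fitting a line to log(H) of the thickness history you pulled out of the netCDF file. A sketch, using a synthetic series as a stand-in (the 500 m initial thickness and 100 yr e-folding time here are made up for illustration):

```python
import numpy as np

def efolding_time(t, h):
    """Estimate the e-folding time tau of an approximately exponential
    decay h(t) ~ h0*exp(-t/tau) via a least-squares fit to log(h)."""
    slope, _ = np.polyfit(t, np.log(h), 1)
    return -1.0 / slope

# Synthetic stand-in for a thickness history near the shelf front:
# 500 m of ice decaying with a (made-up) 100 yr e-folding time.
t = np.linspace(0.0, 250.0, 26)
h = 500.0 * np.exp(-t / 100.0)

tau = efolding_time(t, h)
print(tau)  # recovers ~100 yr for this synthetic series
```

Applied to the real model output, this gives a rough answer to "how long until the shelf is effectively gone" without running the model any longer.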
Let's do the same exercise but change some of the input parameters. Specifically, let's make the entire shelf warmer and softer. Edit the confined-shelf-ss.py script and change the line

#Set the flow law for this experiment in the configuration file and write the file back out
parser.set("parameters", "default_flwa", "5.7e-18")

to

#Set the flow law for this experiment in the configuration file and write the file back out
parser.set("parameters", "default_flwa", "5.7e-17")
We've just made the ice an order of magnitude softer, so it should flow faster, and our shelf thickness should decay much faster. Since we've changed some input parameters, don't forget to rebuild the netCDF input files before re-running the model. Ok, so now we've re-run the 250 years of evolution with softer, warmer ice. What happened? Some funny warnings flashed by that might worry us a bit. When we plot the thickness history over time again, we see something like Figure 3 below.
Figure 3: History of thickness over time near the ice shelf front (softer shelf case).
Can that possibly be right? We lost the whole front of the shelf in just 10 years? Is it Larsen B all over again, or did we forget something? Usually when your model gives you very exciting results that you did not necessarily expect, you should suspect the latter. In this case, we made the ice an order of magnitude softer and the ice went about an order of magnitude faster (it won't always be one to one like that), but we didn't change our grid spacing or our time step, and so violated the CFL (Courant-Friedrichs-Lewy) condition in a big way. Interestingly, the model still ran to completion, which tells us something else useful (or scary): the model won't always crash when we do something dumb, so we need to be careful and/or have other ways of checking our solutions.
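The stability limit itself is simple arithmetic: the fastest ice in the domain must not cross more than a (conservative) fraction of a grid cell in one time step. A sketch with made-up but representative numbers (the model's actual grid spacing and velocities will differ):

```python
def max_stable_dt(umax, vmax, dx, dy, safety=0.5):
    """Largest advective time step (yr) satisfying a conservative CFL
    condition: safety * min(dx/|u|max, dy/|v|max)."""
    return safety * min(dx / umax, dy / vmax)

# Made-up but representative shelf numbers: 5 km grid spacing, ice
# moving ~500 m/yr before softening and ~5000 m/yr after.
print(max_stable_dt(umax=500.0, vmax=500.0, dx=5000.0, dy=5000.0))    # 5.0 yr
print(max_stable_dt(umax=5000.0, vmax=5000.0, dx=5000.0, dy=5000.0))  # 0.5 yr
```

With numbers like these, a 10 yr time step is already marginal for the stiff shelf and hopeless once the ice speeds up by a factor of ten.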
If we go back and reduce the time step in confined-shelf-ss.config by an order of magnitude as well
[time] ... dt = 1. ...
and run the experiment again, we get the results shown in Figure 4 - more drastic than in Figure 2 but much more sensible than Figure 3 (NOTE that you may not want to actually re-run this experiment yourself, unless you have extra time. With the smaller time step, it will take much longer to run out to 250 years).
Figure 4: History of thickness over time near the ice shelf front (softer shelf case w/o CFL violation).
It would be nice to avoid this kind of thing happening again, and we could fairly easily add some lines of code to our upwind scheme to do a CFL check. Adding something like
! conservative CFL check
if( ( maxval( abs(ubar) )*dt > 0.5d0*dew ) .or. ( maxval( abs(vbar) )*dt > 0.5d0*dns ) )then
   print *,' '
   print *,'!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
   print *,'! Advective CFL violation in 1st-order upwind mass advection scheme !'
   print *,'!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!'
   stop   ! probably the incorrect way to exit - should use calls to 'glide_stop' instead?
end if
just after the calculation of the variables ubar and vbar in fo_upwind_advect.F90 would be a good start. You can add this, rebuild the model with make, then try the experiment above with the 10 year time step again to see if you catch the CFL error.
- For one of the evolved netCDF output files we produced above, go back and use NCVIEW to examine the ice thickness field for the last time step (250 yrs). On the plotting options panel, next to the buttons for Axes and Range, change Bi-lin to Repl. Now you are looking at the field with no interpolation. Do you notice anything funny along the domain edges? Why is the thickness there still 500 meters when it has thinned so drastically elsewhere?
- Consider the computational burden that results from the CFL condition. If we're working with a 3d model and we want to crank up the horizontal resolution by a factor of 2, we will need to halve the time step as well. For the same problem, we've increased our number of grid cells by a factor of 4 (a factor of 2 for each horizontal dimension) but we've also increased the number of time steps (for the same run time) by a factor of 2. Now consider cranking up your resolution by a factor of 10. That extra resolution really starts to cost (number of horiz. grid cells increases by 100x, and the number of time steps by 10x). It becomes obvious why efficient solvers and massively parallel architectures may be necessary before high-resolution model runs at the whole-ice sheet scale are practical.
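The back-of-the-envelope scaling in the last bullet can be written down directly: refining the horizontal grid by a factor r multiplies the cell count by r² and, through the CFL condition, the number of time steps by r, so the total work grows like r³ (ignoring any extra cost per solve at higher resolution):

```python
def cost_factor(r):
    """Relative cost of refining the horizontal grid by a factor r:
    r**2 more cells times r more time steps (from the CFL condition)."""
    return r**2 * r

print(cost_factor(2))   # 8x the work for 2x the resolution
print(cost_factor(10))  # 1000x the work for 10x the resolution
```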