Coupling the Cryosphere to other Earth systems, part II
Ice sheets in the Community Climate System Model
William Lipscomb, Los Alamos National Laboratory
A brief introduction to CCSM
The Community Climate System Model (CCSM; http://www.ccsm.ucar.edu/) is one of three U.S. global climate models (GCMs) featured prominently in the assessment reports of the Intergovernmental Panel on Climate Change (IPCC). The others are the NASA GISS model and the NOAA GFDL model. (GISS is the Goddard Institute for Space Studies in New York City, and GFDL is the Geophysical Fluid Dynamics Laboratory in Princeton, N.J.) The GISS and GFDL models have been developed primarily at those institutions, but CCSM, as the name suggests, is a broad community effort. Although model development is centered at the National Center for Atmospheric Research (NCAR) in Boulder, there have been substantial contributions from scientists at several national laboratories and numerous universities, with support from the Department of Energy (DOE) and the National Science Foundation (NSF).
CCSM has a hub-and-spoke design. Recent model versions have had four physical components—atmosphere, land, ocean, and sea ice—that communicate through a coupler. The current CCSM components are the Community Atmosphere Model (CAM), the Community Land Model (CLM), the Parallel Ocean Program (POP), and the Community Ice Code (CICE). POP and CICE were developed primarily by scientists in the Climate, Ocean and Sea Ice Modeling (COSIM) group at Los Alamos National Laboratory (LANL), where I work.
I have recently added the Glimmer ice sheet model as a fifth physical component, but it is not yet part of the officially released code. I’ll say more about CCSM and ice sheets below.
(An historical aside: Why are ocean and ice models developed at a nuclear weapons lab in the high desert of New Mexico? The short answer is that many of the computational methods and hardware used in weapons simulations are useful for climate modeling. COSIM was founded when the Cold War was winding down and a LANL scientist named Bob Malone, who had been studying nuclear winter, decided to develop a parallel ocean model.)
In principle, each physical component lives on its own grid, though in practice the atmosphere and land components usually share one horizontal grid, and the ocean and sea ice components share another. CCSM is always run in parallel, on anywhere from ~10 to ~10,000 processors. The components can be run either concurrently (all at the same time, but on different sets of processors) or sequentially (one after the other, with each component using all the available processors).
Each model component sends to and receives from the coupler a number of 2D fields located at the component interfaces. These fields include upwelling and downwelling shortwave and longwave radiation, air temperature, specific humidity, pressure, wind speed, ocean velocity, sea surface temperature and salinity, sea ice concentration, surface albedo, etc. The coupler can map fields from one component domain to another (e.g., from the atmosphere grid to the ocean grid) as well as merge fields from more than one component (e.g., the area-weighted albedos from the ocean and sea ice models, which are combined into a single field for the benefit of the atmosphere). Also, the coupler may be responsible for deriving fluxes (e.g., sensible and latent heat fluxes) from other fields.
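The merging step can be sketched in a few lines. This is a toy illustration of the area-weighted albedo merge described above, not code from the CCSM coupler; the function name and the albedo values are mine.

```python
# Toy sketch of the coupler's merge step: combine sea-ice and open-ocean
# albedos into a single field for the atmosphere, weighted by the sea ice
# concentration (fractional area) in each cell. Values are illustrative.

def merge_albedo(ice_frac, ice_albedo, ocean_albedo):
    """Area-weighted merge of sea-ice and open-ocean albedo in one cell."""
    return ice_frac * ice_albedo + (1.0 - ice_frac) * ocean_albedo

# A cell that is 60% sea ice (albedo 0.65) and 40% open water (albedo 0.06):
alb = merge_albedo(0.6, 0.65, 0.06)
print(round(alb, 3))  # 0.414
```

The same pattern applies to any field merged from multiple surface components; the coupler just needs consistent fractional areas.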
CCSM is managed by a Scientific Steering Committee. There are twelve working groups that focus on different aspects of model development and application. The newest group is the Land Ice Working Group (LIWG), which is responsible for developing the CCSM ice sheet component and for using the model to predict sea-level rise caused by the loss of land ice. See here for details: http://www.ccsm.ucar.edu/working_groups/Land+Ice/
The CCSM community meets once a year, usually in June in Breckenridge, Colorado. In addition, each working group holds a winter meeting, usually in Boulder. You are cordially invited to attend the next meeting of the LIWG, which will be held in conjunction with the CCSM Polar Climate Working Group and with the SeaRISE sea-level assessment group. Contact one of the LIWG co-chairs, Jesse Johnson or Bill Lipscomb, for details.
CCSM, the IPCC, and sea-level rise
Development of CCSM and other GCMs is driven largely by the IPCC timetable. The fourth assessment report, AR4, was released in 2007, and the next report, AR5, is scheduled for 2013. The final form of CCSM version 4, which will be used for AR5 simulations, was determined just a few weeks ago. The control climate simulations are under way, and climate change runs will begin shortly. Most of these runs will be completed by sometime next year. Scientists then have a year or so to analyze and publish results in time to be considered for AR5.
The IPCC schedule is not always conducive to long-term model development. Also, there are concerns that the IPCC reports are too focused on obtaining consensus as opposed to exploring uncertainties. (See, e.g., Oppenheimer et al. 2007.) As a result, the reports may downplay the risks of potentially large and abrupt climate changes such as megadroughts, methane clathrate release, and sea-level rise. But at least for now, these assessments are the primary mechanism for communicating results to policymakers and the public.
Global sea level is rising at a rate of ~2.5 to 3 mm/yr (i.e., 25 to 30 cm/century), with significant contributions from ocean thermal expansion as well as melting of mountain glaciers and ice sheets. Recent observations have established that the Greenland and West Antarctic ice sheets are losing mass at an accelerating rate. IPCC AR4 projected a 21st century sea-level rise of 18 to 59 cm under a broad range of greenhouse emissions scenarios. Notoriously, these projections specifically excluded the possibility of “rapid dynamical changes in ice flow” because “understanding of these effects is too limited to assess their likelihood or provide a best estimate or upper bound for sea level rise.”
Since the release of AR4, there has been considerable pressure on the climate modeling centers and national funding agencies to do a better job at predicting ice-sheet retreat and sea-level rise. Until recently, most GCMs did not have dynamic ice sheets, because it was assumed that ice sheets would not contribute significantly to climate change or sea-level rise on time scales of decades to centuries. Now that this assumption has come under question, the modeling centers (or modelling centres, if you prefer) are scrambling to add ice sheet models. Both CCSM and the U.K. Hadley Centre model will be using Glimmer, with the close involvement of several of the summer school instructors.
Two major community efforts are under way to assess the future ice-sheet contribution to sea-level rise and try to narrow the range of uncertainty. The European Union is supporting a large multinational effort called Ice2sea (http://www.ice2sea.eu/). Bob Bindschadler of NASA is leading a broad but less formal effort called SeaRISE (Sea-level Response to Ice Sheet Evolution; http://websrv.cs.umt.edu/isis/index.php/SeaRISE_Assessment).
Ice sheets in CCSM
In 2005 I submitted a proposal to incorporate an ice sheet model in CCSM. After conversations with Tony Payne, Ian Rutt, and others, I decided to work with Glimmer, which had been designed specifically for coupling to climate models. I thought the coupling could be done in a year or so, which turned out to be a serious underestimate of the project complexities (or at least an overestimate of my ability to carry out a complex project). Four years later, there is still some work to do, but we finally have a version of CCSM that is more or less ready for climate simulations with dynamic ice sheets.
Ian Rutt and Magnus Hagdorn described the Glimmer code in detail during the Wednesday lectures. During the past two years, Jesse Johnson, Steve Price, and others have made great strides in developing a Community Ice Sheet Model (CISM) based on Glimmer. These developments—in particular, the implementation of a higher-order momentum balance—are described in Steve’s lecture notes and on the U. Montana ice sheet web site (http://websrv.cs.umt.edu/isis/index.php/Main_Page). The new and improved model, known as Glimmer-CISM, will be incorporated in CCSM this fall. Model development is continuing under the direction of a steering committee that includes Tony Payne, Ian Rutt, and Magnus Hagdorn in the U.K., along with Jesse Johnson, Steve Price, and me in the U.S.
Glimmer has been configured for coupled CCSM simulations with a dynamic Greenland ice sheet. Since there are some added difficulties in coupling a marine-based ice sheet to a GCM, we are not yet able to run coupled simulations with a dynamic Antarctic ice sheet. Ultimately, however, we plan to simulate both Greenland and Antarctica, as well as paleo ice sheets. My focus in the rest of this document will be not on Glimmer-CISM, but on changes made in CCSM to compute the surface mass balance of ice sheets.
Simulating the surface mass balance of ice sheets
We can think of Glimmer as having two main physical components:
- a surface mass balance (SMB) scheme, which computes accumulation and ablation at the upper ice/snow surface. Ablation is defined as the amount of water that runs off to the ocean. Not all the surface meltwater runs off; some of the melt percolates into the snow and refreezes.
- a dynamic component, which computes ice velocities and the resulting evolution of the ice-sheet geometry and temperature fields.
The dynamic component of Glimmer is called GLIDE. The surface mass balance calculations are part of GLINT, the Glimmer interface. GLINT receives the required fields from a climate model or meteorological data set, accumulates and averages the data over a specified time period, and downscales the data to the finer Glimmer grid. (The land and atmosphere models typically run at a grid resolution of ~100 km, whereas ice sheet models require a grid resolution of ~10 km.) The downscaled data is used to compute the surface mass balance, which is passed to GLIDE.
There are two broad classes of surface mass balance schemes:
- positive-degree-day (PDD) schemes, in which the melting is parameterized as a linear function of the number of degree-days above the freezing temperature. The proportionality factor is empirical and may vary in time and space. This factor is larger for bare ice than for snow, since ice has a lower albedo.
- surface energy-balance (SEB) schemes, in which the melting depends on the sum of the radiative, turbulent, and conductive fluxes reaching the surface. SEB schemes are more physically realistic than PDD schemes, but also are more expensive and complex.
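The PDD idea can be sketched in a few lines. This is a toy version for illustration only: the degree-day factors (3 and 8 mm w.e. per degree-day for snow and ice) are round numbers of roughly the right magnitude, not Glimmer's values, and a real scheme integrates over the annual cycle with a stochastic temperature term.

```python
# Toy positive-degree-day (PDD) melt scheme. Melt is proportional to the
# number of degree-days above freezing; snow melts first, then bare ice,
# with a larger factor for ice because of its lower albedo.

def pdd_melt(daily_temps_c, ddf_snow=3.0, ddf_ice=8.0, snow_depth_mm=100.0):
    """Return (snow_melt, ice_melt) in mm w.e. for a run of daily temps."""
    pdd = sum(t for t in daily_temps_c if t > 0.0)  # degree-days above 0 C
    snow_melt = min(snow_depth_mm, ddf_snow * pdd)
    pdd_left = pdd - snow_melt / ddf_snow           # degree-days left once snow is gone
    ice_melt = ddf_ice * pdd_left                   # bare ice melts faster
    return snow_melt, ice_melt

# Ten days at +5 C on 100 mm w.e. of snow: all the snow goes, then ice melts.
snow, ice = pdd_melt([5.0] * 10)
print(snow, round(ice, 1))  # 100.0 133.3
```

Note how sensitive the answer is to the empirical factors; this is exactly why PDD schemes are questionable for climate change runs, as discussed below.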
Glimmer has a PDD scheme based on that of Huybrechts et al. (1991) and others. (See the Glimmer documentation for details.) PDD schemes are not ideal for climate change studies, because empirical degree-day factors could change in a warming climate. Comparisons of PDD and energy-balance schemes (e.g., van de Wal 1996; Bougamont et al. 2007) suggest that PDD schemes may be overly sensitive to warming temperatures. In fact, Bougamont et al. found that a PDD scheme generates runoff rates nearly twice as large as those computed by an SEB scheme. If we want a credible climate change simulation for the Greenland ice sheet, we should use an energy-balance scheme.
Glimmer does not currently have an SEB scheme, but might include one in the future. If such a scheme were available, one approach to computing surface melting would be as follows: The incoming shortwave and longwave fluxes, temperature, and humidity would be passed from the CCSM atmosphere to GLINT via the coupler. These fields would be downscaled to the ice sheet grid, using an assumed lapse rate to interpolate temperatures to the appropriate elevations on the ice sheet grid. The surface mass balance would then be computed from the downscaled atmosphere fields combined with a detailed snow model.
This approach is sensible if one is working with meteorological data, e.g. from atmospheric reanalysis data. In CCSM, however, the preferred approach is to compute the surface mass balance for ice sheets in CLM, the CCSM land component, on the coarse-resolution land grid. To improve accuracy on the coarse grid, the mass balance is computed for ~10 elevation classes in each gridcell. The mass balance for each elevation class is accumulated and averaged over a coupling interval (typically ~1 day), then passed to GLINT via the coupler. GLINT accumulates and averages the mass balance over a longer interval (typically 1 year) and downscales it to the ice sheet grid. The ice sheet evolves dynamically, then returns the new ice geometry to CLM via the coupler.
Motivation for a surface mass balance scheme in CLM
There are several advantages to computing the surface mass balance in CLM as opposed to GLINT:
- It is much cheaper to compute the SMB in CLM for ~10 elevation classes than in GLINT/Glimmer. For example, suppose we are running CLM at a resolution of ~50 km and Glimmer at ~5 km. Greenland has dimensions of about 1000 x 2000 km. For CLM we would have 20 x 40 x 10 = 8,000 columns, whereas for Glimmer we would have 200 x 400 = 80,000 columns. Jeff Ridley of the Hadley Centre has found that running an SMB model on the ice sheet grid is as expensive as the rest of the GCM combined. Ghan et al. (2006) have shown that elevation classes give results comparable to those obtained at much greater expense on a finer grid.
- We take advantage of the fairly sophisticated snow physics parameterization already in CLM instead of implementing a separate scheme for Glimmer. When the CLM scheme is improved, the improvements are applied to ice sheets automatically.
- The atmosphere model can respond during runtime to ice-sheet surface changes. As shown by Pritchard et al. (2008), runtime albedo feedback from the ice sheet is critical for simulating ice-sheet retreat on paleoclimate time scales. Without this feedback, the atmosphere warms much less, and the retreat is delayed.
- Mass is conserved, in that the rate of surface ice growth or melting computed in CLM is equal to the rate seen by the dynamic ice sheet model.
- The improved surface mass balance is available in CLM for all glaciated grid cells (e.g., in the Alps, Rockies, Andes, and Himalayas), not just those which are part of ice sheets.
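The cost argument in the first bullet is simple arithmetic, spelled out here for a Greenland-sized domain (the grid spacings are the illustrative values quoted above):

```python
# Column counts for the SMB calculation: CLM at ~50 km with 10 elevation
# classes vs. an SMB scheme run directly on a ~5 km ice sheet grid.
nx_km, ny_km = 1000, 2000                       # approximate Greenland extent
clm_cols = (nx_km // 50) * (ny_km // 50) * 10   # 20 x 40 cells x 10 classes
ice_cols = (nx_km // 5) * (ny_km // 5)          # 200 x 400 cells
print(clm_cols, ice_cols)  # 8000 80000
```

A factor of ten fewer columns, with the further advantage that the CLM columns reuse physics that is already being computed for the land surface.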
Details of the new SMB scheme
As it happens, CLM has a hierarchical data structure that makes it relatively straightforward to model glaciated regions with multiple elevation classes. In the standard version of CLM, each gridcell is partitioned into one or more of five landunit types: vegetated, lake, wetland, urban, and glacier. Each landunit consists of a user-defined number of columns, and each column has its own vertical profile of temperature and water content.
I created a sixth landunit, denoted glacier_mec, where “mec” stands for “multiple elevation classes.” Glacier_mec landunits are similar to glacier landunits, except that each elevation class is represented by a separate column. By default there are 10 elevation classes in each glaciated gridcell. The upper elevation bounds (in meters) of these classes are 200, 400, 700, 1000, 1300, 1600, 2000, 2500, 3000, and 10000.
The atmospheric surface temperature and specific humidity are downscaled from the mean gridcell elevation to the column elevation using a user-specified lapse rate (typically 6 deg/km). At a given time, the lower-elevation columns can undergo surface melting while columns at higher elevations remain frozen. This results in a more accurate simulation of summer melting, which is a highly nonlinear function of air temperature. The precipitation rate and radiative fluxes are not currently downscaled, but they could be, if care were taken to preserve the cell-integrated values. At some point we would like to use a more sophisticated orographic downscaling scheme, but this would require significant recoding.
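The temperature downscaling just described is a one-line correction per column. A minimal sketch, using the typical 6 deg/km lapse rate from the text (elevations here are made up for illustration):

```python
# Lapse-rate downscaling: shift the gridcell-mean air temperature to each
# elevation class's column height. Columns above the gridcell mean get
# colder; columns below get warmer.

LAPSE_RATE = 6.0e-3  # deg C per meter (6 deg/km)

def downscale_temp(t_gridcell_c, z_gridcell_m, z_column_m, lapse=LAPSE_RATE):
    """Return the column temperature implied by a constant lapse rate."""
    return t_gridcell_c - lapse * (z_column_m - z_gridcell_m)

# Gridcell mean: +2 C at 1200 m. Columns at 300 m and 2800 m:
t_low = downscale_temp(2.0, 1200.0, 300.0)
t_high = downscale_temp(2.0, 1200.0, 2800.0)
print(round(t_low, 1), round(t_high, 1))  # 7.4 -7.6
```

This is the mechanism behind the point above: with a single gridcell-mean temperature of +2 C, nothing melts; with elevation classes, the low-lying column is well above freezing while the high column stays frozen.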
Standard CLM has an unrealistic treatment of accumulation and melting on ice sheets. The snow depth is limited to a prescribed depth of 1 m liquid water equivalent, with any additional snow assumed to run off instantaneously to the ocean. Snow melting is treated in a fairly realistic fashion, with meltwater percolating downward through snow layers as long as the snow is unsaturated. Once the underlying snow is saturated, any additional meltwater runs off. When glacier ice melts, however, the meltwater is assumed to remain in place until it refreezes. In warm parts of the ice sheet, the meltwater does not refreeze, but stays in place indefinitely.
In the modified CLM with glacier_mec columns, snow in excess of the prescribed maximum depth is converted to ice, contributing a positive surface mass balance to the ice sheet model. When ice melts, the meltwater is assumed to run off to the ocean, contributing a negative surface mass balance. The net SMB associated with ice formation (by conversion from snow) and melting/runoff is computed for each column, averaged over the coupling interval, and sent to the coupler. This quantity, denoted qice, is then passed to GLINT, along with the surface elevation topo in each column. GLINT downscales qice to the ice sheet grid, interpolating between the values in adjacent elevation classes. The units of qice are mm/s, or equivalently kg/m2/s. The downscaled quantities can be multiplied by a normalization factor to conserve mass exactly.
Note that the surface mass balance typically is defined as the total accumulation of ice and snow, minus the total ablation. The qice flux passed to GLINT is the mass balance for ice alone, not snow. We can think of CLM as owning the snow, whereas Glimmer owns the underlying ice. The snow depth can fluctuate between 0 and 1 m LWE without Glimmer needing to know about it.
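The downscaling-plus-normalization step can be sketched as follows. This is my illustration of the idea, not GLINT code: the class elevations and qice values are invented, and a real implementation would weight the conservation sum by cell areas.

```python
# Sketch of the GLINT downscaling: interpolate the per-elevation-class SMB
# (qice, mm/s) to a fine-grid point's elevation, then scale the result so
# the fine-grid mean matches the coarse-cell mean (the normalization factor
# mentioned above). Elevations and qice values are illustrative.

def interp_qice(elev_m, class_elevs, class_qice):
    """Linearly interpolate qice between adjacent elevation classes."""
    if elev_m <= class_elevs[0]:
        return class_qice[0]
    if elev_m >= class_elevs[-1]:
        return class_qice[-1]
    for i in range(len(class_elevs) - 1):
        lo, hi = class_elevs[i], class_elevs[i + 1]
        if lo <= elev_m <= hi:
            w = (elev_m - lo) / (hi - lo)
            return (1.0 - w) * class_qice[i] + w * class_qice[i + 1]

# Three classes (mid-point elevations, m) with SMB in mm/s (negative = ablation):
elevs = [300.0, 900.0, 1800.0]
qice = [-2.0e-4, -0.5e-4, 1.0e-4]

# Downscale to four fine-grid points, then normalize to conserve the
# coarse-cell mean exactly (area weights omitted for simplicity):
fine = [interp_qice(z, elevs, qice) for z in (200.0, 600.0, 1200.0, 2000.0)]
coarse_mean = sum(qice) / len(qice)
factor = coarse_mean / (sum(fine) / len(fine))
fine_conserving = [factor * q for q in fine]
```

Without the final scaling, interpolation alone would not guarantee that the ice sheet model gains or loses exactly the mass that CLM computed.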
In addition to qice and topo, the ground surface temperature tsfc is passed from CLM to GLINT via the coupler. This temperature serves as the upper boundary condition for Glimmer’s temperature calculation.
Given the SMB from the land model, Glimmer executes one or more dynamic time steps and returns the new ice sheet geometry to CLM via the coupler. The fields passed to the coupler are the ice sheet fractional area, surface elevation, and thickness, along with the conductive heat flux at the top surface and the runoff flux from basal melting and iceberg calving. GLINT upscales these fields from the ice sheet grid to the coarser land grid and bins them into elevation classes before sending them to the coupler.
The current coupling is one-way only. That is, CLM sends the SMB and surface temperature to GLINT but does not do anything with the fields that are returned. This is permissible for century-scale runs in which the geometry changes are modest. In order to do longer runs with large geometry changes, we need to enable two-way coupling. That work is in progress.
The purpose of the surface mass balance scheme is to provide Glimmer with a realistic upper surface boundary condition in past, present, and future climates. To the extent the present-day SMB is inaccurate (because of atmospheric biases, incomplete land model physics, or downscaling errors), the present-day ice sheet will have the wrong geometry, even if the ice sheet model is perfect. The greater the inaccuracy, the less confidence we will have in future projections.
So what is the quality of the results from the SMB scheme? Only recently have we had a working ice-sheet SMB scheme in CCSM4, so we are just beginning to find out. We will explore that question in the lab exercise.
Future ice sheet modeling
We have a simple working model of ice sheets in CCSM, but there is still a great deal of work to do. Here are a few of the projects under way:
- Glimmer-CISM was recently moved to a Subversion repository hosted by the BerliOS Open Source Mediator, as described by Magnus Hagdorn in his lecture. (See http://developer.berlios.de/projects/glimmer-cism/.) Model development is likely to proceed quickly during the next few years.
- The LANL ice sheet modeling group has received funding to develop a parallel version of Glimmer using state-of-the-art solver packages (e.g., PETSc and Trilinos) to efficiently solve the higher-order flow equations.
- DOE recently initiated a three-year project on computational advances in ice sheet modeling. Several groups have been funded to develop efficient, scalable solvers for higher-order approximations as well as the full-Stokes equations on unstructured and/or adaptive grids.
- We will attempt to couple WRF, a regional atmosphere model, to CLM and Glimmer in the CCSM framework. WRF can be run over Greenland or Antarctica with horizontal grid resolution of ~25 km, providing more realistic forcing fields than we can get from CAM at ~100 km.
- Several researchers, including a LANL group using Glimmer-CISM, are developing methods for coupling ice sheet models to ocean circulation models. The major challenges include (1) modifying the ocean upper boundary condition so that water can circulate beneath ice shelves, (2) changing the ocean topography as ice shelves advance and retreat, and (3) simulating realistic migration of the grounding line, which will require very fine grid resolution and/or improved numerical methods.
- A suite of climate change experiments using CCSM with dynamic ice sheets will be run during the next two years in preparation for IPCC AR5. Initially we will use the shallow-ice version of Glimmer, but we will transition to a higher-order code when an efficient parallel version is available.
These are just a few examples; many other projects are in the works. The next several years will be a time of rapid transition. Ice sheet models have long been less sophisticated than other climate model components, but Glimmer-CISM will likely be among the first climate model components to incorporate state-of-the-art meshing tools and scalable solvers. Atmosphere and ocean modelers may then look to ice sheet modelers for guidance instead of the other way around.
- Bougamont, M., Bamber, J.L., Ridley, J.K., Gladstone, R.M., Greuell, W., Hanna, E., Payne, A.J. and Rutt, I. 2007. Impact of model physics on estimating the surface mass balance of the Greenland ice sheet. Geophysical Research Letters 34: 10.1029/2007GL030700.
- Ghan, S.J., Shippert, T. and J. Fox, 2006. Physically based global downscaling: Regional evaluation. J. Climate 19: 429-445.
- Huybrechts, P., Letreguilly, A. and Reeh, N., 1991. The Greenland ice sheet and greenhouse warming. Palaeogeogr., Palaeoclimatol., Palaeoecol. (Global Planet. Change Sect.) 89: 399-412.
- Oppenheimer, M., O'Neill, B.C., Webster, M., and Agrawala, S., 2007. Climate change: The limits of consensus. Science 317 (5844): 1505.
- Pritchard, M. S., A. B. G. Bush, and S. J. Marshall, 2008. Neglecting ice-atmosphere interactions underestimates ice sheet melt in millennial-scale deglaciation simulations. Geophys. Res. Lett. 35, L01503, doi:10.1029/2007GL031738.
- van de Wal, R.S.W. 1996. Mass-balance modeling of the Greenland ice sheet: A comparison of an energy-balance and a degree-day model. Annals of Glaciology 23: 36-45.
Lab exercise: Running CCSM
Checkout, create case, configure, compile, and run the code
Log onto bluefire
Open a terminal window (Accessories -> Terminal)
> ssh -X -l logon_name bluefire.ucar.edu
When prompted for a Token Response, enter your Cryptocard password.
When asked for a terminal type, you can simply hit Return.
Hopefully you're now on bluefire. To see the contents of your home directory:
> ls -a
Check out the code
CCSM code is maintained on a Subversion repository. For CCSM as a whole and for each component, there is a main trunk along with many development branches. We will check out code from a branch with up-to-date versions of Glimmer and the land component, CLM, along with compatible versions of the other model components. This combination of CCSM components is identified by a unique branch tag.
To get the appropriate tagged version of CCSM from the Subversion repository:
The first time you do this, you'll need to enter your SVN password. If you do not have access to the CCSM repository, you can fill out the application here: http://www.ccsm.ucar.edu/working_groups/Software/secp/repo_access_form.shtml.
For more info on how to use Subversion, see http://subversion.tigris.org.
Create a case
> cd tag_name/scripts
Here, tag_name is glcec02_clm3_6_16.
The case we will run is created as follows:
> create_newcase -case case_name -res 1.9x2.5_gx1v5 -compset IG -mach bluefire -skip_rundb
(NOTE: in the "1.9x2.5_gx1v5" portion of the above, the "gx1v5" contains a number "one", not a small letter "L")
- case_name is something you make up
- res = resolution
- 1.9x2.5 = 1.9x2.5 degree grid for atmosphere, land
- 0.9x1.25 = 0.9x1.25 degree grid for atmosphere, land
- T31 = spectral T31 grid for atmosphere, land (good for debugging)
- gx1v5 = 1 degree grid, version 5 for ocean, sea ice
- gx3v5 = 3 degree grid, version 5 for ocean, sea ice (good for debugging)
- compset = set of active physical components
- A: all data models; no active physical components
- AG: active ice sheet
- I: active land
- IG: active land, ice sheet
- B: active land, atmosphere, ocean, sea ice
- BG: active land, atmosphere, ocean, sea ice, ice sheet
- mach = name of computer
- skip_rundb means that this is just a practice case that will not be documented in the run database.
For the IG case, you will have an active land component (CLM) and ice sheet component (Glimmer). The other components will be data models. The atmospheric data is from an NCEP reanalysis at T62 resolution (~1.9 deg).
For more information about how to create a case, see here:
> less README_quickstart
Configure the code
> cd case_name
Edit env_conf.xml and env_mach_pes.xml if appropriate. (We won't need to do this for our example.)
> configure -case
Tour the code:
> cd ~/tag_name/models
Explore from there:
- atm = atmosphere
- ocn = ocean
- lnd = land
- ice = sea ice
- glc = ice sheet (Glimmer-CISM)
- drv = driver (includes coupler modules)
- csm_share = shared code
- utils = utilities
Build the code
Look at your environment variables:
> env
TMPDIR should be set to /ptmp/$LOGNAME. This is scratch space where the code is built and output files are written.
> cd ~/tag_name/scripts/case_name
Edit env_build.xml if appropriate. (We won't need to do this.)
To build the code:
> case_name.bluefire.build
This will take a few minutes the first time. If you rebuild later after making minor changes, it will go much faster.
Hopefully the code will build. If not, you will get an error message pointing you to a build log file.
To see where the code has been built:
> cd /ptmp/logon_name/case_name
Run the code
> cd ~/tag_name/scripts/case_name
Edit env_run.xml if appropriate (e.g., STOP_N and STOP_OPTION to set length of run)
- By default, STOP_OPTION = ndays and STOP_N = 5. This means the code will run for 5 days--just long enough to make sure nothing is seriously broken.
Edit case_name.bluefire.run as appropriate
- -n Number of processors (do not change this)
- -q Run queue (premium is faster than regular but costs more)
- -W Run time requested (shorter => job will start sooner)
- -P Project code (should be 38481000 for the summer school)
For a 5-day run, we can initially set the run time to a small value (e.g. 0:05, or 5 minutes) so that the job runs quickly.
To submit the job:
> bsub < case_name.bluefire.run
To see whether the job is pending or running:
> bjobs
'No unfinished job found' means you're done.
If all goes well, the job will start and finish in a few minutes, and you will have some log files. First take a look at the poe.stdout file:
> less poe.stdout.6digits
The end of the file should say 'normal exit'.
Now let's check the log files:
> cd logs
There should be several files with the suffix gz, meaning that the files have been compressed, or zipped. Unzip the lnd.log file and take a look:
> gunzip lnd.log.timestamp.gz
> less lnd.log.timestamp
For an IG case, the coupler, land, atmosphere, and ice sheet components (cpl, lnd, atm, and glc, respectively) have log files with diagnostic output. The logfile with the ccsm prefix combines diagnostics from each component.
Modify the code
Now that we know the basics, let's try a 10-year simulation. First, move back to the main directory for your model instance,
> cd ~/tag_name/scripts/case_name
In env_run.xml, set STOP_OPTION = nyear and STOP_N = 10. This run will take a couple of hours to complete, so we should change the run time estimate in case_name.bluefire.run (flag "-W"), from 0:05 to 2:00.
The code you checked out from the repository has the standard CLM values for bare ice albedo, which are too high. You should replace these with more realistic values. Edit this file:
~/tag_name/models/lnd/clm/src/main/clm_varcon.F90
Look for these lines:
data (albice(i),i=1,numrad) /0.80_r8, 0.55_r8/
!! data (albice(i),i=1,numrad) /0.50_r8, 0.50_r8/
Comment out the first line and uncomment the second line.
Then return to your case directory and rebuild the code:
> cd ~/tag_name/scripts/case_name
> case_name.bluefire.build
Now we'll test the sensitivity of the ice sheet surface mass balance to changes in physical parameters and the input forcing. Each group will do its own run. When you're ready to do this, please let one of us know, and we'll assign an experiment written on the board. Here are some suggestions:
- Run with a different value of the bare ice albedo, albice. This variable is set in ~/tag_name/models/lnd/clm/src/main/clm_varcon.F90. Copy this file to ~/tag_name/scripts/case_name/SourceMods/src.clm. Edit the copy in the SourceMods/src.clm directory; it will automatically override the original when the code is built. Using the SourceMods directories is a good way to keep your changes separate from the base code.
- Run with a different value of the surface temperature lapse rate, lapse_glcmec. This variable is also set in clm_varcon.F90. Again, copy the file to ~/tag_name/scripts/case_name/SourceMods/src.clm and edit it there.
- Impose a uniform temperature perturbation. You can do this by modifying ~/tag_name/models/lnd/clm/src/biogeophys/DriverInitMod.F90, where the temperature is downscaled. Copy the file to ~/tag_name/scripts/case_name/SourceMods/src.clm and edit it there. Find this line of code:
tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) ! sfc temp for column
Change it to something like this:
tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) + 1.0_r8 ! sfc temp for column, plus one degree
You now have a crude version of a global warming simulation.
Once you've made your code changes in SourceMods, run the build script again:
> cd ~/tag_name/scripts/case_name
> case_name.bluefire.build
If you get an error message, then edit the module appropriately and try again. If the code builds, then you're ready to run:
> bsub < case_name.bluefire.run
We'll come back later to look at some results.
View the results
To see output from your run:
> cd /ptmp/logon_name/archive/case_name
> ls
> cd lnd
> ls
> cd hist
> ls
You should have a history file for each month of your run.
Let's say we're interested in the surface mass balance of glaciated gridcells from year 10 of the run, averaged over 12 months.
We can post-process the data using NCO, a suite of programs for useful manipulation of netCDF files. For details, see http://nco.sourceforge.net/.
To average all the history variables over 12 months, use the ncra command:
> ncra -n 12,2,1 infile.nc outfile.nc
The -n command tells NCO to average over files that have the same name as infile.nc, apart from a numerical file identifier.
- The '12' indicates that there are 12 files to average.
- The '2' says that the identifier has 2 digits (01, 02, ..., 12)
- The '1' says that the identifier changes with a stride of 1.
The outfile name is arbitrary. In our case, we can type:
> ncra -n 12,2,1 case_name.clm2.h0.0010-01.nc case_name.clm2.h0.0010-avg.nc
> ls
To view the contents of the new file:
> ncdump -h case_name.clm2.h0.0010-avg.nc | less
Hit the space bar to scroll through the output.
Note the following:
- The grid dimensions are 96 x 144.
- There are some time-independent variables (e.g., area, topo) with lower-case names.
- There are many time-dependent variables (including QICE, the surface mass balance) with names in all caps.
Now we can plot the data.
My favorite netCDF viewer is ferret, but ferret is not installed on bluefire.
Let's use ncview instead:
> ncview case_name.clm2.h0.0010-avg.nc
The ncview GUI should pop up. Click on the 2d vars button.
If the ncview GUI does not pop up, that's probably because your path isn't set up to find it. Try this instead:
> /contrib/bin/ncview case_name.clm2.h0.0010-avg.nc
As a shortcut, you can add an alias in your .cshrc file in your home directory.
alias ncview '/contrib/bin/ncview'
After you save the new version of .cshrc, you will need to type this:
> source .cshrc
Then the alias should work.
Unfortunately, there are so many variables that we can't get to QICE (one of the limitations of ncview).
Let's make a file that doesn't have so many variables:
> ncra -v QICE -n 12,2,1 case_name.clm2.h0.0010-01.nc case_name.clm2.h0.0010-QICE.nc
Here we have used the -v option to specify the variables to average over.
The resulting file has just one time-dependent variable QICE, a function of lat, lon and time:
> ncdump -h case_name.clm2.h0.0010-QICE.nc
Let's try ncview again:
> ncview case_name.clm2.h0.0010-QICE.nc
You should see a global plot of QICE on the global land grid.
To magnify the plot, left-click as many times as desired on the button that says M X3. To shrink the plot, right-click on this button.
We can see QICE for glaciated cells not only in Greenland and Antarctica, but also in the Himalayas, Canadian archipelago, Alaskan coastal range, and Patagonia (and New Zealand!).
The units of QICE are mm/s (or equivalently, kg/m2/s). If you prefer m/yr, you can change the units in the file using the ncflint command:
> ncflint -w 3.16e4,0 case_name.clm2.h0.0010-QICE.nc case_name.clm2.h0.0010-QICE.nc case_name.clm2.h0.0010-QICEmyr.nc
where the factor 3.16e4 converts from mm/s to m/yr.
(This syntax can be interpreted as follows. The form of the command is
> ncflint -w weight1,weight2 infile1.nc infile2.nc outfile.nc
with the result that variables in the output file have values outfile_var = weight1*infile1_var + weight2*infile2_var. If weight2 = 0, then infile2 is irrelevant and the effect is simply to multiply variables in infile1 by a constant. Perhaps there is a simpler way to do this. In ferret it is easy to multiply data by a constant without changing the netCDF file.)
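The weighted-sum arithmetic is easy to check with a quick awk sketch; the QICE value below is made up purely for illustration:

```shell
# ncflint computes out = w1*in1 + w2*in2; with w2 = 0 it simply
# scales in1. Check the mm/s -> m/yr scaling on a made-up value:
awk 'BEGIN { w1 = 3.16e4; w2 = 0.0       # weights passed to -w
             in1 = 1.0e-5; in2 = 999.0   # in2 is ignored when w2 = 0
             out = w1*in1 + w2*in2       # 1.0e-5 mm/s -> m/yr
             printf "%.3f\n", out }'     # prints 0.316
```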
You may want to look at other data fields in the monthly mean files and in the yearly average file.
As a final exercise, let's compute the surface mass balance integrated over the Greenland ice sheet. NCO has a command for this too:
> ncwa -N -v QICE -a lat,lon -B 'gris_mask > 0.5' -w area case_name.clm2.h0.0010-avg.nc outfile.nc
- -N says to compute the integrated total as opposed to the average.
- -v tells which variable(s) to sum and/or average over
- -a tells which dimensions to sum over
- -B says to sum only over cells that meet a masking condition (in our case, Greenland cells have gris_mask = 1.0, and all other cells have gris_mask = 0.0)
- -w says to weight by the variable that follows (grid cell area in this case)
Let's look at the output:
> ncdump outfile.nc
We're interested in the area-integrated value of QICE. Note that area has units of km2, whereas QICE has units of mm/s. To convert to km3/yr, multiply the result by 3.16e7 (the number of seconds in a year) and divide by 1e6 (the number of mm in a km). Recall that 1 km3 (liquid water equivalent) of ice weighs 1 Gigaton.
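The unit conversion can be sketched as below; the integrated QICE value of 10.0 is a hypothetical number chosen only to show the arithmetic:

```shell
# Convert an area-integrated QICE from (mm/s * km2) to km3/yr:
# multiply by ~3.16e7 s/yr, divide by 1e6 mm/km.
awk 'BEGIN { q = 10.0                  # mm/s * km2 (hypothetical)
             v = q * 3.16e7 / 1.0e6    # km3/yr
             printf "%.0f km3/yr\n", v }'   # prints 316 km3/yr
```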
For the present-day (or at least preindustrial) climate of Greenland, the net surface mass balance is ~300 to 400 km3/yr. How do your results compare?
- Control experiment (albice = 0.5, lapse = 0.006): QICE = 320 km3/yr
- Albedo = 0.40: QICE = 276 km3/yr
- Albedo = 0.60: QICE = 362 km3/yr