Cryptocard holders (2009-08-15T16:37:18Z)<p>Hoffman: added myself</p>
<hr />
<div>Daniel Seneca Lindsey<br />
<br />
*297 Canyon Acres dr., Laguna Beach CA 92651, dlindsey@uci.edu<br />
<br />
<br />
Jesse Johnson <br />
<br />
*417 Social Science Building, 32 Ave Missoula MT, 59812, jesse.v.johnson@cs.umt.edu<br />
<br />
<br />
Florence Colleoni<br />
<br />
*Centro Euro-Mediterraneo per i Cambiamenti Climatici, Via Aldo Moro 44, 40127 Bologna, Italy, flocolleoni@gmail.com<br />
<br />
<br />
Kristin Poinar<br />
* University of Washington Dept. of Earth and Space Sciences, Box 351310, 4000 15th Ave. NE, Seattle, WA 98125. kpoinar@u.washington.edu<br />
<br />
<br />
Gethin Williams<br />
<br />
* School of Geographical Sciences, University of Bristol, University Road, Bristol BS8 1SS. United Kingdom. gethin.williams@bristol.ac.uk<br />
<br />
<br />
Saffia Hossainzadeh<br />
* University of California, Santa Cruz, Dept. of Earth and Planetary Sciences, 1156 High St., Santa Cruz, CA 95064, hoss@uchicago.edu<br />
<br />
<br />
Adam Campbell<br />
* University of Washington Dept. of Earth and Space Sciences, Box 351310, 4000 15th Ave. NE, Seattle, WA 98125. campbead@u.washington.edu<br />
<br />
<br />
Brian Anderson<br />
* Antarctic Research Centre, Victoria University of Wellington, PO Box 600, Wellington, New Zealand. brian.anderson@vuw.ac.nz<br />
<br />
<br />
Stefano Normani<br />
* Department of Civil and Environmental Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada, N2L-3G1. sdnorman@uwaterloo.ca<br />
<br />
<br />
Ian Rutt<br />
* School of the Environment and Society, Swansea University, Singleton Park, Swansea, UK, SA2 8PP, i.c.rutt@swansea.ac.uk<br />
<br />
Todd K. Dupont<br />
* University of California, Department of Earth System Science, 3218 Croul Hall, MC 3100, Irvine, CA 92697. tdupont@uci.edu<br />
<br />
<br />
Charles Jackson<br />
* Institute for Geophysics, Univ. Texas, 10100 Burnet Rd. (R2200), Austin, Texas 78758-4445, charles@ig.utexas.edu<br />
<br />
Matthew J. Hoffman<br />
* Department of Geology, Portland State University, P.O. Box 751, Portland OR 97207-0751. hoffman@pdx.edu</div>Coupling the Cryosphere to other Earth systems, part II (2009-08-14T22:49:55Z)<p>Hoffman: /* Simulation results */</p>
<hr />
<div>==Back to [[Summer Modeling School]]==<br />
<br />
<br />
<br />
<br />
==Ice sheets in the Community Climate System Model==<br />
<br />
===A brief introduction to CCSM===<br />
<br />
<br />
[[Image:ccsm.jpg|thumb|right|300 px|[[Media:Lipscomb_Talk.pdf]]<br>Ice sheets in the Community Climate System Model]]<br />
<br />
The Community Climate System Model (CCSM; http://www.ccsm.ucar.edu/) is one of three U.S. global climate models (GCMs) featured prominently in the assessment reports of the Intergovernmental Panel on Climate Change (IPCC). The others are the NASA GISS model and the NOAA GFDL model. (GISS is the Goddard Institute for Space Studies in New York City, and GFDL is the Geophysical Fluid Dynamics Laboratory in Princeton, N.J.) The GISS and GFDL models have been developed primarily at those institutions, but CCSM, as the name suggests, is a broad community effort. Although model development is centered at the National Center for Atmospheric Research (NCAR) in Boulder, there have been substantial contributions from scientists at several national laboratories and numerous universities, with support from the Department of Energy (DOE) and the National Science Foundation (NSF). <br />
<br />
CCSM has a hub-and-spoke design. Recent model versions have had four physical components—atmosphere, land, ocean, and sea ice—that communicate through a coupler. The current CCSM components are the Community Atmosphere Model (CAM), the Community Land Model (CLM), the Parallel Ocean Program (POP), and the Community Ice Code (CICE). POP and CICE were developed primarily by scientists in the Climate, Ocean and Sea Ice Modeling (COSIM) group at Los Alamos National Laboratory (LANL), where I work. <br />
<br />
I have recently added the Glimmer ice sheet model as a fifth physical component, but it is not yet part of the officially released code. I’ll say more about CCSM and ice sheets below.<br />
<br />
(An historical aside: Why are ocean and ice models developed at a nuclear weapons lab in the high desert of New Mexico? The short answer is that many of the computational methods and hardware used in weapons simulations are useful for climate modeling. COSIM was founded when the Cold War was winding down and a LANL scientist named Bob Malone, who had been studying nuclear winter, decided to develop a parallel ocean model.)<br />
<br />
In principle, each physical component lives on its own grid, though in practice the atmosphere and land components usually share one horizontal grid, and the ocean and sea ice components share another. CCSM is always run in parallel, on anywhere from ~10 to ~10,000 processors. The components can be run either concurrently (all at the same time, but on different sets of processors) or sequentially (one after the other, with each component using all the available processors). <br />
<br />
Each model component sends to and receives from the coupler a number of 2D fields located at the component interfaces. These fields include upwelling and downwelling shortwave and longwave radiation, air temperature, specific humidity, pressure, wind speed, ocean velocity, sea surface temperature and salinity, sea ice concentration, surface albedo, etc. The coupler can map fields from one component domain to another (e.g., from the atmosphere grid to the ocean grid) as well as merge fields from more than one component (e.g., the area-weighted albedos from the ocean and sea ice models, which are combined into a single field for the benefit of the atmosphere). Also, the coupler may be responsible for deriving fluxes (e.g., sensible and latent heat fluxes) from other fields.<br />
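As a concrete illustration of the merging step, here is a minimal sketch of combining area-weighted ocean and sea-ice albedos into a single field for the atmosphere. This is not CCSM coupler code; the function and field names are hypothetical.<br />

```python
# Sketch (not CCSM code): how a coupler might merge area-weighted
# albedos from the ocean and sea-ice components into one field for
# the atmosphere. Names are illustrative only.

def merge_albedo(albedo_ocn, albedo_ice, ice_frac):
    """Combine open-ocean and sea-ice albedos in each grid cell,
    weighting by the sea-ice area fraction (0 <= ice_frac <= 1)."""
    return [a_i * f + a_o * (1.0 - f)
            for a_o, a_i, f in zip(albedo_ocn, albedo_ice, ice_frac)]

# Three cells: ice-free, half ice-covered, fully ice-covered.
merged = merge_albedo([0.06, 0.06, 0.06], [0.65, 0.65, 0.65],
                      [0.0, 0.5, 1.0])
```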
<br />
CCSM is managed by a Scientific Steering Committee. There are twelve working groups that focus on different aspects of model development and application. The newest group is the Land Ice Working Group (LIWG), which is responsible for developing the CCSM ice sheet component and for using the model to predict sea-level rise caused by the loss of land ice. See here for details: http://www.ccsm.ucar.edu/working_groups/Land+Ice/<br />
<br />
The CCSM community meets once a year, usually in June in Breckenridge, Colorado. In addition, each working group holds a winter meeting, usually in Boulder. You are cordially invited to attend the next meeting of the LIWG, which will be held in conjunction with the CCSM Polar Climate Working Group and with the SeaRISE sea-level assessment group. Contact one of the LIWG co-chairs, Jesse Johnson or Bill Lipscomb, for details.<br />
<br />
===CCSM, the IPCC, and sea-level rise===<br />
<br />
Development of CCSM and other GCMs is driven largely by the IPCC timetable. The fourth assessment report, AR4, was released in 2007, and the next report, AR5, is scheduled for 2013. The final form of CCSM version 4, which will be used for AR5 simulations, was determined just a few weeks ago. The control climate simulations are under way, and climate change runs will begin shortly. Most of these runs will be completed by sometime next year. Scientists then have a year or so to analyze and publish results in time to be considered for AR5. <br />
<br />
The IPCC schedule is not always conducive to long-term model development. Also, there are concerns that the IPCC reports are too focused on obtaining consensus as opposed to exploring uncertainties. (See, e.g., Oppenheimer et al. 2007.) As a result, the reports may downplay the risks of potentially large and abrupt climate changes such as megadroughts, methane clathrate release, and sea-level rise. But at least for now, these assessments are the primary mechanism for communicating results to policymakers and the public. <br />
<br />
Global sea level is rising at a rate of ~2.5 to 3 mm/yr (i.e., 25 to 30 cm/century), with significant contributions from ocean thermal expansion as well as melting of mountain glaciers and ice sheets. Recent observations have established that the Greenland and West Antarctic ice sheets are losing mass at an accelerating rate. IPCC AR4 projected a 21st century sea-level rise of 18 to 59 cm under a broad range of greenhouse emissions scenarios. Notoriously, these projections specifically excluded the possibility of “rapid dynamical changes in ice flow” because “understanding of these effects is too limited to assess their likelihood or provide a best estimate or upper bound for sea level rise.”<br />
<br />
Since the release of AR4, there has been considerable pressure on the climate modeling centers and national funding agencies to do a better job at predicting ice-sheet retreat and sea-level rise. Until recently, most GCMs did not have dynamic ice sheets, because it was assumed that ice sheets would not contribute significantly to climate change or sea-level rise on time scales of decades to centuries. Now that this assumption has come under question, the modeling centers (or modelling centres, if you prefer) are scrambling to add ice sheet models. Both CCSM and the U.K. Hadley Centre model will be using Glimmer, with the close involvement of several of the summer school instructors. <br />
<br />
Two major community efforts are under way to assess the future ice-sheet contribution to sea-level rise and try to narrow the range of uncertainty. The European Union is supporting a large multinational effort called Ice2sea (http://www.ice2sea.eu/). Bob Bindschadler of NASA is leading a broad but less formal effort called SeaRISE (Sea-level Response to Ice Sheet Evolution; http://websrv.cs.umt.edu/isis/index.php/SeaRISE_Assessment).<br />
<br />
===Ice sheets in CCSM===<br />
<br />
In 2005 I submitted a proposal to incorporate an ice sheet model in CCSM. After conversations with Tony Payne, Ian Rutt, and others, I decided to work with Glimmer, which had been designed specifically for coupling to climate models. I thought the coupling could be done in a year or so, which turned out to be a serious underestimate of the project complexities (or at least an overestimate of my ability to carry out a complex project). Four years later, there is still some work to do, but we finally have a version of CCSM that is more or less ready for climate simulations with dynamic ice sheets.<br />
<br />
Ian Rutt and Magnus Hagdorn described the Glimmer code in detail during the Wednesday lectures. During the past two years, Jesse Johnson, Steve Price, and others have made great strides in developing a Community Ice Sheet Model (CISM) based on Glimmer. These developments—in particular, the implementation of a higher-order momentum balance—are described in Steve’s lecture notes and on the U. Montana ice sheet web site (http://websrv.cs.umt.edu/isis/index.php/Main_Page). The new and improved model, known as Glimmer-CISM, will be incorporated in CCSM this fall. Model development is continuing under the direction of a steering committee that includes Tony Payne, Ian Rutt, and Magnus Hagdorn in the U.K., along with Jesse Johnson, Steve Price, and me in the U.S.<br />
<br />
Glimmer has been configured for coupled CCSM simulations with a dynamic Greenland ice sheet. Since there are some added difficulties in coupling a marine-based ice sheet to a GCM, we are not yet able to run coupled simulations with a dynamic Antarctic ice sheet. Ultimately, however, we plan to simulate both Greenland and Antarctica, as well as paleo ice sheets. <br />
My focus in the rest of this document will be not on Glimmer-CISM, but on changes made in CCSM to compute the surface mass balance of ice sheets.<br />
<br />
===Simulating the surface mass balance of ice sheets===<br />
<br />
We can think of Glimmer as having two main physical components:<br />
*a surface mass balance (SMB) scheme, which computes accumulation and ablation at the upper ice/snow surface. Ablation is defined as the amount of water that runs off to the ocean. Not all the surface meltwater runs off; some of the melt percolates into the snow and refreezes.<br />
*a dynamic component, which computes ice velocities and the resulting evolution of the ice-sheet geometry and temperature fields. <br />
<br />
The dynamic component of Glimmer is called GLIDE. The surface mass balance calculations are part of GLINT, the Glimmer interface. GLINT receives the required fields from a climate model or meteorological data set, accumulates and averages the data over a specified time period, and downscales the data to the finer Glimmer grid. (The land and atmosphere models typically run at a grid resolution of ~100 km, whereas ice sheet models require a grid resolution of ~10 km.) The downscaled data is used to compute the surface mass balance, which is passed to GLIDE.<br />
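The accumulate-and-average step that GLINT performs can be sketched as follows. This is illustrative Python, not GLINT's actual interface; the class and method names are hypothetical.<br />

```python
# Sketch of GLINT-style temporal averaging: accumulate a field from
# the climate model at each coupling step, then average over the
# interval before downscaling to the finer ice sheet grid.

class FieldAccumulator:
    def __init__(self):
        self.total = 0.0
        self.nsteps = 0

    def add(self, value):
        """Accumulate one coupling-interval sample of the field."""
        self.total += value
        self.nsteps += 1

    def average(self):
        """Mean over the accumulation interval; resets the buffer."""
        mean = self.total / self.nsteps
        self.total, self.nsteps = 0.0, 0
        return mean

acc = FieldAccumulator()
for smb_sample in [1.0, 2.0, 3.0, 2.0]:  # e.g. daily SMB values
    acc.add(smb_sample)
interval_mean = acc.average()
```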
<br />
There are two broad classes of surface mass balance schemes:<br />
*positive-degree-day (PDD) schemes, in which the melting is parameterized as a linear function of the number of degree-days above the freezing temperature. The proportionality factor is empirical and may vary in time and space. This factor is larger for bare ice than for snow, since ice has a lower albedo. <br />
*surface energy-balance (SEB) schemes, in which the melting depends on the sum of the radiative, turbulent, and conductive fluxes reaching the surface. SEB schemes are more physically realistic than PDD schemes, but also are more expensive and complex. <br />
<br />
Glimmer has a PDD scheme based on that of Huybrechts et al. (1991) and others. (See the Glimmer documentation for details.) PDD schemes are not ideal for climate change studies, because empirical degree-day factors could change in a warming climate. Comparisons of PDD and energy-balance schemes (e.g., van de Wal 1996; Bougamont et al. 2007) suggest that PDD schemes may be overly sensitive to warming temperatures. In fact, Bougamont et al. found that a PDD scheme generates runoff rates nearly twice as large as those computed by an SEB scheme. If we want a credible climate change simulation for the Greenland ice sheet, we should use an energy-balance scheme.<br />
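A minimal sketch of the PDD idea described above: melt is proportional to the sum of positive degree-days, with a larger factor for bare ice than for snow. The factors used here are of the order commonly quoted in the literature; they are illustrative, not Glimmer's configuration.<br />

```python
def pdd_melt(daily_temps_c, ddf_mm_per_degday):
    """Melt (mm w.e.) = degree-day factor * sum of positive degree-days,
    where one positive degree-day is one day at +1 deg C."""
    pdd = sum(max(t, 0.0) for t in daily_temps_c)  # deg C * day
    return ddf_mm_per_degday * pdd

# A week of summer temperatures (deg C). Factors of ~3 mm w.e. per
# degree-day for snow and ~8 for ice are typical literature values.
temps = [-2.0, 1.0, 3.0, 5.0, 0.0, 2.0, 4.0]
snow_melt = pdd_melt(temps, 3.0)
ice_melt = pdd_melt(temps, 8.0)   # more melt: ice has lower albedo
```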
<br />
Glimmer does not currently have an SEB scheme, but might include one in the future. If such a scheme were available, one approach to computing surface melting would be as follows: The incoming shortwave and longwave fluxes, temperature, and humidity would be passed from the CCSM atmosphere to GLINT via the coupler. These fields would be downscaled to the ice sheet grid, using an assumed lapse rate to interpolate temperatures to the appropriate elevations on the ice sheet grid. The surface mass balance would then be computed from the downscaled atmosphere fields combined with a detailed snow model.<br />
<br />
This approach is sensible if one is working with meteorological data, e.g. from atmospheric reanalysis data. In CCSM, however, the preferred approach is to compute the surface mass balance for ice sheets in CLM, the CCSM land component, on the coarse-resolution land grid. To improve accuracy on the coarse grid, the mass balance is computed for ~10 elevation classes in each gridcell. The mass balance for each elevation class is accumulated and averaged over a coupling interval (typically ~1 day), then passed to GLINT via the coupler. GLINT accumulates and averages the mass balance over a longer interval (typically 1 year) and downscales it to the ice sheet grid. The ice sheet evolves dynamically, then returns the new ice geometry to CLM via the coupler.<br />
<br />
====Motivation for a surface mass balance scheme in CLM====<br />
There are several advantages to computing the surface mass balance in CLM as opposed to GLINT: <br />
#It is much cheaper to compute the SMB in CLM for ~10 elevation classes than in GLINT/Glimmer. For example, suppose we are running CLM at a resolution of ~50 km and Glimmer at ~5 km. Greenland has dimensions of about 1000 x 2000 km. For CLM we would have 20 x 40 x 10 = 8,000 columns, whereas for Glimmer we would have 200 x 400 = 80,000 columns. Jeff Ridley of the Hadley Centre has found that running an SMB model on the ice sheet grid is as expensive as the rest of the GCM combined. Ghan et al. (2006) have shown that elevation classes give results comparable to those obtained at much greater expense on a finer grid.<br />
#We take advantage of the fairly sophisticated snow physics parameterization already in CLM instead of implementing a separate scheme for Glimmer. When the CLM scheme is improved, the improvements are applied to ice sheets automatically.<br />
#The atmosphere model can respond during runtime to ice-sheet surface changes. As shown by Pritchard et al. (2008), runtime albedo feedback from the ice sheet is critical for simulating ice-sheet retreat on paleoclimate time scales. Without this feedback, the atmosphere warms much less, and the retreat is delayed.<br />
#Mass is conserved, in that the rate of surface ice growth or melting computed in CLM is equal to the rate seen by the dynamic ice sheet model.<br />
#The improved surface mass balance is available in CLM for all glaciated grid cells (e.g., in the Alps, Rockies, Andes, and Himalayas), not just those which are part of ice sheets.<br />
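The cost comparison in item 1 is easy to check with the numbers quoted there (illustrative arithmetic, not model code):<br />

```python
# Column counts for a 1000 x 2000 km Greenland domain, as in item 1.
clm_cols = (1000 // 50) * (2000 // 50) * 10   # 50 km grid, 10 classes
glim_cols = (1000 // 5) * (2000 // 5)         # 5 km ice sheet grid
ratio = glim_cols / clm_cols                  # 10x more columns
```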
<br />
====Details of the new SMB scheme====<br />
As it happens, CLM has a hierarchical data structure that makes it relatively straightforward to model glaciated regions with multiple elevation classes. In the standard version of CLM, each gridcell is partitioned into one or more of five landunit types: vegetated, lake, wetland, urban, and glacier. Each landunit consists of a user-defined number of columns, and each column has its own vertical profile of temperature and water content.<br />
<br />
I created a sixth landunit, denoted glacier_mec, where “mec” stands for “multiple elevation classes.” Glacier_mec landunits are similar to glacier landunits, except that each elevation class is represented by a separate column. By default there are 10 elevation classes in each glaciated gridcell. The upper elevation bounds (in meters) of these classes are 200, 400, 700, 1000, 1300, 1600, 2000, 2500, 3000, and 10000. <br />
<br />
The atmospheric surface temperature and specific humidity are downscaled from the mean gridcell elevation to the column elevation using a user-specified lapse rate (typically 6 deg/km). At a given time, the lower-elevation columns can undergo surface melting while columns at other elevations (including the mean) remain frozen. This results in a more accurate simulation of summer melting, which is a highly nonlinear function of air temperature. The precipitation rate and radiative fluxes are not currently downscaled, but they could be, if care were taken to preserve the cell-integrated values. At some point we would like to use a more sophisticated orographic downscaling scheme, but this would require significant recoding.<br />
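The elevation-class binning and lapse-rate downscaling just described can be sketched as follows. The class bounds and the 6 deg/km lapse rate come from the text; the function names are illustrative, not CLM's.<br />

```python
import bisect

# Upper elevation bounds (m) of the 10 glacier_mec classes.
CLASS_BOUNDS = [200, 400, 700, 1000, 1300, 1600, 2000, 2500, 3000, 10000]

def elevation_class(z_m):
    """Index (0-9) of the first class whose upper bound exceeds z."""
    return bisect.bisect_left(CLASS_BOUNDS, z_m)

def downscale_temp(t_grid_c, z_grid_m, z_col_m, lapse_c_per_km=6.0):
    """Shift the gridcell-mean temperature to the column elevation
    using a fixed lapse rate (warmer below the mean, colder above)."""
    return t_grid_c - lapse_c_per_km * (z_col_m - z_grid_m) / 1000.0

# Gridcell mean elevation 1500 m at -1 C. A 500 m column lands in
# class 2 and is 6 C warmer, so it can melt while the gridcell-mean
# temperature stays below freezing.
t_low = downscale_temp(-1.0, 1500.0, 500.0)
cls = elevation_class(500.0)
```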
<br />
Standard CLM has an unrealistic treatment of accumulation and melting on ice sheets. The snow depth is limited to a prescribed depth of 1 m liquid water equivalent, with any additional snow assumed to run off instantaneously to the ocean. Snow melting is treated in a fairly realistic fashion, with meltwater percolating downward through snow layers as long as the snow is unsaturated. Once the underlying snow is saturated, any additional meltwater runs off. When glacier ice melts, however, the meltwater is assumed to remain in place until it refreezes. In warm parts of the ice sheet, the meltwater does not refreeze, but stays in place indefinitely. <br />
<br />
In the modified CLM with glacier_mec columns, snow in excess of the prescribed maximum depth is converted to ice, contributing a positive surface mass balance to the ice sheet model. When ice melts, the meltwater is assumed to run off to the ocean, contributing a negative surface mass balance. The net SMB associated with ice formation (by conversion from snow) and melting/runoff is computed for each column, averaged over the coupling interval, and sent to the coupler. This quantity, denoted ''qice'', is then passed to GLINT, along with the surface elevation ''topo'' in each column. GLINT downscales ''qice'' to the ice sheet grid, interpolating the values in adjacent elevation classes. The units of ''qice'' are mm/s, or equivalently kg/m<sup>2</sup>/s. The downscaled quantities can be multiplied by a normalization factor to conserve mass exactly. <br />
<br />
Note that the surface mass balance typically is defined as the total accumulation of ice and snow, minus the total ablation. The ''qice'' flux passed to GLINT is the mass balance for ice alone, not snow. We can think of CLM as owning the snow, whereas Glimmer owns the underlying ice; hence Glimmer only needs to be told when the ice volume changes. The snow depth can fluctuate between 0 and 1 m LWE without Glimmer needing to know about it.<br />
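A sketch of the ''qice'' sign convention described above: excess snow converted to ice counts as positive SMB, and ice melt that runs off counts as negative. The 1 m LWE cap is from the text; the function itself is illustrative, not CLM code.<br />

```python
SNOW_MAX_MM = 1000.0   # prescribed maximum snow depth, 1 m LWE

def qice_step(snow_mm, new_snow_mm, ice_melt_mm):
    """Return (new snow depth, SMB contribution in mm w.e.).
    Snow above the cap becomes ice (+SMB); ice melt runs off (-SMB).
    Snow fluctuating below the cap never appears in qice."""
    snow_mm += new_snow_mm
    to_ice = max(snow_mm - SNOW_MAX_MM, 0.0)   # excess snow -> ice
    snow_mm -= to_ice
    smb = to_ice - ice_melt_mm                 # net ice mass change
    return snow_mm, smb

# 900 mm of snow plus 300 mm accumulation: 200 mm becomes ice;
# 50 mm of ice melt runs off, for a net SMB of +150 mm.
snow, smb = qice_step(900.0, 300.0, 50.0)
```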
<br />
In addition to ''qice'' and ''topo'', the ground surface temperature ''tsfc'' is passed from CLM to GLINT via the coupler. This temperature serves as the upper boundary condition for Glimmer’s temperature calculation.<br />
<br />
Given the SMB from the land model, Glimmer executes one or more dynamic time steps and returns the new ice sheet geometry to CLM via the coupler. The fields passed to the coupler are the ice sheet fractional area, surface elevation, and thickness, along with the conductive heat flux at the top surface and the runoff flux from basal melting and iceberg calving. GLINT upscales these fields from the ice sheet grid to the coarser land grid and bins them into elevation classes before sending them to the coupler. <br />
<br />
The current coupling is one-way only. That is, CLM sends the SMB and surface temperature to GLINT but does not do anything with the fields that are returned. This is permissible for century-scale runs in which the geometry changes are modest. In order to do longer runs with large geometry changes, we need to enable two-way coupling. That work is in progress.<br />
<br />
The purpose of the surface mass balance scheme is to provide Glimmer with a realistic upper surface boundary condition in past, present, and future climates. To the extent the present-day SMB is inaccurate (because of atmospheric biases, incomplete land model physics, or downscaling errors), the present-day ice sheet will have the wrong geometry, even if the ice sheet model is perfect. The greater the inaccuracy, the less confidence we will have in future projections. <br />
<br />
So what is the quality of the results from the SMB scheme? Only recently have we had a working ice-sheet SMB scheme in CCSM4, so we are just beginning to find out. We will explore that question in the lab exercise.<br />
<br />
===Future ice sheet modeling===<br />
<br />
We have a simple working model of ice sheets in CCSM, but there is still a great deal of work to do. Here are a few of the projects under way:<br />
<br />
*Glimmer-CISM was recently moved to a Subversion repository hosted by the BerliOS Open Source Mediator, as described by Magnus Hagdorn in his lecture. (See http://developer.berlios.de/projects/glimmer-cism/.) Model development is likely to proceed quickly during the next few years.<br />
*The LANL ice sheet modeling group has received funding to develop a parallel version of Glimmer using state-of-the-art solver packages (e.g., PETSc and Trilinos) to efficiently solve the higher-order flow equations.<br />
*DOE recently initiated a three-year project on computational advances in ice sheet modeling. Several groups have been funded to develop efficient, scalable solvers for higher-order approximations as well as the full-Stokes equations on unstructured and/or adaptive grids.<br />
*We will attempt to couple WRF, a regional atmosphere model, to CLM and Glimmer in the CCSM framework. WRF can be run over Greenland or Antarctica with horizontal grid resolution of ~25 km, providing more realistic forcing fields than we can get from CAM at ~100 km. <br />
*Several researchers, including a LANL group using Glimmer-CISM, are developing methods for coupling ice sheet models to ocean circulation models. The major challenges include (1) modifying the ocean upper boundary condition so that water can circulate beneath ice shelves, (2) changing the ocean topography as ice shelves advance and retreat, and (3) simulating realistic migration of the grounding line, which will require very fine grid resolution and/or improved numerical methods.<br />
*A suite of climate change experiments using CCSM with dynamic ice sheets will be run during the next two years in preparation for IPCC AR5. Initially we will use the shallow-ice version of Glimmer, but we will transition to a higher-order code when an efficient parallel version is available.<br />
<br />
These are just a few examples; many other projects are in the works. The next several years will be a time of rapid transition. Ice sheet models have long been less sophisticated than other climate model components, but Glimmer-CISM will likely be among the first climate model components to incorporate state-of-the-art meshing tools and scalable solvers. Atmosphere and ocean modelers may then look to ice sheet modelers for guidance instead of the other way around.<br />
<br />
===References===<br />
<br />
*Bougamont, M., Bamber, J.L., Ridley, J.K., Gladstone, R.M., Greuell, W., Hanna, E., Payne, A.J. and Rutt, I. 2007. Impact of model physics on estimating the surface mass balance of the Greenland ice sheet. Geophysical Research Letters 34: 10.1029/2007GL030700.<br />
*Ghan, S.J., Shippert, T. and Fox, J., 2006. Physically based global downscaling: Regional evaluation. J. Climate 19: 429-445.<br />
*Huybrechts, P., Letreguilly, A. and Reeh, N., 1991. The Greenland ice sheet and greenhouse warming. Palaeogeogr., Palaeoclimatol., Palaeoecol. (Global Planet. Change Sect.) 89: 399-412.<br />
*Oppenheimer, M., O'Neill, B.C., Webster, M., and Agrawala, S., 2007. Climate change: The limits of consensus. Science 317 (5844): 1505.<br />
*Pritchard, M. S., A. B. G. Bush, and S. J. Marshall, 2008. Neglecting ice-atmosphere interactions underestimates ice sheet melt in millennial-scale deglaciation simulations. Geophys. Res. Lett. 35, L01503, doi:10.1029/2007GL031738.<br />
*van de Wal, R.S.W. 1996. Mass-balance modeling of the Greenland ice sheet: A comparison of an energy-balance and a degree-day model. Annals of Glaciology 23: 36-45.<br />
<br />
<br />
<br />
==Lab exercise: Running CCSM==<br />
<br />
<br />
===Checkout, create case, configure, compile, and run the code===<br />
<br />
====Log onto bluefire====<br />
<br />
Open a terminal window (Accessories -> Terminal)<br />
<br />
> ssh -X -l ''logon_name'' bluefire.ucar.edu<br />
<br />
When prompted for a Token Response, enter your Cryptocard password.<br />
<br />
When asked for a terminal type, you can simply hit ''Return''.<br />
<br />
Hopefully you're now on bluefire. To see the contents of your home directory:<br />
<br />
> ls -a<br />
<br />
====Check out the code====<br />
<br />
CCSM code is maintained on a Subversion repository. For CCSM as a whole and for each component, there is a main trunk along with many development branches. We will check out code from a branch with up-to-date versions of Glimmer and the land component, CLM, along with compatible versions of the other model components. This combination of CCSM components is identified by a unique branch tag.<br />
<br />
To get the appropriate tagged version of CCSM from the Subversion repository:<br />
<br />
> svn co https://svn-ccsm-models.cgd.ucar.edu/clm2/branch_tags/glcec_tags/glcec02_clm3_6_16/<br />
<br />
For more info on how to use Subversion, see http://subversion.tigris.org<br />
<br />
The first time you do this, you'll need to enter your SVN password. (Summer school students may not have been given passwords. In that case we can direct you to a tarball instead.)<br />
<br />
The workaround for not having a password to the SVN server is to copy the tarball:<br />
<br />
> cp /blhome/lipscomb/summer_school_directory/glcec02_clm3_6_16.tar .<br />
<br />
and then untar it:<br />
<br />
> tar xvf glcec02_clm3_6_16.tar<br />
<br />
You will also need to make the files in this archive writable:<br />
<br />
> chmod -R +w glcec02_clm3_6_16/<br />
<br />
====Create a case====<br />
<br />
> ls<br />
<br />
> cd ''tag_name''/scripts<br />
<br />
For information about how to create a case, see here:<br />
<br />
> less README_quickstart<br />
<br />
(NOTE: you don't need to follow the README_quickstart instructions; follow the wiki instructions below instead.)<br />
<br />
The case we will run is created as follows:<br />
<br />
> create_newcase -case ''case_name'' -res 1.9x2.5_gx1v5 -compset IG -mach bluefire -skip_rundb<br />
<br />
(NOTE: in the "1.9x2.5_gx1v5" portion of the above, the "gx1v5" contains a number "one", not a small letter "L")<br />
<br />
where <br />
<br />
*''case_name'' is something you make up--long enough to be descriptive but not too long to type repeatedly.<br />
<br />
*''res'' = resolution <br />
**1.9x2.5 = 1.9x2.5 degree grid for atmosphere, land<br />
**0.9x1.25 = 0.9x1.25 degree grid for atmosphere, land<br />
**T31 = spectral T31 grid for atmosphere, land (good for debugging)<br />
**gx1v5 = 1 degree grid, version 5 for ocean, sea ice <br />
**gx3v5 = 3 degree grid, version 5 for ocean, sea ice (good for debugging)<br />
<br />
*''compset'' = set of active physical components<br />
**A: all data models; no active physical components<br />
**AG: active ice sheet<br />
**I: active land<br />
**IG: active land, ice sheet<br />
**B: active land, atmosphere, ocean, sea ice<br />
**BG: active land, atmosphere, ocean, sea ice, ice sheet<br />
<br />
*''mach'' = name of computer<br />
<br />
*''skip_rundb'' means that this is just a practice case that will not be documented in the run database.<br />
<br />
For the IG case, you will have an active land component (CLM) and ice sheet component (Glimmer). The other components will be data models. The atmospheric data is from an NCEP reanalysis at T62 resolution (~1.5 deg).<br />
<br />
====Configure the code====<br />
<br />
> cd ''case_name''<br />
<br />
Edit env_conf.xml and env_mach_pes if appropriate. (We won't need to do this for our example.)<br />
<br />
> configure -case<br />
<br />
Tour the code: <br />
<br />
> cd ~/''tag_name''/models<br />
<br />
> ls<br />
<br />
Explore from there:<br />
*atm = atmosphere<br />
*ocn = ocean<br />
*lnd = land<br />
*ice = sea ice<br />
*glc = ice sheet (Glimmer-CISM)<br />
*drv = driver (includes coupler modules)<br />
*csm_share = shared code<br />
*utils = utilities<br />
<br />
====Build the code====<br />
<br />
Look at your environment variables:<br />
<br />
> env<br />
<br />
TMPDIR should be set to /ptmp/$LOGNAME.<br />
This is scratch space where the code is built and output files are written.<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
<br />
Edit env_build.xml if appropriate. (We won't need to do this.)<br />
<br />
To build the code:<br />
<br />
> ''case_name''.bluefire.build<br />
<br />
This will take a few minutes the first time. If you rebuild later after making minor changes, it will go much faster. <br />
<br />
Hopefully the code will build. If not, you will get an error message pointing you to a build log file.<br />
<br />
To see where the code has been built:<br />
<br />
> cd /ptmp/''logon_name''/''case_name''<br />
<br />
====Run the code====<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
<br />
Edit env_run.xml if appropriate (e.g., STOP_N and STOP_OPTION to set the length of the run).<br />
<br />
*By default, STOP_OPTION = ''ndays'' and STOP_N = 5. This means the code will run for 5 days--just long enough to make sure nothing is seriously broken. <br />
<br />
Edit ''case_name''.bluefire.run as appropriate<br />
<br />
BSUB commands:<br />
*-n {{pad|4em}} Number of processors (do not change this)<br />
*-q {{pad|4em}} Run queue (premium is faster than regular but costs more)<br />
*-W {{pad|4em}} Run time requested (shorter => job will start sooner)<br />
*-P {{pad|4em}} Project code<br />
<br />
For a 5-day run, we can set the run time to a small value (e.g. 0:05, or 5 minutes) so that the job runs quickly.<br />
<br />
Set the queue to ''premium''.<br />
<br />
Our project code is 38481000. If this code is not already in the run script, you'll need to enter it manually.<br />
<br />
To submit the job:<br />
<br />
> bsub < ''case_name''.bluefire.run<br />
<br />
To see whether the job is pending or running:<br />
<br />
> bjobs <br />
<br />
'No unfinished job found' means the job has finished.<br />
<br />
If all goes well, the job will start and finish in a few minutes, and you will have some log files. First take a look at the poe.stdout file:<br />
<br />
> less poe.stdout.''6digits''<br />
<br />
The end of the file should say 'normal exit'.<br />
<br />
Now let's check the log files:<br />
<br />
> cd logs<br />
<br />
There should be several files with the suffix ''gz'', meaning that the files have been compressed, or zipped. Unzip the ''lnd.log'' file and take a look:<br />
<br />
> gunzip lnd.log.''timestamp''.gz<br />
> less lnd.log.''timestamp''<br />
<br />
For an IG case, the coupler, land, atmosphere, and ice sheet components (cpl, lnd, atm, and glc, respectively) have log files with diagnostic output. The logfile with the ''ccsm'' prefix combines diagnostics from each component.<br />
<br />
====Modify the code====<br />
<br />
Now that we know the basics, let's try a 10-year simulation. First, move back to your case directory:<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
<br />
In env_run.xml, set STOP_OPTION = nyear and STOP_N = 10. This will take a couple of hours to run, so we should change the run time estimate in ''case_name''.bluefire.run (flag "-W"), from 0:05 to 2:00.<br />
<br />
The code you checked out from the repository has the standard CLM values for bare ice albedo, which are too high. You should replace these with more realistic values. Edit this file:<br />
<br />
> ~/''tag_name''/models/lnd/clm/src/main/clm_varcon.F90<br />
<br />
Look for these lines:<br />
<br />
data (albice(i),i=1,numrad) /0.80_r8, 0.55_r8/<br />
!! data (albice(i),i=1,numrad) /0.50_r8, 0.50_r8/<br />
<br />
Comment out the first line and uncomment the second line.<br />
<br />
Then return to your case directory and rebuild the code:<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
> ''case_name''.bluefire.build<br />
<br />
Now we'll test the sensitivity of the ice sheet surface mass balance to changes in physical parameters and the input forcing. Each group will do its own run. When you're ready to do this, please let one of us know, and we'll assign an experiment written on the board. Here are some suggestions:<br />
<br />
#Run with a different value of the bare ice albedo, ''albice''. This variable is set in ~/''tag_name''/models/lnd/clm/src/main/clm_varcon.F90. Copy this file to ~/''tag_name''/scripts/''case_name''/SourceMods/src.clm. Edit the copy in the SourceMods directory; it automatically takes precedence over the original file when the code is built. Using the SourceMods directories is a good way to keep your changes separate from the base code.<br />
#Run with a different value of the surface temperature lapse rate, ''lapse_glcmec''. This variable is also set in clm_varcon.F90.<br />
#Impose a uniform temperature perturbation. You can do this by modifying ~/''tag_name''/models/lnd/clm/src/biogeophys/DriverInitMod.F90, where the temperature is downscaled. Copy the file to the SourceMods directory and edit it there. Find this line of code:<br />
<br />
tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) ! sfc temp for column<br />
<br />
Change it to something like this:<br />
<br />
tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) + 1.0_r8 ! sfc temp for column, plus one degree<br />
<br />
You now have a crude version of a global warming simulation.<br />
<br />
Once you've made your code changes in SourceMods, run the build script again:<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
> ''case_name''.bluefire.build<br />
<br />
If you get an error message, then edit the module appropriately and try again. If the code builds, then you're ready to run:<br />
<br />
> bsub < ''case_name''.bluefire.run<br />
<br />
We'll come back later to look at some results.<br />
<br />
===View the results===<br />
<br />
To see output from your run:<br />
<br />
> cd /ptmp/''logon_name''/archive/''case_name''<br />
> ls<br />
> cd lnd<br />
> ls<br />
> cd hist<br />
> ls<br />
<br />
You should have a history file for each month of your run.<br />
<br />
Let's say we're interested in the surface mass balance of glaciated gridcells from year 10 of the run, averaged over 12 months.<br />
<br />
We can post-process the data using NCO, a suite of programs for useful manipulation of netCDF files. For details, see http://nco.sourceforge.net/.<br />
<br />
To average all the history variables over 12 months, use the ''ncra'' command:<br />
<br />
> ncra -n 12,2,1 ''infile.nc'' ''outfile.nc''<br />
<br />
The ''-n'' option tells NCO to average over files that have the same name as infile.nc, apart from a numerical file identifier.<br />
<br />
* The '12' indicates that there are 12 files to average.<br />
* The '2' says that the identifier has 2 digits (01, 02, ..., 12)<br />
* The '1' says that the identifier changes with a stride of 1.<br />
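The way ''-n 12,2,1'' expands into a list of input files can be sketched in Python (a hypothetical helper for illustration; NCO constructs this list internally):<br />

```python
# Sketch of how NCO's -n option expands a file sequence.
# expand_ncra_files is a made-up name, not part of NCO.
def expand_ncra_files(first_file, count, digits, stride):
    """Return the file names implied by ncra -n count,digits,stride."""
    # Split the name around the numeric identifier, e.g. '...0010-01.nc'
    stem, suffix = first_file.rsplit('.', 1)        # '...0010-01', 'nc'
    prefix, ident = stem[:-digits], stem[-digits:]  # '...0010-', '01'
    start = int(ident)
    return ['%s%0*d.%s' % (prefix, digits, start + i * stride, suffix)
            for i in range(count)]

files = expand_ncra_files('case.clm2.h0.0010-01.nc', 12, 2, 1)
# files runs from 'case.clm2.h0.0010-01.nc' to 'case.clm2.h0.0010-12.nc'
```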
<br />
The outfile name is arbitrary. In our case, we can type:<br />
<br />
> ncra -n 12,2,1 ''case_name''.clm2.h0.0010-01.nc ''case_name''.clm2.h0.0010-avg.nc<br />
> ls<br />
<br />
To view the contents of the new file:<br />
<br />
> ncdump -h ''case_name''.clm2.h0.0010-avg.nc | less<br />
<br />
Hit the space bar to scroll through the output.<br />
<br />
Note the following:<br />
<br />
*The grid dimensions are 96 x 144.<br />
*There are some time-independent variables (e.g., area, topo) with lower-case names.<br />
*There are many time-dependent variables (including QICE, the surface mass balance) with names in all caps.<br />
<br />
Now we can plot the data.<br />
<br />
My favorite netCDF viewer is ferret, but ferret is not installed on bluefire.<br />
<br />
Let's use ncview instead:<br />
<br />
> ncview ''case_name''.clm2.h0.0010-avg.nc<br />
<br />
The ncview GUI should pop up. Click on the ''2d vars'' button.<br />
<br />
If the ncview GUI does not pop up, that's probably because your path isn't set up to find it. Try this instead:<br />
<br />
> /contrib/bin/ncview ''case_name''.clm2.h0.0010-avg.nc<br />
<br />
As a shortcut, you can add an alias in your .cshrc file in your home directory.<br />
<br />
alias ncview '/contrib/bin/ncview'<br />
<br />
After you save the new version of .cshrc, you will need to type this:<br />
<br />
> source .cshrc<br />
<br />
Then the alias should work.<br />
<br />
Unfortunately, there are so many variables that we can't get to QICE (one of the limitations of ncview).<br />
<br />
Let's make a file that doesn't have so many variables:<br />
<br />
> ncra -v QICE -n 12,2,1 ''case_name''.clm2.h0.0010-01.nc ''case_name''.clm2.h0.0010-QICE.nc<br />
<br />
Here we have used the ''-v'' option to specify the variables to average over.<br />
<br />
The resulting file has just one time-dependent variable QICE, a function of lat, lon and time:<br />
<br />
> ncdump -h ''case_name''.clm2.h0.0010-QICE.nc<br />
<br />
Let's try ncview again:<br />
<br />
> ncview ''case_name''.clm2.h0.0010-QICE.nc<br />
<br />
You should see a global plot of QICE on the global land grid.<br />
<br />
To magnify the plot, left-click as many times as desired on the button that says ''M X3''. To shrink the plot, right-click on this button.<br />
<br />
We can see QICE for glaciated cells not only in Greenland and Antarctica, but also in the<br />
Himalayas, Canadian archipelago, Alaskan coastal range, and Patagonia (and New Zealand!).<br />
<br />
The units of QICE are mm/s (or equivalently, kg/m<sup>2</sup>/s). If you prefer m/yr, you can change the units in the file using the ''ncflint'' command:<br />
<br />
> ncflint -w 3.16e4,0 ''case_name''.clm2.h0.0010-QICE.nc ''case_name''.clm2.h0.0010-QICE.nc ''case_name''.clm2.h0.0010-QICEmyr.nc<br />
<br />
where the factor 3.16e4 converts from mm/s to m/yr.<br />
<br />
(This syntax can be interpreted as follows. The form of the command is<br />
<br />
> ncflint -w weight1,weight2 infile1.nc infile2.nc outfile.nc<br />
<br />
with the result that variables in the output file have values outfile_var = weight1*infile1_var + weight2*infile2_var. If weight2 = 0, then infile2 is irrelevant and the effect is simply to multiply variables in infile1 by a constant. Perhaps there is a simpler way to do this. In ferret it is easy to multiply data by a constant without changing the netCDF file.) <br />
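The element-wise arithmetic can be mimicked in a few lines of Python (a sketch with illustrative numbers, not actual model output):<br />

```python
# What ncflint -w w1,w2 does to each variable, element by element:
#   out = w1 * in1 + w2 * in2
# With w2 = 0, infile2 is irrelevant and in1 is simply scaled by w1.
def ncflint_combine(w1, w2, var1, var2):
    return [w1 * a + w2 * b for a, b in zip(var1, var2)]

SEC_PER_YR = 3.156e7                # seconds in a year
MM_S_TO_M_YR = SEC_PER_YR / 1.0e3   # mm/s -> m/yr, i.e. ~3.16e4

qice_mm_s = [1.0e-5, -2.0e-5]       # sample QICE values in mm/s
qice_m_yr = ncflint_combine(MM_S_TO_M_YR, 0.0, qice_mm_s, qice_mm_s)
```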
<br />
You may want to look at other data fields in the monthly mean files and in the yearly average file.<br />
<br />
As a final exercise, let's compute the surface mass balance integrated over the Greenland ice sheet. NCO has a command for this too:<br />
<br />
> ncwa -N -v QICE -a lat,lon -B 'gris_mask > 0.5' -w area ''case_name''.clm2.h0.0010-avg.nc ''outfile.nc'' <br />
<br />
where<br />
*''-N'' says to compute the integrated total as opposed to the average.<br />
*''-v'' tells which variable(s) to sum and/or average over<br />
*''-a'' tells which dimensions to sum over<br />
*''-B'' says to sum only over cells that meet a masking condition (in our case, Greenland cells have gris_mask = 1.0, and all other cells have gris_mask = 0.0)<br />
*''-w'' says to weight by the variable that follows (grid cell area in this case)<br />
<br />
Let's look at the output:<br />
<br />
> ncdump ''outfile.nc''<br />
<br />
We're interested in the area-integrated value of QICE. Note that area has units of km<sup>2</sup>, whereas QICE has units of mm/s. To convert to km<sup>3</sup>/yr, multiply the result by 3.16e7 (the number of seconds in a year) and divide by 1e6 (the number of mm in a km). Recall that 1 km<sup>3</sup> (liquid water equivalent) of ice weighs 1 Gigaton.<br />
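The unit conversion just described can be checked with a short script (the integrated value used here is illustrative, not model output):<br />

```python
# Convert an area-integrated QICE from (mm/s * km^2) to km^3/yr.
# 1 mm * 1 km^2 = 1e-6 km^3, and a year is ~3.16e7 seconds, so the
# net conversion factor is 3.16e7 / 1e6 = 31.6.
SEC_PER_YR = 3.16e7   # seconds in a year
MM_PER_KM = 1.0e6     # millimeters in a kilometer

def integrated_smb_km3_yr(qice_area_sum):
    """qice_area_sum: sum over cells of QICE [mm/s] times area [km^2]."""
    return qice_area_sum * SEC_PER_YR / MM_PER_KM

# An integrated value of 10 mm/s * km^2 corresponds to 316 km^3/yr.
print(integrated_smb_km3_yr(10.0))
```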
<br />
For the present-day (or at least preindustrial) climate of Greenland, the net surface mass balance is ~300 to 400 km<sup>3</sup>/y. How do your results compare?<br />
<br />
===Simulation results===<br />
<br />
*Control experiment (albice = 0.5, lapse = 0.006): QICE = 320 km<sup>3</sup>/yr<br />
<br />
*Warming experiments<br />
**+2 degrees: GIS net SMB = +320 km<sup>3</sup>/yr (actually ran with no T change :( )<br />
**+16 degrees: QICE = ? km<sup>3</sup>/yr<br />
<br />
<br />
[[CCSM_greenland_massbal | Link to table]]</div>
<hr />
<div>==Back to [[Summer Modeling School]]==<br />
<br />
<br />
<br />
<br />
==Ice sheets in the Community Climate System Model==<br />
<br />
===A brief introduction to CCSM===<br />
<br />
<br />
[[Image:ccsm.jpg|thumb|right|300 px|[[Media:Lipscomb_Talk.pdf]]<br>Ice sheets in the Community Climate System Model]]<br />
<br />
The Community Climate System Model (CCSM; http://www.ccsm.ucar.edu/) is one of three U.S. global climate models (GCMs) featured prominently in the assessment reports of the Intergovernmental Panel on Climate Change (IPCC). The others are the NASA GISS model and the NOAA GFDL model. (GISS is the Goddard Institute for Space Studies in New York City, and GFDL is the Geophysical Fluid Dynamics Laboratory in Princeton, N.J.) The GISS and GFDL models have been developed primarily at those institutions, but CCSM, as the name suggests, is a broad community effort. Although model development is centered at the National Center for Atmospheric Research (NCAR) in Boulder, there have been substantial contributions from scientists at several national laboratories and numerous universities, with support from the Department of Energy (DOE) and the National Science Foundation (NSF). <br />
<br />
CCSM has a hub-and-spoke design. Recent model versions have had four physical components—atmosphere, land, ocean, and sea ice—that communicate through a coupler. The current CCSM components are the Community Atmosphere Model (CAM), the Community Land Model (CLM), the Parallel Ocean Program (POP), and the Community Ice Code (CICE). POP and CICE were developed primarily by scientists in the Climate, Ocean and Sea Ice Modeling (COSIM) group at Los Alamos National Laboratory (LANL), where I work. <br />
<br />
I have recently added the Glimmer ice sheet model as a fifth physical component, but it is not yet part of the officially released code. I’ll say more about CCSM and ice sheets below.<br />
<br />
(An historical aside: Why are ocean and ice models developed at a nuclear weapons lab in the high desert of New Mexico? The short answer is that many of the computational methods and hardware used in weapons simulations are useful for climate modeling. COSIM was founded when the Cold War was winding down and a LANL scientist named Bob Malone, who had been studying nuclear winter, decided to develop a parallel ocean model.)<br />
<br />
In principle, each physical component lives on its own grid, though in practice the atmosphere and land components usually share one horizontal grid, and the ocean and sea ice components share another. CCSM is always run in parallel, on anywhere from ~10 to ~10,000 processors. The components can be run either concurrently (all at the same time, but on different sets of processors) or sequentially (one after the other, with each component using all the available processors). <br />
<br />
Each model component sends to and receives from the coupler a number of 2D fields located at the component interfaces. These fields include upwelling and downwelling shortwave and longwave radiation, air temperature, specific humidity, pressure, wind speed, ocean velocity, sea surface temperature and salinity, sea ice concentration, surface albedo, etc. The coupler can map fields from one component domain to another (e.g., from the atmosphere grid to the ocean grid) as well as merge fields from more than one component (e.g., the area-weighted albedos from the ocean and sea ice models, which are combined into a single field for the benefit of the atmosphere). Also, the coupler may be responsible for deriving fluxes (e.g., sensible and latent heat fluxes) from other fields.<br />
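As a toy illustration of the merging step (scalar values instead of gridded fields; the albedo numbers are made up):<br />

```python
# Merge the ocean and sea-ice albedos into the single field the
# atmosphere sees, weighting by the sea-ice fraction of the cell.
def merge_albedo(frac_ice, albedo_ice, albedo_ocn):
    """Area-weighted surface albedo for an ocean/sea-ice cell."""
    return frac_ice * albedo_ice + (1.0 - frac_ice) * albedo_ocn

# A half-ice-covered cell with ice albedo 0.7 and open-ocean albedo 0.06:
print(merge_albedo(0.5, 0.7, 0.06))
```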
<br />
CCSM is managed by a Scientific Steering Committee. There are twelve working groups that focus on different aspects of model development and application. The newest group is the Land Ice Working Group (LIWG), which is responsible for developing the CCSM ice sheet component and for using the model to predict sea-level rise caused by the loss of land ice. See here for details: http://www.ccsm.ucar.edu/working_groups/Land+Ice/<br />
<br />
The CCSM community meets once a year, usually in June in Breckenridge, Colorado. In addition, each working group holds a winter meeting, usually in Boulder. You are cordially invited to attend the next meeting of the LIWG, which will be held in conjunction with the CCSM Polar Climate Working Group and with the SeaRISE sea-level assessment group. Contact one of the LIWG co-chairs, Jesse Johnson or Bill Lipscomb, for details.<br />
<br />
===CCSM, the IPCC, and sea-level rise===<br />
<br />
Development of CCSM and other GCMs is driven largely by the IPCC timetable. The fourth assessment report, AR4, was released in 2007, and the next report, AR5, is scheduled for 2013. The final form of CCSM version 4, which will be used for AR5 simulations, was determined just a few weeks ago. The control climate simulations are under way, and climate change runs will begin shortly. Most of these runs will be completed by sometime next year. Scientists then have a year or so to analyze and publish results in time to be considered for AR5. <br />
<br />
The IPCC schedule is not always conducive to long-term model development. Also, there are concerns that the IPCC reports are too focused on obtaining consensus as opposed to exploring uncertainties. (See, e.g., Oppenheimer et al. 2007.) As a result, the reports may downplay the risks of potentially large and abrupt climate changes such as megadroughts, methane clathrate release, and sea-level rise. But at least for now, these assessments are the primary mechanism for communicating results to policymakers and the public. <br />
<br />
Global sea level is rising at a rate of ~2.5 to 3 mm/yr (i.e., 25 to 30 cm/century), with significant contributions from ocean thermal expansion as well as melting of mountain glaciers and ice sheets. Recent observations have established that the Greenland and West Antarctic ice sheets are losing mass at an accelerating rate. IPCC AR4 projected a 21st century sea-level rise of 18 to 59 cm under a broad range of greenhouse emissions scenarios. Notoriously, these projections specifically excluded the possibility of “rapid dynamical changes in ice flow” because “understanding of these effects is too limited to assess their likelihood or provide a best estimate or upper bound for sea level rise.”<br />
<br />
Since the release of AR4, there has been considerable pressure on the climate modeling centers and national funding agencies to do a better job at predicting ice-sheet retreat and sea-level rise. Until recently, most GCMs did not have dynamic ice sheets, because it was assumed that ice sheets would not contribute significantly to climate change or sea-level rise on time scales of decades to centuries. Now that this assumption has come under question, the modeling centers (or modelling centres, if you prefer) are scrambling to add ice sheet models. Both CCSM and the U.K. Hadley Centre model will be using Glimmer, with the close involvement of several of the summer school instructors. <br />
<br />
Two major community efforts are under way to assess the future ice-sheet contribution to sea-level rise and try to narrow the range of uncertainty. The European Union is supporting a large multinational effort called Ice2sea (http://www.ice2sea.eu/). Bob Bindschadler of NASA is leading a broad but less formal effort called SeaRISE (Sea-level Response to Ice Sheet Evolution; http://websrv.cs.umt.edu/isis/index.php/SeaRISE_Assessment).<br />
<br />
===Ice sheets in CCSM===<br />
<br />
In 2005 I submitted a proposal to incorporate an ice sheet model in CCSM. After conversations with Tony Payne, Ian Rutt, and others, I decided to work with Glimmer, which had been designed specifically for coupling to climate models. I thought the coupling could be done in a year or so, which turned out to be a serious underestimate of the project complexities (or at least an overestimate of my ability to carry out a complex project). Four years later, there is still some work to do, but we finally have a version of CCSM that is more or less ready for climate simulations with dynamic ice sheets.<br />
<br />
Ian Rutt and Magnus Hagdorn described the Glimmer code in detail during the Wednesday lectures. During the past two years, Jesse Johnson, Steve Price, and others have made great strides in developing a Community Ice Sheet Model (CISM) based on Glimmer. These developments—in particular, the implementation of a higher-order momentum balance—are described in Steve’s lecture notes and on the U. Montana ice sheet web site (http://websrv.cs.umt.edu/isis/index.php/Main_Page). The new and improved model, known as Glimmer-CISM, will be incorporated in CCSM this fall. Model development is continuing under the direction of a steering committee that includes Tony Payne, Ian Rutt, and Magnus Hagdorn in the U.K., along with Jesse Johnson, Steve Price, and me in the U.S.<br />
<br />
Glimmer has been configured for coupled CCSM simulations with a dynamic Greenland ice sheet. Since there are some added difficulties in coupling a marine-based ice sheet to a GCM, we are not yet able to run coupled simulations with a dynamic Antarctic ice sheet. Ultimately, however, we plan to simulate both Greenland and Antarctica, as well as paleo ice sheets. <br />
My focus in the rest of this document will not be on Glimmer-CISM itself, but on the changes made in CCSM to compute the surface mass balance of ice sheets.<br />
<br />
===Simulating the surface mass balance of ice sheets===<br />
<br />
We can think of Glimmer as having two main physical components:<br />
*a surface mass balance (SMB) scheme, which computes accumulation and ablation at the upper ice/snow surface. Ablation is defined as the amount of water that runs off to the ocean. Not all the surface meltwater runs off; some of the melt percolates into the snow and refreezes.<br />
*a dynamic component, which computes ice velocities and the resulting evolution of the ice-sheet geometry and temperature fields. <br />
<br />
The dynamic component of Glimmer is called GLIDE. The surface mass balance calculations are part of GLINT, the Glimmer interface. GLINT receives the required fields from a climate model or meteorological data set, accumulates and averages the data over a specified time period, and downscales the data to the finer Glimmer grid. (The land and atmosphere models typically run at a grid resolution of ~100 km, whereas ice sheet models require a grid resolution of ~10 km.) The downscaled data is used to compute the surface mass balance, which is passed to GLIDE.<br />
<br />
There are two broad classes of surface mass balance schemes:<br />
*positive-degree-day (PDD) schemes, in which the melting is parameterized as a linear function of the number of degree-days above the freezing temperature. The proportionality factor is empirical and may vary in time and space. This factor is larger for bare ice than for snow, since ice has a lower albedo. <br />
*surface energy-balance (SEB) schemes, in which the melting depends on the sum of the radiative, turbulent, and conductive fluxes reaching the surface. SEB schemes are more physically realistic than PDD schemes, but also are more expensive and complex. <br />
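A minimal PDD calculation makes the first class of scheme concrete (the degree-day factors below are typical of the literature, not Glimmer's actual values):<br />

```python
# Positive-degree-day melt: melt is proportional to the sum of
# daily-mean temperatures above freezing, with a larger factor for
# bare ice than for snow (ice has a lower albedo).
DDF_SNOW = 3.0e-3   # m water equivalent per degree-day (assumed value)
DDF_ICE = 8.0e-3    # m water equivalent per degree-day (assumed value)

def pdd_melt(daily_mean_temps_c, snow_covered=True):
    """Melt [m w.e.] over the period covered by the daily temperatures."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0.0)
    ddf = DDF_SNOW if snow_covered else DDF_ICE
    return ddf * pdd

# Ten days averaging +2 C over snow: 20 degree-days -> 0.06 m of melt.
print(pdd_melt([2.0] * 10))
```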
<br />
Glimmer has a PDD scheme based on that of Huybrechts et al. (1991) and others. (See the Glimmer documentation for details.) PDD schemes are not ideal for climate change studies, because empirical degree-day factors could change in a warming climate. Comparisons of PDD and energy-balance schemes (e.g., van de Wal 1996; Bougamont et al. 2007) suggest that PDD schemes may be overly sensitive to warming temperatures. In fact, Bougamont et al. found that a PDD scheme generates runoff rates nearly twice as large as those computed by an SEB scheme. If we want a credible climate change simulation for the Greenland ice sheet, we should use an energy-balance scheme.<br />
<br />
Glimmer does not currently have an SEB scheme, but might include one in the future. If such a scheme were available, one approach to computing surface melting would be as follows: The incoming shortwave and longwave fluxes, temperature, and humidity would be passed from the CCSM atmosphere to GLINT via the coupler. These fields would be downscaled to the ice sheet grid, using an assumed lapse rate to interpolate temperatures to the appropriate elevations on the ice sheet grid. The surface mass balance would then be computed from the downscaled atmosphere fields combined with a detailed snow model.<br />
<br />
This approach is sensible if one is working with meteorological data, e.g. from atmospheric reanalysis data. In CCSM, however, the preferred approach is to compute the surface mass balance for ice sheets in CLM, the CCSM land component, on the coarse-resolution land grid. To improve accuracy on the coarse grid, the mass balance is computed for ~10 elevation classes in each gridcell. The mass balance for each elevation class is accumulated and averaged over a coupling interval (typically ~1 day), then passed to GLINT via the coupler. GLINT accumulates and averages the mass balance over a longer interval (typically 1 year) and downscales it to the ice sheet grid. The ice sheet evolves dynamically, then returns the new ice geometry to CLM via the coupler.<br />
<br />
====Motivation for a surface mass balance scheme in CLM====<br />
There are several advantages to computing the surface mass balance in CLM as opposed to GLINT: <br />
#It is much cheaper to compute the SMB in CLM for ~10 elevation classes than in GLINT/Glimmer. For example, suppose we are running CLM at a resolution of ~50 km and Glimmer at ~5 km. Greenland has dimensions of about 1000 x 2000 km. For CLM we would have 20 x 40 x 10 = 8,000 columns, whereas for Glimmer we would have 200 x 400 = 80,000 columns. Jeff Ridley of the Hadley Centre has found that running an SMB model on the ice sheet grid is as expensive as the rest of the GCM combined. Ghan and others (add ref) have shown that elevation classes give results comparable to those obtained at much greater expense on a finer grid.<br />
#We take advantage of the fairly sophisticated snow physics parameterization already in CLM instead of implementing a separate scheme for Glimmer. When the CLM scheme is improved, the improvements are applied to ice sheets automatically.<br />
#The atmosphere model can respond during runtime to ice-sheet surface changes. As shown by Pritchard et al. (2008), runtime albedo feedback from the ice sheet is critical for simulating ice-sheet retreat on paleoclimate time scales. Without this feedback, the atmosphere warms much less, and the retreat is delayed.<br />
#Mass is conserved, in that the rate of surface ice growth or melting computed in CLM is equal to the rate seen by the dynamic ice sheet model.<br />
#The improved surface mass balance is available in CLM for all glaciated grid cells (e.g., in the Alps, Rockies, Andes, and Himalayas), not just those which are part of ice sheets.<br />
<br />
====Details of the new SMB scheme====<br />
As it happens, CLM has a hierarchical data structure that makes it relatively straightforward to model glaciated regions with multiple elevation classes. In the standard version of CLM, each gridcell is partitioned into one or more of five landunit types: vegetated, lake, wetland, urban, and glacier. Each landunit consists of a user-defined number of columns, and each column has its own vertical profile of temperature and water content.<br />
<br />
I created a sixth landunit, denoted glacier_mec, where “mec” stands for “multiple elevation classes.” Glacier_mec landunits are similar to glacier landunits, except that each elevation class is represented by a separate column. By default there are 10 elevation classes in each glaciated gridcell. The upper elevation bounds (in meters) of these classes are 200, 400, 700, 1000, 1300, 1600, 2000, 2500, 3000, and 10000. <br />
<br />
The atmospheric surface temperature and specific humidity are downscaled from the mean gridcell elevation to the column elevation using a user-specified lapse rate (typically 6 deg/km). At a given time, the lower-elevation columns can undergo surface melting while columns at other elevations (including the mean) remain frozen. This results in a more accurate simulation of summer melting, which is a highly nonlinear function of air temperature. The precipitation rate and radiative fluxes are not currently downscaled, but they could be, if care were taken to preserve the cell-integrated values. At some point we would like to use a more sophisticated orographic downscaling scheme, but this would require significant recoding.<br />
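The lapse-rate downscaling amounts to one line of arithmetic; here is a toy Python version (the elevations and temperatures are illustrative):<br />

```python
# Downscale the gridcell-mean air temperature to an elevation-class
# column using a fixed lapse rate of 6 deg C per km.
LAPSE = 6.0e-3   # deg C per meter

def downscale_temp(t_grid, h_grid, h_column, lapse=LAPSE):
    """Column temperature [deg C] from the gridcell mean and elevations [m]."""
    return t_grid - lapse * (h_column - h_grid)

# With a gridcell mean of 0 C at 1000 m, a 200 m column sits at +4.8 C
# (and can melt) while a 2500 m column sits at -9 C (and stays frozen).
print(downscale_temp(0.0, 1000.0, 200.0))
print(downscale_temp(0.0, 1000.0, 2500.0))
```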
<br />
Standard CLM has an unrealistic treatment of accumulation and melting on ice sheets. The snow depth is limited to a prescribed depth of 1 m liquid water equivalent, with any additional snow assumed to run off instantaneously to the ocean. Snow melting is treated in a fairly realistic fashion, with meltwater percolating downward through snow layers as long as the snow is unsaturated. Once the underlying snow is saturated, any additional meltwater runs off. When glacier ice melts, however, the meltwater is assumed to remain in place until it refreezes. In warm parts of the ice sheet, the meltwater does not refreeze, but stays in place indefinitely. <br />
<br />
In the modified CLM with glacier_mec columns, snow in excess of the prescribed maximum depth is converted to ice, contributing a positive surface mass balance to the ice sheet model. When ice melts, the meltwater is assumed to run off to the ocean, contributing a negative surface mass balance. The net SMB associated with ice formation (by conversion from snow) and melting/runoff is computed for each column, averaged over the coupling interval, and sent to the coupler. This quantity, denoted ''qice'', is then passed to GLINT, along with the surface elevation topo in each column. GLINT downscales ''qice'' to the ice sheet grid, interpolating the values in adjacent elevation classes. The units of ''qice'' are mm/s, or equivalently kg/m<sup>2</sup>/s. The downscaled quantities can be multiplied by a normalization factor to conserve mass exactly. <br />
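The interpolation between adjacent elevation classes can be sketched as follows (an assumed linear scheme with made-up numbers, not GLINT's actual code; the mass-conserving normalization step is omitted):<br />

```python
import bisect

# Class mean elevations [m] and the SMB computed for each class [mm/s]
# (both made up for illustration).
class_elevs = [100.0, 300.0, 550.0]
class_qice = [-2.0, -0.5, 1.0]

def downscale_qice(h):
    """SMB at fine-grid elevation h, interpolated linearly between the
    two elevation classes that bracket h (clamped at the ends)."""
    if h <= class_elevs[0]:
        return class_qice[0]
    if h >= class_elevs[-1]:
        return class_qice[-1]
    i = bisect.bisect_right(class_elevs, h)
    h0, h1 = class_elevs[i - 1], class_elevs[i]
    w = (h - h0) / (h1 - h0)
    return (1.0 - w) * class_qice[i - 1] + w * class_qice[i]
```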
<br />
Note that the surface mass balance typically is defined as the total accumulation of ice and snow, minus the total ablation. The ''qice'' flux passed to GLINT is the mass balance for ice alone, not snow. We can think of CLM as owning the snow, whereas Glimmer owns the underlying ice; hence Glimmer only needs to be told when the ice volume changes. The snow depth can fluctuate between 0 and 1 m LWE without Glimmer needing to know about it.<br />
<br />
In addition to ''qice'' and topo, the ground surface temperature tsfc is passed from CLM to GLINT via the coupler. This temperature serves as the upper boundary condition for Glimmer’s temperature calculation.<br />
<br />
Given the SMB from the land model, Glimmer executes one or more dynamic time steps and returns the new ice sheet geometry to CLM via the coupler. The fields passed to the coupler are the ice sheet fractional area, surface elevation, and thickness, along with the conductive heat flux at the top surface and the runoff flux from basal melting and iceberg calving. GLINT upscales these fields from the ice sheet grid to the coarser land grid and bins them into elevation classes before sending them to the coupler. <br />
<br />
The current coupling is one-way only. That is, CLM sends the SMB and surface temperature to GLINT but does not do anything with the fields that are returned. This is permissible for century-scale runs in which the geometry changes are modest. In order to do longer runs with large geometry changes, we need to enable two-way coupling. That work is in progress.<br />
<br />
The purpose of the surface mass balance scheme is to provide Glimmer with a realistic upper surface boundary condition in past, present, and future climates. To the extent the present-day SMB is inaccurate (because of atmospheric biases, incomplete land model physics, or downscaling errors), the present-day ice sheet will have the wrong geometry, even if the ice sheet model is perfect. The greater the inaccuracy, the less confidence we will have in future projections. <br />
<br />
So what is the quality of the results from the SMB scheme? Only recently have we had a working ice-sheet SMB scheme in CCSM4, so we are just beginning to find out. We will explore that question in the lab exercise.<br />
<br />
===Future ice sheet modeling===<br />
<br />
We have a simple working model of ice sheets in CCSM, but there is still a great deal of work to do. Here are a few of the projects under way:<br />
<br />
*Glimmer-CISM was recently moved to a Subversion repository hosted by the BerliOS Open Source Mediator, as described by Magnus Hagdorn in his lecture. (See http://developer.berlios.de/projects/glimmer-cism/.) Model development is likely to proceed quickly during the next few years.<br />
*The LANL ice sheet modeling group has received funding to develop a parallel version of Glimmer using state-of-the-art solver packages (e.g., PETSc and Trilinos) to efficiently solve the higher-order flow equations.<br />
*DOE recently initiated a three-year project on computational advances in ice sheet modeling. Several groups have been funded to develop efficient, scalable solvers for higher-order approximations as well as the full-Stokes equations on unstructured and/or adaptive grids.<br />
*We will attempt to couple WRF, a regional atmosphere model, to CLM and Glimmer in the CCSM framework. WRF can be run over Greenland or Antarctica with horizontal grid resolution of ~25 km, providing more realistic forcing fields than we can get from CAM at ~100 km. <br />
*Several researchers, including a LANL group using Glimmer-CISM, are developing methods for coupling ice sheet models to ocean circulation models. The major challenges include (1) modifying the ocean upper boundary condition so that water can circulate beneath ice shelves, (2) changing the ocean topography as ice shelves advance and retreat, and (3) simulating realistic migration of the grounding line, which will require very fine grid resolution and/or improved numerical methods.<br />
*A suite of climate change experiments using CCSM with dynamic ice sheets will be run during the next two years in preparation for IPCC AR5. Initially we will use the shallow-ice version of Glimmer, but we will transition to a higher-order code when an efficient parallel version is available.<br />
<br />
These are just a few examples; many other projects are in the works. The next several years will be a time of rapid transition. Ice sheet models have long been less sophisticated than other climate model components, but Glimmer-CISM will likely be among the first climate model components to incorporate state-of-the-art meshing tools and scalable solvers. Atmosphere and ocean modelers may then look to ice sheet modelers for guidance instead of the other way around.<br />
<br />
===References===<br />
<br />
*Bougamont, M., Bamber, J.L., Ridley, J.K., Gladstone, R.M., Greuell, W., Hanna, E., Payne, A.J. and Rutt, I. 2007. Impact of model physics on estimating the surface mass balance of the Greenland ice sheet. Geophysical Research Letters 34: 10.1029/2007GL030700.<br />
*Ghan, S.J., Shippert, T. and J. Fox, 2006. Physically based global downscaling: Regional evaluation. J. Climate 19: 429-445.<br />
*Huybrechts, P., Letreguilly, A. and Reeh, N., 1990. The Greenland ice sheet and greenhouse warming. Palaeogeogr., Palaeoclimatol., Palaeoecol. (Global Planet. Change Sect.) 89: 399-412.<br />
*Oppenheimer, M., O'Neill, B.C., Webster, M., and Agrawala, S., 2007. Climate change: The limits of consensus. Science 317 (5844): 1505.<br />
*Pritchard, M. S., A. B. G. Bush, and S. J. Marshall, 2008. Neglecting ice-atmosphere interactions underestimates ice sheet melt in millennial-scale deglaciation simulations. Geophys. Res. Lett. 35, L01503, doi:10.1029/2007GL031738.<br />
*van de Wal, R.S.W. 1996. Mass-balance modeling of the Greenland ice sheet: A comparison of an energy-balance and a degree-day model. Annals of Glaciology 23: 36-45.<br />
<br />
<br />
<br />
==Lab exercise: Running CCSM==<br />
<br />
<br />
===Checkout, create case, configure, compile, and run the code===<br />
<br />
====Log onto bluefire====<br />
<br />
Open a terminal window (Accessories -> Terminal)<br />
<br />
> ssh -X -l ''logon_name'' bluefire.ucar.edu<br />
<br />
When prompted for a Token Response, enter your Cryptocard password.<br />
<br />
When asked for a terminal type, you can simply hit ''Return''.<br />
<br />
Hopefully you're now on bluefire. To see the contents of your home directory:<br />
<br />
> ls -a<br />
<br />
====Check out the code====<br />
<br />
CCSM code is maintained on a Subversion repository. For CCSM as a whole and for each component, there is a main trunk along with many development branches. We will check out code from a branch with up-to-date versions of Glimmer and the land component, CLM, along with compatible versions of the other model components. This combination of CCSM components is identified by a unique branch tag.<br />
<br />
To get the appropriate tagged version of CCSM from the Subversion repository:<br />
<br />
> svn co https://svn-ccsm-models.cgd.ucar.edu/clm2/branch_tags/glcec_tags/glcec02_clm3_6_16/<br />
<br />
For more info on how to use Subversion, see http://subversion.tigris.org<br />
<br />
The first time you do this, you'll need to enter your SVN password. (Summer school students may not have been given passwords. In that case we can direct you to a tarball instead.)<br />
<br />
The workaround for not having a password to the svn server is to copy the tarball:<br />
<br />
> cp /blhome/lipscomb/summer_school_directory/glcec02_clm3_6_16.tar .<br />
<br />
and then untar it:<br />
<br />
> tar xvf glcec02_clm3_6_16.tar<br />
<br />
You will also need to make the extracted files writable. Use<br />
<br />
> chmod -R +w glcec02_clm3_6_16/<br />
<br />
====Create a case====<br />
<br />
> ls<br />
<br />
> cd ''tag_name''/scripts<br />
<br />
For information about how to create a case, see here:<br />
<br />
> less README_quickstart<br />
<br />
!! NOTE: you don't need to follow these instructions; follow the wiki instructions below instead !!<br />
<br />
The case we will run is created as follows:<br />
<br />
> create_newcase -case ''case_name'' -res 1.9x2.5_gx1v5 -compset IG -mach bluefire -skip_rundb<br />
<br />
(NOTE: in the "1.9x2.5_gx1v5" portion of the above, the "gx1v5" contains a number "one", not a small letter "L")<br />
<br />
where <br />
<br />
*''case_name'' is something you make up--long enough to be descriptive but not too long to type repeatedly.<br />
<br />
*''res'' = resolution <br />
**1.9x2.5 = 1.9x2.5 degree grid for atmosphere, land<br />
**0.9x1.25 = 0.9x1.25 degree grid for atmosphere, land<br />
**T31 = spectral T31 grid for atmosphere, land (good for debugging)<br />
**gx1v5 = 1 degree grid, version 5 for ocean, sea ice <br />
**gx3v5 = 3 degree grid, version 5 for ocean, sea ice (good for debugging)<br />
<br />
*''compset'' = set of active physical components<br />
**A: all data models; no active physical components<br />
**AG: active ice sheet<br />
**I: active land<br />
**IG: active land, ice sheet<br />
**B: active land, atmosphere, ocean, sea ice<br />
**BG: active land, atmosphere, ocean, sea ice, ice sheet<br />
<br />
*''mach'' = name of computer<br />
<br />
*''skip_rundb'' means that this is just a practice case that will not be documented in the run database.<br />
<br />
For the IG case, you will have an active land component (CLM) and ice sheet component (Glimmer). The other components will be data models. The atmospheric data is from an NCEP reanalysis at T62 resolution (~1.5 deg).<br />
<br />
====Configure the code====<br />
<br />
> cd ''case_name''<br />
<br />
Edit env_conf.xml and env_mach_pes.xml if appropriate. (We won't need to do this for our example.)<br />
<br />
> configure -case<br />
<br />
Tour the code: <br />
<br />
> cd ~/''tag_name''/models<br />
<br />
> ls<br />
<br />
Explore from there:<br />
*atm = atmosphere<br />
*ocn = ocean<br />
*lnd = land<br />
*ice = sea ice<br />
*glc = ice sheet (Glimmer-CISM)<br />
*drv = driver (includes coupler modules)<br />
*csm_share = shared code<br />
*utils = utilities<br />
<br />
====Build the code====<br />
<br />
Look at your environment variables:<br />
<br />
> env<br />
<br />
TMPDIR should be set to /ptmp/$LOGNAME.<br />
This is scratch space where the code is built and output files are written.<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
<br />
Edit env_build.xml if appropriate. (We won't need to do this.)<br />
<br />
To build the code:<br />
<br />
> ''case_name''.bluefire.build<br />
<br />
This will take a few minutes the first time. If you rebuild later after making minor changes, it will go much faster. <br />
<br />
Hopefully the code will build. If not, you will get an error message pointing you to a build log file.<br />
<br />
To see where the code has been built:<br />
<br />
> cd /ptmp/''logon_name''/''case_name''<br />
<br />
====Run the code====<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
<br />
Edit env_run.xml if appropriate (e.g., STOP_OPTION and STOP_N to set the length of the run)<br />
<br />
*By default, STOP_OPTION = ''ndays'' and STOP_N = 5. This means the code will run for 5 days--just long enough to make sure nothing is seriously broken. <br />
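These settings live as XML entries in env_run.xml. A quick, generic way to check the current values is to grep for them; the sketch below writes a stand-in file so it is self-contained (on bluefire you would grep the real env_run.xml in your case directory):

```shell
# Inspect the run-length settings (stand-in for env_run.xml;
# on bluefire, grep the real file in your case directory)
cat > /tmp/env_run_example.xml <<'EOF'
<entry id="STOP_OPTION" value="ndays" />
<entry id="STOP_N" value="5" />
EOF
grep -E 'STOP_OPTION|STOP_N' /tmp/env_run_example.xml
```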
<br />
Edit ''case_name''.bluefire.run as appropriate<br />
<br />
BSUB commands:<br />
*-n {{pad|4em}} Number of processors (do not change this)<br />
*-q {{pad|4em}} Run queue (premium is faster than regular but costs more)<br />
*-W {{pad|4em}} Run time requested (shorter => job will start sooner)<br />
*-P {{pad|4em}} Project code<br />
<br />
For a 5-day run, we can set the run time to a small value (e.g. 0:05, or 5 minutes) so that the job runs quickly.<br />
<br />
Set the queue to ''premium''.<br />
<br />
Our project code is 38481000. If this code is not already in the run script, you'll need to enter it manually.<br />
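Putting the pieces together, the BSUB directives near the top of ''case_name''.bluefire.run would look roughly like this (the -n value is a placeholder; keep whatever the generated script already contains):

```shell
#BSUB -n 64          # number of processors (placeholder -- do not change the script's value)
#BSUB -q premium     # run queue
#BSUB -W 0:05        # wall-clock request for the 5-day test
#BSUB -P 38481000    # project code
```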
<br />
To submit the job:<br />
<br />
> bsub < ''case_name''.bluefire.run<br />
<br />
To see whether the job is pending or running:<br />
<br />
> bjobs <br />
<br />
'No unfinished job found' means you're done<br />
<br />
If all goes well, the job will start and finish in a few minutes, and you will have some log files. First take a look at the poe.stdout file:<br />
<br />
> less poe.stdout.''6digits''<br />
<br />
The end of the file should say 'normal exit'.<br />
<br />
Now let's check the log files:<br />
<br />
> cd logs<br />
<br />
There should be several files with the suffix ''gz'', meaning that the files have been compressed, or zipped. Unzip the ''lnd.log'' file and take a look:<br />
<br />
> gunzip lnd.log.''timestamp''.gz<br />
> less lnd.log.''timestamp''<br />
<br />
For an IG case, the coupler, land, atmosphere, and ice sheet components (cpl, lnd, atm, and glc, respectively) have log files with diagnostic output. The logfile with the ''ccsm'' prefix combines diagnostics from each component.<br />
<br />
====Modify the code====<br />
<br />
Now that we know the basics, let's try a 10-year simulation. First, move back to the main directory for your model instance, <br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
<br />
In env_run.xml, set STOP_OPTION = nyear and STOP_N = 10. This will take a couple of hours to run, so we should change the run time estimate in ''case_name''.bluefire.run (flag "-W"), from 0:05 to 2:00.<br />
<br />
The code you checked out from the repository has the standard CLM values for bare ice albedo, which are too high. You should replace these with more realistic values. Edit this file:<br />
<br />
> ~/''tag_name''/models/lnd/clm/src/main/clm_varcon.F90<br />
<br />
Look for these lines:<br />
<br />
data (albice(i),i=1,numrad) /0.80_r8, 0.55_r8/<br />
!! data (albice(i),i=1,numrad) /0.50_r8, 0.50_r8/<br />
<br />
Comment out the first line and uncomment the second line.<br />
<br />
Then return to your case directory and rebuild the code:<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
> ''case_name''.bluefire.build<br />
<br />
Now we'll test the sensitivity of the ice sheet surface mass balance to changes in physical parameters and the input forcing. Each group will do its own run. When you're ready to do this, please let one of us know, and we'll assign an experiment written on the board. Here are some suggestions:<br />
<br />
#Run with a different value of the bare ice albedo, ''albice''. This variable is set in ~/''tag_name''/models/lnd/clm/src/main/clm_varcon.F90. Copy this file to ~/''tag_name''/scripts/''case_name''/SourceMods/src.clm. Edit the file in the SourceMods directory; this file will automatically rewrite the original file when the code is built. Using the SourceMods directories is a good way to keep your changes separate from the base code.<br />
#Run with a different value of the surface temperature lapse rate, ''lapse_glcmec''. This variable is also set in clm_varcon.F90.<br />
#Impose a uniform temperature perturbation. You can do this by modifying ~/''tag_name''/models/lnd/clm/src/biogeophys/DriverInitMod.F90, where the temperature is downscaled. Copy the file to the SourceMods directory and edit it there. Find this line of code:<br />
<br />
tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) ! sfc temp for column<br />
<br />
Change it to something like this:<br />
<br />
tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) + 1.0_r8 ! sfc temp for column, plus one degree<br />
<br />
You now have a crude version of a global warming simulation.<br />
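The SourceMods workflow from item 1 can be sketched with ordinary shell commands; the stand-in paths below (/tmp/demo, case1) keep the sketch self-contained, and on bluefire you would use ~/''tag_name'' and your own case name instead:

```shell
# Sketch of the SourceMods override workflow with stand-in paths
# (/tmp/demo stands in for ~/tag_name; case1 for your case_name)
mkdir -p /tmp/demo/models/lnd/clm/src/main
mkdir -p /tmp/demo/scripts/case1/SourceMods/src.clm
touch /tmp/demo/models/lnd/clm/src/main/clm_varcon.F90
# copy the module so the build picks up the edited copy instead of the base code
cp /tmp/demo/models/lnd/clm/src/main/clm_varcon.F90 \
   /tmp/demo/scripts/case1/SourceMods/src.clm/
ls /tmp/demo/scripts/case1/SourceMods/src.clm
```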
<br />
Once you've made your code changes in SourceMods, run the build script again:<br />
<br />
> cd ~/''tag_name''/scripts/''case_name''<br />
> ''case_name''.bluefire.build<br />
<br />
If you get an error message, then edit the module appropriately and try again. If the code builds, then you're ready to run:<br />
<br />
> bsub < ''case_name''.bluefire.run<br />
<br />
We'll come back later to look at some results.<br />
<br />
===View the results===<br />
<br />
To see output from your run:<br />
<br />
> cd /ptmp/''logon_name''/archive/''case_name''<br />
> ls<br />
> cd lnd<br />
> ls<br />
> cd hist<br />
> ls<br />
<br />
You should have a history file for each month of your run.<br />
<br />
Let's say we're interested in the surface mass balance of glaciated gridcells from year 10 of the run, averaged over 12 months.<br />
<br />
We can post-process the data using NCO, a suite of programs for useful manipulation of netCDF files. For details, see http://nco.sourceforge.net/.<br />
<br />
To average all the history variables over 12 months, use the ''ncra'' command:<br />
<br />
> ncra -n 12,2,1 ''infile.nc'' ''outfile.nc''<br />
<br />
The ''-n'' option tells NCO to average over files that have the same name as infile.nc, apart from a numerical file identifier.<br />
<br />
* The '12' indicates that there are 12 files to average.<br />
* The '2' says that the identifier has 2 digits (01, 02, ..., 12)<br />
* The '1' says that the identifier changes with a stride of 1.<br />
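To see exactly which files ''-n 12,2,1'' will read, you can expand the names by hand with a generic shell one-liner ("case" stands in for your case name):

```shell
# Print the 12 monthly file names implied by `-n 12,2,1`
# ("case" is a stand-in for your case_name)
printf 'case.clm2.h0.0010-%02d.nc\n' $(seq 1 12)
```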
<br />
The outfile name is arbitrary. In our case, we can type:<br />
<br />
> ncra -n 12,2,1 ''case_name''.clm2.h0.0010-01.nc ''case_name''.clm2.h0.0010-avg.nc<br />
> ls<br />
<br />
To view the contents of the new file:<br />
<br />
> ncdump -h ''case_name''.clm2.h0.0010-avg.nc | less<br />
<br />
Hit the space bar to scroll through the output.<br />
<br />
Note the following:<br />
<br />
*The grid dimensions are 96 x 144.<br />
*There are some time-independent variables (e.g., area, topo) with lower-case names.<br />
*There are many time-dependent variables (including QICE, the surface mass balance) with names in all caps.<br />
<br />
Now we can plot the data.<br />
<br />
My favorite netCDF viewer is ferret, but ferret is not installed on bluefire.<br />
<br />
Let's use ncview instead:<br />
<br />
> ncview ''case_name''.clm2.h0.0010-avg.nc<br />
<br />
The ncview GUI should pop up. Click on the ''2d vars'' button.<br />
<br />
If the ncview GUI does not pop up, that's probably because your path isn't set up to find it. Try this instead:<br />
<br />
> /contrib/bin/ncview ''case_name''.clm2.h0.0010-avg.nc<br />
<br />
As a shortcut, you can add an alias in your .cshrc file in your home directory.<br />
<br />
alias ncview '/contrib/bin/ncview'<br />
<br />
After you save the new version of .cshrc, you will need to type this:<br />
<br />
> source .cshrc<br />
<br />
Then the alias should work.<br />
<br />
Unfortunately, there are so many variables that we can't get to QICE (one of the limitations of ncview).<br />
<br />
Let's make a file that doesn't have so many variables:<br />
<br />
> ncra -v QICE -n 12,2,1 ''case_name''.clm2.h0.0010-01.nc ''case_name''.clm2.h0.0010-QICE.nc<br />
<br />
Here we have used the ''-v'' option to specify the variables to average over.<br />
<br />
The resulting file has just one time-dependent variable QICE, a function of lat, lon and time:<br />
<br />
> ncdump -h ''case_name''.clm2.h0.0010-QICE.nc<br />
<br />
Let's try ncview again:<br />
<br />
> ncview ''case_name''.clm2.h0.0010-QICE.nc<br />
<br />
You should see a global plot of QICE on the global land grid.<br />
<br />
To magnify the plot, left-click as many times as desired on the button that says ''M X3''. To shrink the plot, right-click on this button.<br />
<br />
We can see QICE for glaciated cells not only in Greenland and Antarctica, but also in the<br />
Himalayas, Canadian archipelago, Alaskan coastal range, and Patagonia (and New Zealand!).<br />
<br />
The units of QICE are mm/s (or equivalently, kg/m<sup>2</sup>/s). If you prefer m/yr, you can change the units in the file using the ''ncflint'' command:<br />
<br />
> ncflint -w 3.16e4,0 ''case_name''.clm2.h0.0010-QICE.nc ''case_name''.clm2.h0.0010-QICE.nc ''case_name''.clm2.h0.0010-QICEmyr.nc<br />
<br />
where the factor 3.16e4 converts from mm/s to m/yr.<br />
<br />
(This syntax can be interpreted as follows. The form of the command is<br />
<br />
> ncflint -w weight1,weight2 infile1.nc infile2.nc outfile.nc<br />
<br />
with the result that variables in the output file have values outfile_var = weight1*infile1_var + weight2*infile2_var. If weight2 = 0, then infile2 is irrelevant and the effect is simply to multiply variables in infile1 by a constant. Perhaps there is a simpler way to do this. In ferret it is easy to multiply data by a constant without changing the netCDF file.) <br />
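The weighting can be sanity-checked with plain arithmetic: with weights (3.16e4, 0), a QICE value of 1 mm/s in infile1 becomes 3.16e4 m/yr no matter what infile2 contains (the infile2 value below is invented):

```shell
# outfile_var = weight1*infile1_var + weight2*infile2_var; weight2 = 0
# makes infile2 irrelevant, so this is a pure scaling of infile1
awk 'BEGIN { w1 = 3.16e4; w2 = 0; in1 = 1.0; in2 = 99.9
             printf "%.0f m/yr\n", w1*in1 + w2*in2 }'
```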
<br />
You may want to look at other data fields in the monthly mean files and in the yearly average file.<br />
<br />
As a final exercise, let's compute the surface mass balance integrated over the Greenland ice sheet. NCO has a command for this too:<br />
<br />
> ncwa -N -v QICE -a lat,lon -B 'gris_mask > 0.5' -w area ''case_name''.clm2.h0.0010-avg.nc ''outfile.nc'' <br />
<br />
where<br />
*''-N'' says to compute the integrated total as opposed to the average.<br />
*''-v'' tells which variable(s) to sum and/or average over<br />
*''-a'' tells which dimensions to sum over<br />
*''-B'' says to sum only over cells that meet a masking condition (in our case, Greenland cells have gris_mask = 1.0, and all other cells have gris_mask = 0.0)<br />
*''-w'' says to weight by the variable that follows (grid cell area in this case)<br />
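The masked, area-weighted total that ncwa computes reduces to simple arithmetic; the toy example below (invented numbers, two grid cells, only the first inside the mask) mimics the -B/-w/-N combination:

```shell
# Sum QICE*area over cells with gris_mask > 0.5 (toy values, 2 cells)
awk 'BEGIN {
  q[1] = 0.002; a[1] = 100; m[1] = 1.0   # Greenland cell
  q[2] = 0.005; a[2] = 100; m[2] = 0.0   # non-Greenland cell, excluded by mask
  for (i = 1; i <= 2; i++) if (m[i] > 0.5) total += q[i] * a[i]
  printf "%.1f\n", total }'
```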
<br />
Let's look at the output:<br />
<br />
> ncdump ''outfile.nc''<br />
<br />
We're interested in the area-integrated value of QICE. Note that area has units of km<sup>2</sup>, whereas QICE has units of mm/s. To convert to km<sup>3</sup>/yr, multiply the result by 3.16e7 (the number of seconds in a year) and divide by 1e6 (the number of mm in a km). Recall that 1 km<sup>3</sup> (liquid water equivalent) of ice weighs 1 Gigaton.<br />
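The unit conversion can be checked with a throwaway calculation (the input value below is invented, not a model result):

```shell
# Convert an area-integrated QICE (mm/s * km^2) to km^3/yr:
# multiply by 3.16e7 s/yr, divide by 1e6 mm/km
awk 'BEGIN { q = 10.13; printf "%.1f km^3/yr\n", q * 3.16e7 / 1e6 }'
```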
<br />
For the present-day (or at least preindustrial) climate of Greenland, the net surface mass balance is ~300 to 400 km<sup>3</sup>/yr. How do your results compare?<br />
<br />
===Simulation results===<br />
<br />
*Control experiment (albice = 0.5, lapse = 0.006): QICE = 320 km<sup>3</sup>/yr<br />
<br />
*Warming experiments<br />
**+2 degrees: GIS net SMB = +320 km<sup>3</sup>/yr<br />
**+16 degrees: QICE = ? km<sup>3</sup>/yr<br />
<br />
<br />
[[CCSM_greenland_massbal | Link to table]]</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/PDX_afterhoursPDX afterhours2009-08-14T00:04:11Z<p>Hoffman: added the thing</p>
<hr />
<div>[[Image:Erin_adam.JPG|thumb|right|300px|The PDX Afterhours king & queen in Tube Bar, home of Wednesday night $1 Miller High Life.]]<br />
<br />
==tuesday Aug 4, 2009==<br />
* meet at [http://paccinirestaurant.com/ Paccini's] pub at 7:30<br />
<br />
==wednesday Aug 5, 2009==<br />
* after FORTRAN session, leave PSU to go to [http://maps.google.com/maps?hl=en&client=firefox-a&rls=com.ubuntu:en-US:unofficial&hs=zuU&um=1&ie=UTF-8&q=sushi+ichiban+portland&fb=1&split=1&gl=us&view=text&latlng=899833397715679289 Sushi Ichiban]. Adam Campbell will guide you.<br />
* go to [http://www.groundkontrol.com Ground Kontrol] a retro arcade with beer<br />
* go to [http://www.voodoodoughnut.com Voodoo Doughnut], please someone buy Ian Rutt the $5 doughnut.<br />
<br />
<br />
==thursday Aug 6, 2009==<br />
[http://amontobin.com/field/ Amon Tobin] and [http://www.pitchblack.co.nz/?s1=index Pitch Black], along with two opening bands, are playing at the Roseland Theatre (8 NW 6th Ave) Thursday night (starting at 9:00 pm or thereabouts). Tickets are $26 (available online at [http://ticketswest.rdln.com/Venue.aspx?ven=ROS TicketsWest]). The music is best described as sampled electronica (Amon Tobin) and Kiwi-style dub (Pitch Black). I'd expect a late night of electronic music: an evening nap may be in order! Jeremy's already got his ticket and can fill you in with more, including a sample of the music.<br />
<br />
*other local music recommendations from Adam<br />
<br />
'''Boy Eats Drum Machine, French Miami, Southern Belle and Electric Opera Company''' - Indie Rock -<br />
Thu., Aug. 6, 9 p.m.<br />
$6-8<br />
Berbati's Pan<br />
10 SW 3rd Ave.<br />
Downtown<br />
<br />
'''Nurses, Inside Voices and Slaves''' - Indie Rock -<br />
Thu., Aug. 6, 8:30 p.m.<br />
$7<br />
Holocene <br />
1001 SE Morrison<br />
Southeast<br />
<br />
==friday Aug 7, 2009==<br />
[http://www.biteoforegon.com/ The Bite of Oregon] is a food festival that takes place at Tom McCall Waterfront Park. Featuring food, wine, beer and entertainment from Oregon. Entry is $8, food and beverages are extra.<br />
<br />
[http://www.pioneercourthousesquare.org/calendar_august.htm Flicks on the Bricks] will be showing Jurassic Park at dusk outside at Pioneer Courthouse Square, FREE (including popcorn). (10 minute walk)<br />
<br />
[http://www.portlandtwilight.com/ Portland Downtown Twilight Bike Criterium]: Professional (by U.S. standards) bike race through downtown Portland. Complete with beer garden, food, and an expected 15,000 spectators. Pro race starts at 7:30.<br />
<br />
==saturday Aug 8, 2009==<br />
[http://www.biteoforegon.com/ The Bite of Oregon] is a food festival that takes place at Tom McCall Waterfront Park. Featuring food, wine, beer and entertainment from Oregon. Entry is $8, food and beverages are extra.<br />
<br />
==sunday Aug 9, 2009==<br />
The [http://providence.org/bridgepedal/ Portland Bridge Pedal] is a fun event where you can go on 14, 24, or 37 mi. bike ride over Portland's Bridges. Adam is trying to assemble a group to go on Sunday morning. Please speak with him if you are interested in going to this. I tentatively have 4 bikes I can get ahold of.<br />
<br />
RIDERS: Register [http://providence.org/bridgepedal/ here]. Make sure to click the team rider option and the 11 bridge option! Ok, so here's the deal: I tried to register a team:<br />
Team Name The Icepocalypse<br />
Team password BPT943<br />
<br />
HEY It works now!<br />
<br />
[http://www.biteoforegon.com/ The Bite of Oregon] is a food festival that takes place at Tom McCall Waterfront Park. Featuring food, wine, beer and entertainment from Oregon. Entry is $8, food and beverages are extra.<br />
<br />
==tuesday Aug 11, 2009==<br />
[http://www.rootsorganicbrewing.com/ Roots Brewery] has a $2.50 imperial pint night. The beer is great! It's on the eastside but it's not very far. There are carts and more beer nearby.<br />
<br />
==wednesday Aug 12, 2009==<br />
[[Image:420.png|thumb|right|300px|The Glimmer model output gives suggestions for illicit afterhours activities in Portland. I suggest looking in the North park blocks.]]<br />
<br />
Quiet night. Kristin ate a Whole Foods Meal Pod in her room and Toby went for pho by himself. Good work!<br />
<br />
==Thursday Aug 13, 2009==<br />
We attempted to make an expedition to the movie theater last Tuesday but I forgot my ID (:D). Tonight, we will make another try... So let's meet at the hotel at 6:30 pm. Don't forget to bring your ID!!!!!<br />
<br />
[http://www.freedomridersthemovie.com/ Freedom Riders] is a mountain bike documentary about the evolution of freeriding near Jackson Hole, WY (I think). The show is at the [http://www.clintonsttheater.com/ Clinton St. Theater]. It starts at 7 pm.<br />
<br />
==Friday and Saturday==<br />
The Thing at [http://5thavenuecinema.org/ Fifth Avenue Cinema]. Showtimes: FRI. / SAT. - 7:00 P.M. & 9:30 P.M SUN. - 3:00 P.M.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Adding_dH/dt_moduleAdding dH/dt module2009-08-13T21:55:55Z<p>Hoffman: /* Step 1 */</p>
<hr />
<div>==Overview==<br />
<br />
This page contains step-by-step instructions for adding a new module to Glimmer-CISM. In this case, the module is a first-order, upwinding advection scheme for mass transport (dH/dt) using velocities calculated from a higher-order dynamics model. The procedure, however, is generic and could apply to adding almost any module. The goal here is not to get lost in the details of the upwinding scheme (we'll provide the code for that later) but rather to understand the steps for adding a module in an incremental, structured way that allows you to track small errors and fix them at each step. <br />
<br />
*To go directly to some exercises using the higher-order dynamics and the evolution scheme, go to [[Ice Sheet Evolution Experiments]].<br />
*For mathematical background on upwinded transport, consult the [[Solving the equation for thickness evolution]] pages.<br />
<br />
<br />
== '''Step 0''' ==<br />
<br />
We first need to download the code, configure it, and check that we get a successful initial build (you've presumably done this already if you were able to do the higher-order test suite exercises). If not, type the following from within the highest level directory containing the source code:<br />
<br />
./bootstrap<br />
./configure --with-netcdf=/path/to_netcdf !!! Note: specify your own path to netCDF libs here !!!<br />
make<br />
<br />
== '''Step 1''' ==<br />
<br />
In step 1, we will create a module containing "empty" subroutines and add this module to the build.<br />
<br />
Create a module '''fo_upwind_advect.F90''' that contains the necessary subroutines. For now, these will just be "stubs", which we will fill in later.<br />
<br />
<source lang=fortran>module fo_upwind_advect<br />
<br />
! subroutines for mass advection scheme based on 1st order upwinding<br />
<br />
contains<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_init( )<br />
<br />
! initialization for 1st-order upwinding mass advection<br />
<br />
end subroutine fo_upwind_advect_init<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_final( )<br />
<br />
! finalization for 1st-order upwinding mass advection<br />
<br />
end subroutine fo_upwind_advect_final<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_driver( )<br />
<br />
! driver for 1st-order upwind mass advection<br />
<br />
end subroutine fo_upwind_advect_driver<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_main( )<br />
<br />
! 1st-order upwinding mass advection<br />
<br />
end subroutine fo_upwind_advect_main<br />
<br />
!----------------------------------------------------------------------<br />
<br />
end module fo_upwind_advect</source><br />
<br />
<br />
Add this new module to the build by editing '''Makefile.am''' in '''src/fortran/''' ...<br />
<br />
<source lang=fortran>libglide_a_SOURCES = glide.F90 glide_setup.F90 glide_types.F90 glide_temp.F90 \<br />
glide_bwater.F90 glide_deriv.F90 xls.F90 ice3d_lib.F90 \<br />
glide_velo_higher.F90 glide_thck.F90 glide_velo.F90 \<br />
glide_mask.F90 glide_stop.F90 glide_io.F90 \<br />
glide_nc_custom.F90 isostasy.F90 isostasy_el.F90\<br />
isostasy_setup.F90 isostasy_types.F90 glide_lithot.F90\<br />
glide_lithot3d.F90 glide_lithot1d.F90 glide_profile.F90\<br />
glide_diagnostics.F90 glissade.F90 glissade_remap.F90\<br />
glissade_velo.F90 glissade_constants.F90 glide_vertint.F90\<br />
glide_thckmask.F90 glide_nonlin.F90 glide_grids.F90\<br />
glam.F90 glam_strs2.F90 glam_thck_ppm.F90\<br />
remap_advection.F90 remap_glamutils.F90 glide_ground.F90</source><br />
<br />
becomes ...<br />
<br />
<source lang=fortran>libglide_a_SOURCES = glide.F90 glide_setup.F90 glide_types.F90 glide_temp.F90 \<br />
glide_bwater.F90 glide_deriv.F90 xls.F90 ice3d_lib.F90 \<br />
glide_velo_higher.F90 glide_thck.F90 glide_velo.F90 \<br />
glide_mask.F90 glide_stop.F90 glide_io.F90 \<br />
glide_nc_custom.F90 isostasy.F90 isostasy_el.F90\<br />
isostasy_setup.F90 isostasy_types.F90 glide_lithot.F90\<br />
glide_lithot3d.F90 glide_lithot1d.F90 glide_profile.F90\<br />
glide_diagnostics.F90 glissade.F90 glissade_remap.F90\<br />
glissade_velo.F90 glissade_constants.F90 glide_vertint.F90\<br />
glide_thckmask.F90 glide_nonlin.F90 glide_grids.F90\<br />
glam.F90 glam_strs2.F90 glam_thck_ppm.F90\<br />
remap_advection.F90 remap_glamutils.F90 glide_ground.F90 \<br />
fo_upwind_advect.F90</source><br />
<br />
(The last two lines are the ones that changed: a continuation backslash now follows glide_ground.F90, and fo_upwind_advect.F90 is appended on a new line. Make does not allow a comment on a continuation line, so leave annotations like "CHANGED" out of the makefile itself.)<br />
<br />
Now we stop adding/editing things for a minute and type '''make''' to rebuild and make sure we haven't broken the code yet. If you get a successful build, then proceed to '''Step 2'''.<br />
<br />
== '''Step 2''' ==<br />
<br />
In step 2, we will do some minor updates to other parts of the code so that there is an option to call our 1st order upwinding scheme when we want to evolve the ice thickness (that is, rather than using some other scheme).<br />
<br />
In '''glide_types.F90''', add the following to the transport scheme options around line 100 ...<br />
<br />
<source lang=fortran> integer, parameter :: EVOL_PSEUDO_DIFF = 0<br />
integer, parameter :: EVOL_ADI = 1<br />
integer, parameter :: EVOL_DIFFUSION = 2<br />
integer, parameter :: EVOL_INC_REMAP = 3<br />
integer, parameter :: EVOL_FO_UPWIND = 4 ! ADDED THIS LINE</source><br />
<br />
... similarly, update some lines around 207 for documentation ...<br />
<br />
<source lang=fortran> !*FD Thickness evolution method:<br />
!*FD \begin{description}<br />
!*FD \item[0] Pseudo-diffusion approach <br />
!*FD \item[2] Diffusion approach (also calculates velocities)<br />
!*FD \item[3] Incremental remapping<br />
!*FD \item[4] 1st-order upwinding scheme ! ADDED THIS LINE<br />
!*FD \end{description}</source><br />
<br />
In '''glide_setup.F90''', around line 475, change ...<br />
<br />
<source lang=fortran> character(len=*), dimension(0:4), parameter :: evolution = (/ & ! CHANGED 0:3 TO 0:4<br />
'pseudo-diffusion ', &<br />
'ADI scheme ', &<br />
'iterated diffusion ', &<br />
'remap thickness ', & <br />
'1st order upwind ' /) ! ADDED THIS LINE</source><br />
<br />
Time to type '''make''' again and check for a successful build. If so, proceed to '''Step 3'''.<br />
<br />
<br />
== '''Step 3''' ==<br />
<br />
Now we will add use statements, if constructs, and dummy calls to the advection subroutines from the appropriate places in the code. Hence, we are setting up all the necessary structure to call the subroutines but are not passing any arguments to them yet (nor are they actually doing anything yet).<br />
<br />
In '''glide.F90''', add the following use statement to subroutine '''glide_initialise''' ...<br />
<br />
<source lang=fortran> use glam_strs2, only : glam_velo_fordsiapstr_init<br />
use remap_glamutils, only : horizontal_remap_init<br />
use fo_upwind_advect, only : fo_upwind_advect_init ! ADDED<br />
</source><br />
<br />
Now add an if construct and a call to the initialization stub in '''fo_upwind_advect.F90''', around line 190 of '''glide.F90''', just after the if construct for EVOL_INC_REMAP ...<br />
<br />
<source lang=fortran> if ( model%options%whichevol == EVOL_FO_UPWIND ) then<br />
 call fo_upwind_advect_init( )<br />
 end if</source><br />
<br />
In '''glide_stop.F90''', add the following to the use statements ...<br />
<br />
<source lang=fortran>module glide_stop<br />
<br />
use glide_types<br />
use glimmer_log<br />
use remap_glamutils<br />
use fo_upwind_advect, only : fo_upwind_advect_final ! ADDED</source><br />
<br />
Add the necessary if construct to '''glide_stop.F90''' around line 150, as we did with the initialization routine ...<br />
<br />
<source lang=fortran> if ( model%options%whichevol == EVOL_FO_UPWIND ) then<br />
 call fo_upwind_advect_final( )<br />
 end if</source><br />
<br />
Finally, update the use statements in '''glide.F90''' around line 340, and the case construct for ice sheet evolution, around line 380, so that the 1st order upwinding subroutines can be called ...<br />
<br />
<source lang=fortran> use glide_thck<br />
use glide_velo<br />
use glide_ground<br />
use glide_setup<br />
use glide_temp<br />
use glide_mask<br />
use isostasy<br />
use glam, only: inc_remap_driver<br />
use fo_upwind_advect, only: fo_upwind_advect_driver ! ADDED </source><br />
<br />
... and add a call to the driver routine to the case construct (passing no args yet) ...<br />
<br />
<source lang=fortran> case(EVOL_FO_UPWIND) ! Use first order upwind scheme for mass transport<br />
<br />
call fo_upwind_advect_driver( )</source><br />
<br />
Make sure to place it BEFORE the existing "end select" statement!<br />
<br />
Type '''make''' again and check for a successful build. If so, proceed to '''Step 4'''.<br />
<br />
== '''Step 4''' ==<br />
<br />
At this point we've done nothing but build the necessary structures in the code. Now we will start filling in the subroutines with the necessary variable definitions. We'll follow this by passing arguments and, if everything still works, we can actually start to do something with those arguments within the subroutines themselves.<br />
<br />
Note that, in practice, one doesn't always know ahead of time exactly which variables are needed within a particular subroutine, and there is some trial and error in figuring that out. Here, we'll assume we've thought everything through ahead of time, like the punch-card programmers of old, so that we know exactly which arguments are needed before we start writing code.<br />
<br />
<br />
First, we'll fill out the rest of the "driver" subroutine, which does two things: (1) it calls the higher-order dynamics subroutine '''run_ho_diagnostic''' to get the appropriate velocity fields, and (2) it calls '''fo_upwind_advect_main''', which uses those velocity fields and the thickness field to move mass around and calculate a new thickness field. In subroutine '''fo_upwind_advect_driver''' we add the following ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_driver( model ) ! ADDED ARG<br />
<br />
! driver routine for the 1st order, upwind mass transport scheme<br />
<br />
type(glide_global_type), intent(inout) :: model ! ADDED <br />
<br />
call run_ho_diagnostic(model) ! ADDED CALL AND ARG<br />
<br />
call fo_upwind_advect_main( )<br />
<br />
... </source><br />
<br />
Note that we also passed the model into the driver and gave "model" a type definition. We still need to add the appropriate use statements for "model" and for access to the higher-order dynamics routines, which we'll do below. <br />
<br />
Next, add argument definitions to the main subroutine in '''fo_upwind_advect.F90''' ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_main( thck, stagthck, acab, dt, uflx, vflx, ewn, nsn, dew, dns ) ! CHANGED<br />
<br />
! 1st-order upwinding mass advection<br />
<br />
implicit none ! ADDED<br />
<br />
real (kind = dp), intent(in) :: dt ! time step (sec)<br />
real (kind = dp), dimension(:,:), intent(inout) :: thck ! thickness on normal grid (m)<br />
real (kind = dp), dimension(:,:), intent(in) :: stagthck ! thickness on staggered grid (m)<br />
real (kind = sp), dimension(:,:), intent(in) :: acab ! surf mass balance (accum/ablation) (m/sec)<br />
real (kind = dp), dimension(:,:), intent(in) :: uflx, vflx ! flux in x,y directions (m^2/sec)<br />
real (kind = dp), intent(in) :: dew, dns ! grid spacing in x,y directions (m)<br />
integer, intent(in) :: ewn, nsn ! no. of grid points in x,y directions<br />
<br />
real (kind = dp) :: He, Hw, Hn, Hs, ue, uw, vn, vs ! ADDED<br />
integer :: ew, ns<br />
<br />
end subroutine fo_upwind_advect_main</source><br />
<br />
<br />
Because these definitions require kind "dp", we need to add a use statement at the start of the module. We've also added the necessary use statement for accessing "model" and the higher-order dynamics subroutines (from above), and a few more we'll use later on ...<br />
<br />
<source lang=fortran>module fo_upwind_advect<br />
<br />
! subroutines for mass advection scheme based on 1st order upwinding<br />
<br />
! ADDED<br />
 use glimmer_paramets, only: sp, dp, len0, thk0, tim0, vel0, acc0, scyr ! scales for swapping between dim and non-dim vars<br />
use glide_types<br />
use glide_velo_higher<br />
<br />
private<br />
public :: fo_upwind_advect_init, fo_upwind_advect_driver, fo_upwind_advect_final<br />
<br />
... etc ... </source><br />
<br />
To keep anyone from mucking around with module variables they shouldn't touch, we've also added the "private" and "public" statements; the only way to access the module from outside is through calls to the three "public" subroutines. <br />
<br />
<br />
The arguments we defined above need to be passed from the driver routine '''fo_upwind_advect_driver''' into the subroutine that does all of the work, '''fo_upwind_advect_main'''. To do this, we access them from the derived type "model". The entire driver subroutine then becomes ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_driver( model )<br />
<br />
! driver routine for the 1st order, upwind mass transport scheme<br />
<br />
type(glide_global_type), intent(inout) :: model<br />
<br />
call run_ho_diagnostic(model) ! get velocities and fluxes from HO dynamic subroutines<br />
<br />
call fo_upwind_advect_main( model%geometry%thck, model%geomderv%stagthck, &<br />
model%climate%acab, model%numerics%dt, &<br />
model%velocity_hom%uflx,model%velocity_hom%vflx, &<br />
model%general%ewn, model%general%nsn, &<br />
model%numerics%dew, model%numerics%dns )<br />
<br />
end subroutine fo_upwind_advect_driver</source><br />
<br />
<br />
Finally, we pass the derived type "model" during the call to the driver subroutine in '''glide.F90''' ...<br />
<br />
<source lang=fortran> case(EVOL_FO_UPWIND) ! Use first order upwind scheme for mass transport<br />
<br />
call fo_upwind_advect_driver( model ) ! ADDED ARGUMENT<br />
<br />
end select</source><br />
<br />
Type '''make''' again and check for a successful build. If so, proceed to '''Step 5'''.<br />
<br />
== '''Step 5''' ==<br />
<br />
Now let's fill in the init and finalization subroutines in '''fo_upwind_advect.F90'''. This takes some thought ahead of time and might require a bit of trial and error. Again, for now we'll assume that we've thought this out really well ahead of time and we know exactly what we need.<br />
<br />
First, declare any other necessary variables at the start of the module '''fo_upwind_advect.F90'''. Here, these are the allocatable work arrays that we'll use in '''fo_upwind_advect_main'''. The beginning of the module becomes ...<br />
<br />
<source lang=fortran>module fo_upwind_advect<br />
<br />
!----------------------------------------------------------------------<br />
<br />
! init, finalize, and driver subroutines for mass advection based on 1st order upwinding<br />
<br />
 use glimmer_paramets, only: sp, dp, len0, thk0, tim0, vel0, acc0, scyr<br />
use glide_types<br />
use glide_velo_higher<br />
<br />
private<br />
public :: fo_upwind_advect_init, fo_upwind_advect_driver, fo_upwind_advect_final<br />
<br />
! allocatable work arrays ! ADDED<br />
real (kind = dp), allocatable, dimension(:,:) :: &<br />
ubar, vbar, &<br />
ubar_grid, vbar_grid, &<br />
flux_net, thck_grid, &<br />
mask, thck_old<br />
<br />
contains<br />
<br />
... etc ...</source><br />
<br />
Now change the initialization and finalization subroutines ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_init( ewn, nsn ) ! ADDED ARGS HERE<br />
<br />
! initialization for 1st-order upwinding mass advection<br />
<br />
implicit none ! ADDED<br />
<br />
integer, intent(in) :: ewn, nsn ! horizontal grid dimensions ! ADDED TYPE DEF<br />
<br />
 integer :: errstat, errtot ! ADDED FOR ERROR HANDLING<br />
<br />
 ! allocate work arrays ! ADDED THESE<br />
 ! (accumulate the stat values, since each allocate overwrites errstat)<br />
 errtot = 0<br />
 allocate( ubar(ewn-1,nsn-1), stat=errstat ); errtot = errtot + abs(errstat); ubar = 0.0_dp<br />
 allocate( vbar(ewn-1,nsn-1), stat=errstat ); errtot = errtot + abs(errstat); vbar = 0.0_dp<br />
 allocate( ubar_grid(ewn+1,nsn+1), stat=errstat ); errtot = errtot + abs(errstat); ubar_grid = 0.0_dp<br />
 allocate( vbar_grid(ewn+1,nsn+1), stat=errstat ); errtot = errtot + abs(errstat); vbar_grid = 0.0_dp<br />
 allocate( thck_grid(ewn+2,nsn+2), stat=errstat ); errtot = errtot + abs(errstat); thck_grid = 0.0_dp<br />
 allocate( flux_net(ewn,nsn), stat=errstat ); errtot = errtot + abs(errstat); flux_net = 0.0_dp<br />
 allocate( mask(ewn,nsn), stat=errstat ); errtot = errtot + abs(errstat); mask = 0.0_dp<br />
 allocate( thck_old(ewn,nsn), stat=errstat ); errtot = errtot + abs(errstat); thck_old = 0.0_dp<br />
<br />
 if ( errtot /= 0 ) then ! ADDED FOR ERROR HANDLING<br />
 print *, 'error: allocation in fo_upwind_advect failed!'<br />
 stop<br />
 end if<br />
<br />
end subroutine fo_upwind_advect_init</source><br />
<br />
Note that ewn, nsn are passed in above, so we need to make sure they are passed from the main code <br />
where this call sits. In '''glide.F90''' we have ...<br />
<br />
<source lang=fortran> call fo_upwind_advect_init( model%general%ewn, model%general%nsn ) ! ADDED ARGS</source><br />
<br />
As with the initialization subroutine, we add deallocation statements for work arrays in the finalization subroutine ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_final( )<br />
<br />
! finalization for 1st-order upwinding mass advection<br />
<br />
implicit none ! ADDED<br />
<br />
! deallocate work arrays ! ADDED THESE<br />
if( allocated( ubar ) ) deallocate( ubar )<br />
if( allocated( vbar ) ) deallocate( vbar )<br />
if( allocated( ubar_grid ) ) deallocate( ubar_grid )<br />
if( allocated( vbar_grid ) ) deallocate( vbar_grid )<br />
if( allocated( thck_grid ) ) deallocate( thck_grid )<br />
if( allocated( flux_net ) ) deallocate( flux_net )<br />
if( allocated( mask ) ) deallocate( mask )<br />
if( allocated( thck_old ) ) deallocate( thck_old )<br />
<br />
<br />
end subroutine fo_upwind_advect_final</source><br />
<br />
Type '''make''' again and check for a successful build. If so, proceed to '''Step 6'''.<br />
<br />
== '''Step 6''' ==<br />
<br />
Now we are actually ready to do something ... that is, fill in the guts of the new subroutine, where the thickness evolution calculation takes place. Rather than have you figure out how to code up the 1st-order upwinding scheme on your own, we'll provide you with the chunk of code to do that (below), noting that the details of the calculation scheme are discussed [[Solving the equation for thickness evolution|HERE]]. <br />
<br />
<source lang=fortran>! ----------------------------------<br />
<br />
subroutine fo_upwind_advect_main( thck, stagthck, acab, dt, uflx, vflx, ewn, nsn, dew, dns )<br />
<br />
 ! 1st-order upwinding mass advection that uses a finite-volume like scheme for <br />
 ! mass conservation. Velocities from the staggered grid (B-grid) are averaged onto the <br />
 ! faces of the non-staggered grid (i.e. faces of the grid where scalars like thickness live). <br />
 ! Thus, the averaged velocities exist on a C-grid, allowing mass transport to be treated <br />
 ! in a finite-volume manner; depth-averaged velocities give the fluxes out of each cell <br />
 ! centered on a thickness point, and the thickness advected is chosen according to upwinding.<br />
 ! <br />
 ! Note that this works at the calving front because a non-zero staggered thickness there <br />
 ! defines the velocities there. These velocities can be used to define the velocity at<br />
 ! the face of the last non-zero thickness cell (on the normal grid), which corresponds to<br />
 ! the location of the calving front. <br />
<br />
implicit none<br />
<br />
real (kind = dp), intent(in) :: dt<br />
real (kind = dp), dimension(:,:), intent(inout) :: thck<br />
real (kind = dp), dimension(:,:), intent(in) :: stagthck<br />
real (kind = sp), dimension(:,:), intent(in) :: acab<br />
real (kind = dp), dimension(:,:), intent(in) :: uflx, vflx<br />
real (kind = dp), intent(in) :: dew, dns<br />
integer, intent(in) :: ewn, nsn<br />
<br />
real (kind = dp) :: He, Hw, Hn, Hs, ue, uw, vn, vs ! upwinding variables and interface velocities<br />
<br />
integer :: ew, ns<br />
<br />
where( stagthck > 0.0_dp ) ! calculate the depth-ave velocities<br />
ubar = uflx / stagthck<br />
vbar = vflx / stagthck<br />
end where<br />
<br />
where( thck > 0.0_dp ) ! mask for eventually removing flux outside of the original domain<br />
mask = 1.0_dp ! (i.e. stuff that moves past the calving front goes away)<br />
else where<br />
mask = 0.0_dp<br />
end where<br />
<br />
thck_old = thck ! save the old thickness for debugging purposes<br />
<br />
! fill in the interior values on the extended velocity grid (extended B-grid)<br />
ubar_grid(2:ewn,2:nsn) = ubar<br />
vbar_grid(2:ewn,2:nsn) = vbar<br />
<br />
! fill in the interior values on the extended thickness grid<br />
thck_grid(2:ewn+1,2:nsn+1) = thck(:,:)<br />
<br />
! calculate the interface velocities from the extended B-grid, then use upwinding<br />
! criterion to advect thickness in or out of cells (NOTE that parts of this could<br />
! probably be vectorized at some point)<br />
do ns = 1, nsn<br />
do ew = 1, ewn<br />
<br />
! interface depth-ave velocities<br />
ue = ( ubar_grid(ew+1,ns+1) + ubar_grid(ew+1,ns) ) / 2.0d0<br />
uw = ( ubar_grid(ew,ns+1) + ubar_grid(ew,ns) ) / 2.0d0<br />
vn = ( vbar_grid(ew,ns+1) + vbar_grid(ew+1,ns+1) ) / 2.0d0<br />
vs = ( vbar_grid(ew,ns) + vbar_grid(ew+1,ns) ) / 2.0d0<br />
<br />
! choose thickness to advect based on upwinding<br />
if( ue > 0.0d0 )then<br />
He = - thck_grid(ew+1,ns+1) ! negative signs necessary so that flux to the east<br />
else ! results in mass loss in this volume (and vice versa)<br />
He = - thck_grid(ew+2,ns+1)<br />
end if<br />
if( uw > 0.0d0 )then<br />
Hw = thck_grid(ew,ns+1)<br />
else<br />
Hw = thck_grid(ew+1,ns+1)<br />
end if<br />
if( vn > 0.0d0 )then<br />
Hn = - thck_grid(ew+1,ns+1) ! negative signs here as above for ue, and He<br />
else<br />
Hn = - thck_grid(ew+1,ns+2)<br />
end if<br />
if( vs > 0.0d0 )then<br />
Hs = thck_grid(ew+1,ns)<br />
else<br />
Hs = thck_grid(ew+1,ns+1)<br />
end if<br />
<br />
! net flux into/out of each cell<br />
flux_net(ew,ns) = ( ue*He*dns + uw*Hw*dns + vn*Hn*dew + vs*Hs*dew )<br />
<br />
end do<br />
end do<br />
<br />
thck = thck_old + ( 1 / (dns * dew) * flux_net ) * dt + (acab * dt)<br />
<br />
! debugging<br />
print *, ' '<br />
print *, 'net volume change = ', sum( (thck-thck_old)*mask )*thk0 *dew*dns*len0**2<br />
print *, 'net calving flux = ', sum( thck * (1.0d0-mask) )*thk0*dew*dns*len0**2<br />
print *, '(for the confined shelf experiment, the above two should sum to ~0)'<br />
print *, 'mean accum/ablat rate = ', sum( acab * mask ) / sum(mask) / (dt*tim0) * scyr<br />
print *, 'mean dH/dt = ', sum( (thck-thck_old)*mask )*thk0 / sum(mask) / (dt*tim0) * scyr<br />
print *, 'sum of flux change (should be ~0) = ', sum( flux_net*vel0*thk0*len0 )<br />
print *, ' '<br />
! pause<br />
<br />
thck = thck * mask ! remove any mass advected outside of initial domain<br />
<br />
 where( thck < 0.0_dp ) ! guard against thickness going negative<br />
thck = 0.0_dp<br />
end where<br />
<br />
end subroutine fo_upwind_advect_main<br />
<br />
! ----------------------------------</source><br />
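The donor-cell logic above is easy to test in isolation. Below is a standalone Python sketch (not part of Glimmer-CISM; the mass balance term and the calving mask are omitted, and a uniform grid is assumed) that mirrors the loop above: extended grids, B-grid to C-grid velocity averaging, and upwind donor selection with the same sign convention. It lets you check that the scheme conserves mass and stays non-negative for CFL numbers below 1 ...<br />

```python
import numpy as np

def fo_upwind_step(thck, ubar, vbar, dew, dns, dt):
    """One explicit step of the 1st-order (donor-cell) upwind scheme.

    thck:       (ewn, nsn) thickness on the normal grid
    ubar, vbar: (ewn-1, nsn-1) depth-averaged velocities on the B-grid
    Mirrors the Fortran loop: B-grid velocities are averaged onto cell
    faces (a C-grid) and the upwind (donor) thickness is advected.
    """
    ewn, nsn = thck.shape
    ubar_grid = np.zeros((ewn + 1, nsn + 1)); ubar_grid[1:ewn, 1:nsn] = ubar
    vbar_grid = np.zeros((ewn + 1, nsn + 1)); vbar_grid[1:ewn, 1:nsn] = vbar
    thck_grid = np.zeros((ewn + 2, nsn + 2)); thck_grid[1:ewn + 1, 1:nsn + 1] = thck
    flux_net = np.zeros_like(thck)
    for ns in range(nsn):
        for ew in range(ewn):
            # interface depth-averaged velocities (B-grid to C-grid averaging)
            ue = 0.5 * (ubar_grid[ew + 1, ns + 1] + ubar_grid[ew + 1, ns])
            uw = 0.5 * (ubar_grid[ew, ns + 1] + ubar_grid[ew, ns])
            vn = 0.5 * (vbar_grid[ew, ns + 1] + vbar_grid[ew + 1, ns + 1])
            vs = 0.5 * (vbar_grid[ew, ns] + vbar_grid[ew + 1, ns])
            # donor-cell (upwind) thickness; the minus signs make eastward and
            # northward outflow a mass loss, as in the Fortran version
            He = -thck_grid[ew + 1, ns + 1] if ue > 0 else -thck_grid[ew + 2, ns + 1]
            Hw = thck_grid[ew, ns + 1] if uw > 0 else thck_grid[ew + 1, ns + 1]
            Hn = -thck_grid[ew + 1, ns + 1] if vn > 0 else -thck_grid[ew + 1, ns + 2]
            Hs = thck_grid[ew + 1, ns] if vs > 0 else thck_grid[ew + 1, ns + 1]
            flux_net[ew, ns] = ue*He*dns + uw*Hw*dns + vn*Hn*dew + vs*Hs*dew
    return thck + flux_net * dt / (dew * dns)

# a blob advected eastward at CFL = u*dt/dew = 0.5
thck = np.zeros((12, 10)); thck[4:7, 4:7] = 1.0
new = fo_upwind_step(thck, np.ones((11, 9)), np.zeros((11, 9)), 1.0, 1.0, 0.5)
print(new.sum() - thck.sum())
```

Each interior face appears once as an outflow and once as an inflow with the same donor thickness, so the cell-by-cell fluxes telescope and the total mass changes only through the domain boundary (or, in the full model, through acab and the calving-front mask).<br />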
<br />
==Ice Sheet Evolution==<br />
<br />
At this point, if you have one final successful build, you should be ready to use the code to actually evolve the ice sheet thickness. To try some simple test cases with the higher-order model and ice sheet evolution using the 1st-order scheme, go here:<br />
<br />
* [[Ice Sheet Evolution Experiments]]</div>
Hoffman, [http://websrv.cs.umt.edu/isis/index.php/Adding_dH/dt_module Adding dH/dt module], 2009-08-13T21:55:34Z<p>Hoffman: /* Step 1 */</p>
<hr />
<div>==Overview==<br />
<br />
This page contains step-by-step instructions for adding a new module to Glimmer-CISM. In this case, the module is a first-order, upwinding advection scheme for mass transport (dH/dt) using velocities calculated from a higher-order dynamics model. The procedure, however, is generic and could apply to adding almost any module. The goal here is not to get lost in the details of the upwinding scheme (we'll provide the code for that later) but rather to understand the steps for adding a module in an incremental, structured way that allows you to track small errors and fix them at each step. <br />
<br />
*To go directly to some exercises using the higher-order dynamics and the evolution scheme, go to [[Ice Sheet Evolution Experiments]].<br />
*For mathematical background on upwinded transport, consult the [[Solving the equation for thickness evolution]] pages.<br />
<br />
<br />
== '''Step 0''' ==<br />
<br />
We first need to download the code, configure it, and check that we get a successful initial build (presumably you've done this already if you've worked through the higher-order test suite exercises). If not, type the following from within the top-level directory containing the source code:<br />
<br />
./bootstrap<br />
 ./configure --with-netcdf=/path/to_netcdf # Note: specify your own path to the netCDF libs here<br />
make<br />
<br />
== '''Step 1''' ==<br />
<br />
In step 1, we will create a module containing "empty" subroutines and add this module to the build.<br />
<br />
Create a module '''fo_upwind_advect.F90''' that contains the necessary subroutines. For now, these will just be "stubs", which we will fill in later.<br />
<br />
<source lang=fortran>module fo_upwind_advect<br />
<br />
! subroutines for mass advection scheme based on 1st order upwinding<br />
<br />
contains<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_init( )<br />
<br />
! initialization for 1st-order upwinding mass advection<br />
<br />
end subroutine fo_upwind_advect_init<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_final( )<br />
<br />
! finalization for 1st-order upwinding mass advection<br />
<br />
end subroutine fo_upwind_advect_final<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_driver( )<br />
<br />
! driver for 1st-order upwind mass advection<br />
<br />
end subroutine fo_upwind_advect_driver<br />
<br />
!----------------------------------------------------------------------<br />
<br />
subroutine fo_upwind_advect_main( )<br />
<br />
! 1st-order upwinding mass advection<br />
<br />
end subroutine fo_upwind_advect_main<br />
<br />
!----------------------------------------------------------------------<br />
<br />
end module fo_upwind_advect</source><br />
<br />
<br />
Add this new module to the build by editing '''Makefile.am''' in '''src/fortran/''' ...<br />
<br />
<source lang=fortran>libglide_a_SOURCES = glide.F90 glide_setup.F90 glide_types.F90 glide_temp.F90 \<br />
glide_bwater.F90 glide_deriv.F90 xls.F90 ice3d_lib.F90 \<br />
glide_velo_higher.F90 glide_thck.F90 glide_velo.F90 \<br />
glide_mask.F90 glide_stop.F90 glide_io.F90 \<br />
glide_nc_custom.F90 isostasy.F90 isostasy_el.F90\<br />
isostasy_setup.F90 isostasy_types.F90 glide_lithot.F90\<br />
glide_lithot3d.F90 glide_lithot1d.F90 glide_profile.F90\<br />
glide_diagnostics.F90 glissade.F90 glissade_remap.F90\<br />
glissade_velo.F90 glissade_constants.F90 glide_vertint.F90\<br />
glide_thckmask.F90 glide_nonlin.F90 glide_grids.F90\<br />
glam.F90 glam_strs2.F90 glam_thck_ppm.F90\<br />
remap_advection.F90 remap_glamutils.F90 glide_ground.F90</source><br />
<br />
becomes ...<br />
<br />
<source lang=fortran>libglide_a_SOURCES = glide.F90 glide_setup.F90 glide_types.F90 glide_temp.F90 \<br />
glide_bwater.F90 glide_deriv.F90 xls.F90 ice3d_lib.F90 \<br />
glide_velo_higher.F90 glide_thck.F90 glide_velo.F90 \<br />
glide_mask.F90 glide_stop.F90 glide_io.F90 \<br />
glide_nc_custom.F90 isostasy.F90 isostasy_el.F90\<br />
isostasy_setup.F90 isostasy_types.F90 glide_lithot.F90\<br />
glide_lithot3d.F90 glide_lithot1d.F90 glide_profile.F90\<br />
glide_diagnostics.F90 glissade.F90 glissade_remap.F90\<br />
glissade_velo.F90 glissade_constants.F90 glide_vertint.F90\<br />
glide_thckmask.F90 glide_nonlin.F90 glide_grids.F90\<br />
glam.F90 glam_strs2.F90 glam_thck_ppm.F90\<br />
remap_advection.F90 remap_glamutils.F90 glide_ground.F90 \<br />
fo_upwind_advect.F90</source><br />
<br />
Note: the backslash must be the last character on its continued line; make treats anything after the "\" (including a comment) as breaking the continuation, and "!" is not a comment character in a Makefile anyway ("#" is). The change here is the trailing "\" on the old last line plus the new '''fo_upwind_advect.F90''' line.<br />
Now we stop adding/editing things for a minute and type '''make''' to rebuild and make sure we haven't broken the code yet. If you get a successful build, then proceed to '''Step 2'''.<br />
<br />
== '''Step 2''' ==<br />
<br />
In step 2, we will do some minor updates to other parts of the code so that there is an option to call our 1st order upwinding scheme when we want to evolve the ice thickness (that is, rather than using some other scheme).<br />
<br />
In '''glide_types.F90''', add the following to the transport scheme options around line 100 ...<br />
<br />
<source lang=fortran> integer, parameter :: EVOL_PSEUDO_DIFF = 0<br />
integer, parameter :: EVOL_ADI = 1<br />
integer, parameter :: EVOL_DIFFUSION = 2<br />
integer, parameter :: EVOL_INC_REMAP = 3<br />
integer, parameter :: EVOL_FO_UPWIND = 4 ! ADDED THIS LINE</source><br />
<br />
... similarly, update some lines around 207 for documentation ...<br />
<br />
<source lang=fortran> !*FD Thickness evolution method:<br />
!*FD \begin{description}<br />
!*FD \item[0] Pseudo-diffusion approach <br />
!*FD \item[2] Diffusion approach (also calculates velocities)<br />
!*FD \item[3] Incremental remapping<br />
!*FD \item[4] 1st-order upwinding scheme ! ADDED THIS LINE<br />
!*FD \end{description}</source><br />
<br />
In '''glide_setup.F90''', around line 475, change ...<br />
<br />
<source lang=fortran> character(len=*), dimension(0:4), parameter :: evolution = (/ & ! CHANGED 0:3 TO 0:4<br />
'pseudo-diffusion ', &<br />
'ADI scheme ', &<br />
'iterated diffusion ', &<br />
'remap thickness ', & <br />
'1st order upwind ' /) ! ADDED THIS LINE</source><br />
<br />
Time to type '''make''' again and check for a successful build. If so, proceed to '''Step 3'''.<br />
<br />
<br />
== '''Step 3''' ==<br />
<br />
Now we will add use statements, if constructs, and dummy calls to the advection subroutines from the appropriate places in the code. Hence, we are setting up all the necessary structure to call the subroutines but are not passing any arguments to them yet (nor are they actually doing anything yet).<br />
<br />
In '''glide.F90''', add the following use statement to subroutine '''glide_initialise''' ...<br />
<br />
<source lang=fortran> use glam_strs2, only : glam_velo_fordsiapstr_init<br />
use remap_glamutils, only : horizontal_remap_init<br />
use fo_upwind_advect, only : fo_upwind_advect_init ! ADDED<br />
</source><br />
<br />
Now add if construct and call to the initialization stub in '''fo_upwind_advect_mod.F90''', around line 190 of '''glide.F90''', just after the if construct for EVOL_INC_REMAP ...<br />
<br />
<source lang=fortran> if (model%options%whichevol== EVOL_FO_UPWIND ) then<br />
call fo_upwind_advect_init( )<br />
end if</source><br />
<br />
In '''glide_stop.F90''', add the following to the use statements ...<br />
<br />
<source lang=fortran>module glide_stop<br />
<br />
use glide_types<br />
use glimmer_log<br />
use remap_glamutils<br />
use fo_upwind_advect, only : fo_upwind_advect_final ! ADDED</source><br />
<br />
Add the necessary if construct to '''glide_stop.F90''' around line 150, as we did with the initialization routine ...<br />
<br />
<source lang=fortran> if (model%options%whichevol== EVOL_FO_UPWIND ) then<br />
call fo_upwind_advect_final( )<br />
endif</source><br />
<br />
Finally, update the use statements in '''glide.F90''' around line 340, and the case construct for ice sheet evolution, around line 380, so that the 1st order upwinding subroutines can be called ...<br />
<br />
<source lang=fortran> use glide_thck<br />
use glide_velo<br />
use glide_ground<br />
use glide_setup<br />
use glide_temp<br />
use glide_mask<br />
use isostasy<br />
use glam, only: inc_remap_driver<br />
use fo_upwind_advect, only: fo_upwind_advect_driver ! ADDED </source><br />
<br />
... and add a call to the driver routine to the case construct (passing no args yet) ...<br />
<br />
<source lang=fortran> case(EVOL_FO_UPWIND) ! Use first order upwind scheme for mass transport<br />
<br />
call fo_upwind_advect_driver( )</source><br />
<br />
Make sure to place it BEFORE the existing "end select" statement!<br />
<br />
Type '''make''' again and check for a successful build. If so, proceed to '''Step 4'''.<br />
<br />
== '''Step 4''' ==<br />
<br />
At this point we've done nothing but build the necessary structures in the code. Now we will start filling in the subroutines with the necessary variable definitions. We'll follow this by passing arguments and, if everything still works, we can actually start to do something w/ those arguments within the subroutines themselves.<br />
<br />
Note that, in practice, one doesn't always know ahead of time exactly what variables are needed within a particular subroutine, and there is some trial and error while figuring it out. Here, we'll assume we've thought this out really well ahead of time and we know exactly what we need to do before we ever started writing the code (like punch-card programmers of old), in which case we know exactly which arguments are needed when we start.<br />
<br />
<br />
First, we'll fill out the rest of the "driver" subroutine, which has two parts, (1) calling the higher-order dynamics subroutines '''run_ho_diagnostic''' to get the appropriate velocity fields and, (2) calling '''fo_upwind_advect_main''', which uses those velocity fields, the thickness field, and then moves mass around to calculate a new thickness field. In subroutine '''fo_upwind_advect_driver''' we add the following ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_driver( model ) ! ADDED ARG<br />
<br />
! driver routine for the 1st order, upwind mass transport scheme<br />
<br />
type(glide_global_type), intent(inout) :: model ! ADDED <br />
<br />
call run_ho_diagnostic(model) ! ADDED CALL AND ARG<br />
<br />
call fo_upwind_advect_main( )<br />
<br />
... </source><br />
<br />
Note that we also passed the model in to the driver and gave "model" a type definition. We also need to add the appropriate use statements for "model" and to access the higher-order dynamics routines, which we'll do below. <br />
<br />
Next, add argument definitions to the main subroutine in '''fo_upwind_advect.F90''' ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_main( thck, stagthck, acab, dt, uflx, vflx, ewn, nsn, dew, dns ) ! CHANGED<br />
<br />
! 1st-order upwinding mass advection<br />
<br />
implicit none ! ADDED<br />
<br />
real (kind = dp), intent(in) :: dt ! time step (sec)<br />
real (kind = dp), dimension(:,:), intent(inout) :: thck ! thickness on normal grid (m)<br />
real (kind = dp), dimension(:,:), intent(in) :: stagthck ! thickness on staggered grid (m)<br />
real (kind = sp), dimension(:,:), intent(in) :: acab ! surf mass balance (accum/ablation) (m/sec)<br />
real (kind = dp), dimension(:,:), intent(in) :: uflx, vflx ! flux in x,y directions (m^2/sec)<br />
real (kind = dp), intent(in) :: dew, dns ! grid spacing in x,y directions (m)<br />
integer, intent(in) :: ewn, nsn ! no. of grid points in x,y directions<br />
<br />
real (kind = dp) :: He, Hw, Hn, Hs, ue, uw, vn, vs ! ADDED<br />
integer :: ew, ns<br />
<br />
end subroutine fo_upwind_advect_main</source><br />
<br />
<br />
Because these definitions require kind "dp", we need to add a use statment at the start of the module. We've also added the necessary use statement for accessing "model" and the higher-order dynamics subroutines (from above) and a few more we'll use later on ...<br />
<br />
<source lang=fortran>module fo_upwind_advect<br />
<br />
! subroutines for mass advection scheme based on 1st order upwinding<br />
<br />
! ADDED<br />
use glimmer_paramets, only: sp, dp, len0, thk0, tim0, vel0, tim0, acc0, scyr ! scales for swapping between dim and non-dim vars<br />
use glide_types<br />
use glide_velo_higher<br />
<br />
private<br />
public :: fo_upwind_advect_init, fo_upwind_advect_driver, fo_upwind_advect_final<br />
<br />
... etc ... </source><br />
<br />
To avoid someone mucking around with variables in the module that we don't want them to touch, we've also added the "private" and "public" statements; the only parts of the module that can be accessed from outside are through calls to the three "public" subroutines. <br />
<br />
<br />
The arguments that we defined above in '''fo_upwind_advect_main''' need to be passed in from the driver routine '''fo_upwind_advect_driver''' to the subroutine that does all of the work, '''fo_upwind_advect_main'''. To do this, we must access them from the derived type "model". The entire driver subroutine then becomes ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_driver( model )<br />
<br />
! driver routine for the 1st order, upwind mass transport scheme<br />
<br />
type(glide_global_type), intent(inout) :: model<br />
<br />
call run_ho_diagnostic(model) ! get velocities and fluxes from HO dynamic subroutines<br />
<br />
call fo_upwind_advect_main( model%geometry%thck, model%geomderv%stagthck, &<br />
model%climate%acab, model%numerics%dt, &<br />
model%velocity_hom%uflx,model%velocity_hom%vflx, &<br />
model%general%ewn, model%general%nsn, &<br />
model%numerics%dew, model%numerics%dns )<br />
<br />
end subroutine fo_upwind_advect_driver</source><br />
<br />
<br />
Finally, we pass the derived type "model" during the call to the driver subroutine in '''glide.F90''' ...<br />
<br />
<source lang=fortran> case(EVOL_FO_UPWIND) ! Use first order upwind scheme for mass transport<br />
<br />
call fo_upwind_advect_driver( model ) ! ADDED ARGUMENT<br />
<br />
end select</source><br />
<br />
Type '''make''' again and check for a successful build. If so, proceed to '''Step 5'''.<br />
<br />
== '''Step 5''' ==<br />
<br />
Now let's fill in the init and finalization subroutines in '''fo_upwind_advect.F90'''. This takes some thought ahead of time and might require a bit of trial and error. Again, for now we'll assume that we've thought this out really well ahead of time and we know exactly what we need.<br />
<br />
First, declare any other necessary variables at the start of the module '''fo_upwind_advect.F90'''. Here, these are the allocatable work arrays that we'll use in '''fo_upwind_advect_main'''. The beginning of the module becomes ...<br />
<br />
<source lang=fortran>module fo_upwind_advect<br />
<br />
!----------------------------------------------------------------------<br />
<br />
! init, finalize, and driver subroutines for mass advection based on 1st order upwinding<br />
<br />
use glimmer_paramets, only: sp, dp, len0, thk0, tim0, vel0, acc0, scyr<br />
use glide_types<br />
use glide_velo_higher<br />
<br />
private<br />
public :: fo_upwind_advect_init, fo_upwind_advect_driver, fo_upwind_advect_final<br />
<br />
! allocatable work arrays ! ADDED<br />
real (kind = dp), allocatable, dimension(:,:) :: &<br />
ubar, vbar, &<br />
ubar_grid, vbar_grid, &<br />
flux_net, thck_grid, &<br />
mask, thck_old<br />
<br />
contains<br />
<br />
... etc ...</source><br />
<br />
Now change the initialization and finalization subroutines ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_init( ewn, nsn ) ! ADDED ARGS HERE<br />
<br />
! initialization for 1st-order upwinding mass advection<br />
<br />
implicit none ! ADDED<br />
<br />
integer, intent(in) :: ewn, nsn ! horizontal grid dimensions ! ADDED TYPE DEF<br />
<br />
integer :: errstat, totstat ! ADDED FOR ERROR HANDLING<br />
<br />
! allocate work arrays, accumulating the allocation status so that a<br />
! failure in any allocation (not just the last) is caught below ! ADDED THESE<br />
totstat = 0<br />
allocate( ubar(ewn-1,nsn-1), stat=errstat ); totstat = totstat + errstat<br />
allocate( vbar(ewn-1,nsn-1), stat=errstat ); totstat = totstat + errstat<br />
allocate( ubar_grid(ewn+1,nsn+1), stat=errstat ); totstat = totstat + errstat<br />
allocate( vbar_grid(ewn+1,nsn+1), stat=errstat ); totstat = totstat + errstat<br />
allocate( thck_grid(ewn+2,nsn+2), stat=errstat ); totstat = totstat + errstat<br />
allocate( flux_net(ewn,nsn), stat=errstat ); totstat = totstat + errstat<br />
allocate( mask(ewn,nsn), stat=errstat ); totstat = totstat + errstat<br />
allocate( thck_old(ewn,nsn), stat=errstat ); totstat = totstat + errstat<br />
<br />
if ( totstat /= 0 ) then ! ADDED FOR ERROR HANDLING<br />
print *, 'error: allocation in fo_upwind_advect failed!'<br />
stop<br />
end if<br />
<br />
! zero the freshly allocated work arrays<br />
ubar = 0.0_dp; vbar = 0.0_dp; ubar_grid = 0.0_dp; vbar_grid = 0.0_dp<br />
thck_grid = 0.0_dp; flux_net = 0.0_dp; mask = 0.0_dp; thck_old = 0.0_dp<br />
<br />
end subroutine fo_upwind_advect_init</source><br />
<br />
Note that ewn, nsn are passed in above, so we need to make sure they are passed from the main code <br />
where this call sits. In '''glide.F90''' we have ...<br />
<br />
<source lang=fortran> call fo_upwind_advect_init( model%general%ewn, model%general%nsn ) ! ADDED ARGS</source><br />
<br />
As with the initialization subroutine, we add deallocation statements for work arrays in the finalization subroutine ...<br />
<br />
<source lang=fortran> subroutine fo_upwind_advect_final( )<br />
<br />
! finalization for 1st-order upwinding mass advection<br />
<br />
implicit none ! ADDED<br />
<br />
! deallocate work arrays ! ADDED THESE<br />
if( allocated( ubar ) ) deallocate( ubar )<br />
if( allocated( vbar ) ) deallocate( vbar )<br />
if( allocated( ubar_grid ) ) deallocate( ubar_grid )<br />
if( allocated( vbar_grid ) ) deallocate( vbar_grid )<br />
if( allocated( thck_grid ) ) deallocate( thck_grid )<br />
if( allocated( flux_net ) ) deallocate( flux_net )<br />
if( allocated( mask ) ) deallocate( mask )<br />
if( allocated( thck_old ) ) deallocate( thck_old )<br />
<br />
<br />
end subroutine fo_upwind_advect_final</source><br />
<br />
Type '''make''' again and check for a successful build. If so, proceed to '''Step 6'''.<br />
<br />
== '''Step 6''' ==<br />
<br />
Now we are actually ready to do something ... that is, fill in the guts of the new subroutine, where the thickness evolution calculation takes place. Rather than have you figure out how to code up the 1st-order upwinding scheme on your own, we'll provide you with the chunk of code to do that (below), noting that the details of the calculation scheme are discussed [[Solving the equation for thickness evolution|HERE]]. <br />
<br />
<source lang=fortran>! ----------------------------------<br />
<br />
subroutine fo_upwind_advect_main( thck, stagthck, acab, dt, uflx, vflx, ewn, nsn, dew, dns )<br />
<br />
! 1st-order upwinding mass advection that uses a finite-volume like scheme for <br />
! mass conservation. Velocities from the staggered grid (B-grid) are averaged onto the <br />
! faces of the non-staggered grid (i.e. faces of the grid where scalars like thickness live). <br />
! Thus, the averaged velocities exist on a C-grid, allowing mass transport to be treated <br />
! in a finite-volume manner; depth averaged velocities give the fluxes out of each cell <br />
! centered on a thickness point and the thickness advected is chosen according to upwinding.<br />
! <br />
! Note that this works at the calving front because a non-zero staggered thickness there <br />
! defines the velocities there. These velocities can be used to define the velocity at<br />
! the face of the last non-zero thickness cell (on the normal grid) which corresponds to<br />
! the location of the calving front. <br />
<br />
implicit none<br />
<br />
real (kind = dp), intent(in) :: dt<br />
real (kind = dp), dimension(:,:), intent(inout) :: thck<br />
real (kind = dp), dimension(:,:), intent(in) :: stagthck<br />
real (kind = sp), dimension(:,:), intent(in) :: acab<br />
real (kind = dp), dimension(:,:), intent(in) :: uflx, vflx<br />
real (kind = dp), intent(in) :: dew, dns<br />
integer, intent(in) :: ewn, nsn<br />
<br />
real (kind = dp) :: He, Hw, Hn, Hs, ue, uw, vn, vs ! upwinding variables and interface velocities<br />
<br />
integer :: ew, ns<br />
<br />
where( stagthck > 0.0_dp ) ! calculate the depth-ave velocities<br />
ubar = uflx / stagthck<br />
vbar = vflx / stagthck<br />
end where<br />
<br />
where( thck > 0.0_dp ) ! mask for eventually removing flux outside of the original domain<br />
mask = 1.0_dp ! (i.e. stuff that moves past the calving front goes away)<br />
else where<br />
mask = 0.0_dp<br />
end where<br />
<br />
thck_old = thck ! save the old thickness for debugging purposes<br />
<br />
! fill in the interior values on the extended velocity grid (extended B-grid)<br />
ubar_grid(2:ewn,2:nsn) = ubar<br />
vbar_grid(2:ewn,2:nsn) = vbar<br />
<br />
! fill in the interior values on the extended thickness grid<br />
thck_grid(2:ewn+1,2:nsn+1) = thck(:,:)<br />
<br />
! calculate the interface velocities from the extended B-grid, then use upwinding<br />
! criterion to advect thickness in or out of cells (NOTE that parts of this could<br />
! probably be vectorized at some point)<br />
do ns = 1, nsn<br />
do ew = 1, ewn<br />
<br />
! interface depth-ave velocities<br />
ue = ( ubar_grid(ew+1,ns+1) + ubar_grid(ew+1,ns) ) / 2.0d0<br />
uw = ( ubar_grid(ew,ns+1) + ubar_grid(ew,ns) ) / 2.0d0<br />
vn = ( vbar_grid(ew,ns+1) + vbar_grid(ew+1,ns+1) ) / 2.0d0<br />
vs = ( vbar_grid(ew,ns) + vbar_grid(ew+1,ns) ) / 2.0d0<br />
<br />
! choose thickness to advect based on upwinding<br />
if( ue > 0.0d0 )then<br />
He = - thck_grid(ew+1,ns+1) ! negative signs necessary so that flux to the east<br />
else ! results in mass loss in this volume (and vice versa)<br />
He = - thck_grid(ew+2,ns+1)<br />
end if<br />
if( uw > 0.0d0 )then<br />
Hw = thck_grid(ew,ns+1)<br />
else<br />
Hw = thck_grid(ew+1,ns+1)<br />
end if<br />
if( vn > 0.0d0 )then<br />
Hn = - thck_grid(ew+1,ns+1) ! negative signs here as above for ue, and He<br />
else<br />
Hn = - thck_grid(ew+1,ns+2)<br />
end if<br />
if( vs > 0.0d0 )then<br />
Hs = thck_grid(ew+1,ns)<br />
else<br />
Hs = thck_grid(ew+1,ns+1)<br />
end if<br />
<br />
! net flux into/out of each cell<br />
flux_net(ew,ns) = ( ue*He*dns + uw*Hw*dns + vn*Hn*dew + vs*Hs*dew )<br />
<br />
end do<br />
end do<br />
<br />
thck = thck_old + ( 1 / (dns * dew) * flux_net ) * dt + (acab * dt)<br />
<br />
! debugging<br />
print *, ' '<br />
print *, 'net volume change = ', sum( (thck-thck_old)*mask )*thk0 *dew*dns*len0**2<br />
print *, 'net calving flux = ', sum( thck * (1.0d0-mask) )*thk0*dew*dns*len0**2<br />
print *, '(for the confined shelf experiment, the above two should sum to ~0)'<br />
print *, 'mean accum/ablat rate = ', sum( acab * mask ) / sum(mask) / (dt*tim0) * scyr<br />
print *, 'mean dH/dt = ', sum( (thck-thck_old)*mask )*thk0 / sum(mask) / (dt*tim0) * scyr<br />
print *, 'sum of flux change (should be ~0) = ', sum( flux_net*vel0*thk0*len0 )<br />
print *, ' '<br />
! pause<br />
<br />
thck = thck * mask ! remove any mass advected outside of initial domain<br />
<br />
where( thck < 0.0_dp ) ! guard against thickness going negative<br />
thck = 0.0_dp<br />
end where<br />
<br />
end subroutine fo_upwind_advect_main<br />
<br />
! ----------------------------------</source><br />
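To see the essence of the scheme without the staggered-grid bookkeeping, here is a minimal 1-D sketch in Python (illustrative only, not Glimmer code): each cell face carries the thickness of the upwind cell, and because each face flux is added to one cell and subtracted from its neighbour, total mass is conserved exactly on a periodic domain.<br />

```python
import numpy as np

def upwind_step(H, u, dx, dt):
    """One 1st-order upwind step of dH/dt = -d(uH)/dx on a periodic domain.

    H : thickness at cell centres, shape (n,)
    u : velocity at cell faces, shape (n,); u[i] sits between cells i-1 and i
    """
    # upwind choice: each face carries the thickness of the cell upstream of it
    H_up = np.where(u > 0.0, np.roll(H, 1), H)
    F = u * H_up                        # face fluxes
    div = (np.roll(F, -1) - F) / dx     # net flux divergence per cell
    return H - dt * div

H = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
u = np.full(5, 0.5)                     # uniform rightward flow, CFL = 0.25
H_new = upwind_step(H, u, dx=1.0, dt=0.5)
# the bump moves right; total mass (H.sum()) is unchanged
```

This mirrors the diagnostic prints in the Fortran above: on a domain with no flux through the boundaries, the net volume change per step should equal the accumulation term plus whatever leaves past the calving front.<br />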
<br />
==Ice Sheet Evolution==<br />
<br />
At this point, given one final successful build, you should be ready to use the code to actually evolve the ice sheet thickness. To try some simple test cases with the higher-order model and ice sheet evolution using the 1st-order scheme, go here:<br />
<br />
* [[Ice Sheet Evolution Experiments]]</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Community_ActivitiesCommunity Activities2009-08-13T20:59:42Z<p>Hoffman: /* Summer Modeling School */</p>
<hr />
<div>==Overview==<br />
It is unlikely that any single research group will be able to significantly reduce the uncertainties in sea level assessment. The effort will instead have to be collaborative in nature, involving numerous groups, each working on some manageable component of the whole. Here, some of the techniques being used to assemble a community of modelers to a common purpose are detailed.<br />
<br />
===[[SeaRISE Assessment]]===<br />
''Perhaps the most important thing that we can do is to formulate a set of modeling experiments that most agree will produce important results for sea level assessment. Documents here represent the current state of that effort.''<br />
<br />
===[[Summer Modeling School]]===<br />
''To train modelers, a summer school has been organized.''<br />
<br />
===[http://oceans11.lanl.gov/trac/CISM Teleconferences]===<br />
Focus groups on each of the following topics meet regularly in teleconferences to discuss progress. <br />
*[http://oceans11.lanl.gov/trac/CISM/wiki/AssessmentGroup Assessment]<br />
*[http://oceans11.lanl.gov/trac/CISM/wiki/SoftwareGroup Software design]<br />
*[http://oceans11.lanl.gov/trac/CISM/wiki/HydrologyGroup Basal, surface, and englacial hydrology]<br />
*[http://oceans11.lanl.gov/trac/CISM/wiki/DatasetsGroup Datasets]<br />
*[http://oceans11.lanl.gov/trac/CISM/wiki/IceOceanCouplingGroup Ice/Ocean coupling]<br />
*[http://oceans11.lanl.gov/trac/CISM/wiki/CalvingGroup Calving]<br />
<br />
Notes from the teleconferences are available through the above links. LANL has been instrumental in organizing and facilitating these teleconferences.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Grounding_line_treatmentsGrounding line treatments2009-08-13T19:16:03Z<p>Hoffman: </p>
<hr />
<div>This [[Media:Gl_treatments.pdf |presentation]] aims to provide a broad overview of grounding line treatments and the concept of marine ice sheet instability. Not all existing methods will be covered; the focus is on 2-D treatments of marine ice sheets that flow into unconfined ice shelves (i.e., the topic of back pressure is ignored). A word of warning: new work on grounding line migration appears regularly, so keep your eyes open…<br />
<br />
<br />
'''Required reading:'''<br />
<br />
Vieli, A. and A. Payne (2005). Assessing the ability of numerical ice sheet<br />
models to simulate grounding line migration. Journal of Geophysical Research 110, F01003.<br />
<br />
Schoof, C. (2007). Ice sheet grounding line dynamics: Steady states, stability,<br />
and hysteresis. Journal of Geophysical Research 112, F03528. [http://www.seas.harvard.edu/climate/pdf/2007/hysteresis.pdf pdf]<br />
<br />
<br />
'''Some references:'''<br />
<br />
Barcilon,V., MacAyeal, D.R., (1993). Steady flow of a viscous ice stream<br />
across a no-slip/free-slip transition at the bed. J. Glaciol. 39 (131),<br />
167–185. [http://geosci.uchicago.edu/people/Barcilon_macayeal.pdf pdf]<br />
<br />
Chugunov, V.A., Wilchinsky, A.V., (1996). Modelling of a marine<br />
glacier and ice-sheet-ice-shelf transition zone based on asymptotic<br />
analysis. Ann. Glaciol. 23, 59–67.<br />
<br />
Durand, G., O. Gagliardini, T. Zwinger and E. Le Meur (2009). Full-Stokes modeling of marine ice-sheets: influence of the grid size, Annals of Glaciology.<br />
<br />
Hindmarsh, R. (1993). Qualitative dynamics of marine ice sheets. In W. Peltier<br />
(Ed.), Ice in the Climate System, Volume I 12, pp. 67–199. NATO ASI Series,<br />
Springer-Verlag Berlin Heidelberg.<br />
<br />
Lestringant, R., (1994). A two-dimensional finite element study of flow<br />
in the transition zone between an ice sheet and an ice shelf. Ann.<br />
Glaciol. 20, 67–72.<br />
<br />
Nowicki S. M. J., D. J. Wingham (2008). Conditions for a steady ice sheet – ice shelf junction, Earth and Planetary Science Letters, doi 10.1016/j.epsl.2007.10.018<br />
<br />
Pollard D., R. M. DeConto (2009). Modelling West Antarctic ice sheet growth and collapse through the past five million years, Nature, doi 10.1038/nature07809</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/ISMIP-HOM_test_suite_exerciseISMIP-HOM test suite exercise2009-08-13T16:51:08Z<p>Hoffman: </p>
<hr />
<div>== Introduction ==<br />
<br />
In this exercise, we will test out Glimmer/CISM's higher-order stress balance subroutines by running the model through a few of the [http://homepages.ulb.ac.be/~fpattyn/ismip/ ISMIP-HOM] test suite problems. The tests we'll run are for 3d models, so the domain and boundary conditions vary in the ''x'' and ''y'' directions (i.e. in map plane). For test A, the topography varies periodically in ''x'' and ''y'', and for test C, the basal traction varies periodically in ''x'' and ''y''. While the amplitude of the variations is the same for all tests, the wavelength is decreased by a factor of two for each successive test. For &lambda;=160 km, the velocity solutions essentially look like those from a shallow-ice model. Halving &lambda; to 80 km, then to 40, 20, 10, and finally 5 km, the higher-order components of the stress balance become successively more important to the velocity solution. Figures 1 and 2 below show relevant input data for each of the two experiments for &lambda; = 80 km. Here, in the interest of time, we will only run tests for the first three wavelengths in the series (160, 80, and 40 km).<br />
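As a point of reference, the test A input fields can be generated in a few lines. The formulas below follow the published ISMIP-HOM experiment description (a 0.5&deg; mean surface slope and 500 m sinusoidal bed bumps about a 1000 m mean thickness); treat this as an illustrative sketch, not the code the python scripts below actually use.<br />

```python
import numpy as np

L = 80.0e3                         # domain length = bump wavelength (m)
x = np.linspace(0.0, L, 51)
X, Y = np.meshgrid(x, x)
omega = 2.0 * np.pi / L

zs = -X * np.tan(np.deg2rad(0.5))  # planar surface sloping at 0.5 degrees
zb = zs - 1000.0 + 500.0 * np.sin(omega * X) * np.sin(omega * Y)  # bumpy bed
thck = zs - zb                     # thickness varies about a 1000 m mean
```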
<br />
<br />
'''Figure 1:''' ISMIP-HOM test A input (periodic basal roughness with no sliding); ice thickness, basal topography, and surface elevation. The basal boundary condition is no slip and the lateral boundary conditions are periodic velocities in ''x'' and ''y''.<br />
<br />
[[Image:ismiphom.a.jpg]]<br />
<br />
<br />
'''Figure 2:''' ISMIP-HOM test C input (sliding according to periodic basal traction); ice thickness, betasquared, and surface elevation. Sliding takes place along the basal boundary according to a "betasquared" (traction) type sliding law. The lateral boundary conditions are periodic velocities in ''x'' and ''y'' (NOTE: In this experiment we have a slab of constant thickness on an inclined plane, with only the sliding properties changing along/across the domain. Also, the surface slope is ~10x smaller than for experiment A).<br />
<br />
[[Image:ismiphom.c.jpg]]<br />
<br />
== Getting started ==<br />
<br />
To get started, we first need to get the source code. You'll find it [http://ulysses.cs.umt.edu/~jessej/sms/ HERE] labeled '''glimmer-cism-ho-wo.tar.gz''' (NOTE: Make sure to get the file with the "wo" extension!!). After downloading the files to your home directory, untar them,<br />
<br />
tar -xvf glimmer-cism-ho-wo.tar.gz<br />
<br />
Then, cd into the directory '''glimmer-cism-ho-wo'''. You now need to configure and build the code by executing the following commands:<br />
<br />
./bootstrap<br />
./configure --with-netcdf=/path/to_netcdf !!! Note: specify your own path to netCDF libs here !!! (e.g. /home/PSU/yourlogin/installs/netcdf-4.0.1 )<br />
<br />
make<br />
<br />
To check if you've had a successful build, cd into '''glimmer-cism-ho-wo/src/fortran/'''. If there is a '''simple_glide''' file there, you were successful.<br />
<br />
For the various python scripts we'll use to interact with and create netCDF files, we'll need to install an additional python library, PYCDF. Instructions for doing that can be found [[Pycdf|HERE]]. You should now be ready to run the model.<br />
<br />
== Running the test cases ==<br />
<br />
To set up the experiments, we will use some configuration files and python scripts developed by Tim Bocek and Jesse Johnson (also, see this [[Validation and Verification|link]]). These set the correct flags, so that Glimmer/CISM calls the necessary subroutines, and construct the necessary input netCDF files.<br />
<br />
First, we need to change into the directory where the test scripts and configuration files live. Assuming that you are starting from the directory '''glimmer-cism-ho-wo''', type<br />
<br />
cd tests/ISMIP-HOM/; ls -l<br />
<br />
to change into that directory and list its contents. The files with a '''.config''' extension are read by the python scripts to construct the fields for the input netCDF file. The '''.config''' files are also read by the model at run time, as they specify the values for various flags (including calls to the HO subroutines rather than the shallow-ice dynamics routines). The files with a '''.py''' extension are the relevant python scripts.<br />
<br />
Let's set up test cases A and C for a domain length of 160 km. First check the configuration file to make sure that the domain length, number of grid spaces, and the grid spacing give the correct input values. <br />
<br />
emacs ishom.a.config &<br />
<br />
gives <br />
<br />
[grid]<br />
upn = 11<br />
ewn = 51<br />
nsn = 51<br />
dew = 3200<br />
dns = 3200<br />
<br />
for the grid variables. Note that 51 x 3200 = 163.2 km, so that our domain will actually have one 3.2 km grid space ''extra'' in each of the ''x'' and ''y'' directions. This is necessary in order to implement the periodic boundary conditions at the lateral domain boundaries. If we now type<br />
<br />
python ismip_hom_a.py ishom.a.config<br />
<br />
we will generate the necessary netCDF input file for the experiment. Using NCVIEW, you can look at the input data fields and make sure that they are the correct lateral dimensions. To run the model using these input data, we need to execute '''simple_glide'''. If that file is not in your path you can copy it into the test directory as follows <br />
<br />
cp ../../src/fortran/simple_glide ./<br />
<br />
and then execute it by typing<br />
<br />
./simple_glide<br />
<br />
You will be prompted for the relevant configuration file, which is of course '''ishom.a.config'''. Another way to do this would have been to use the linux/unix pipe command<br />
<br />
echo ishom.a.config | ./simple_glide - or - echo ishom.a.config | simple_glide <br />
(the latter if "simple_glide" is already in your path)<br />
<br />
Either way, after responding to the prompt, you ''should'' see some model output that looks something like this:<br />
<br />
Running Payne/Price higher-order dynamics solver<br />
<br />
iter # uvel resid vvel resid target resid<br />
<br />
2 1.00000 1.00000 0.100000E-04<br />
3 0.143388 0.215689E-02 0.100000E-04<br />
4 0.392197E-01 0.902012E-03 0.100000E-04<br />
5 0.655156E-01 0.745786E-03 0.100000E-04<br />
6 0.503367E-01 0.465037E-03 0.100000E-04<br />
7 0.344782E-02 0.329053E-03 0.100000E-04<br />
8 0.100065E-01 0.256138E-03 0.100000E-04<br />
9 0.163779E-01 0.190435E-03 0.100000E-04<br />
10 0.866196E-02 0.136968E-03 0.100000E-04<br />
11 0.549490E-02 0.104118E-03 0.100000E-04<br />
12 0.546429E-02 0.798739E-04 0.100000E-04<br />
<br />
At this point, you know that the higher-order dynamics routine is working on a solution. The 1st column tells you which "outer loop" iteration you are on (that is, iteration on the effective viscosity - the "inner loop" iteration is the conjugate gradient iterative solution to the matrix inversion, and that output is normally suppressed). The 2nd and 3rd columns display the ''x'' (uvel) and ''y'' (vvel) residuals (the normalized, maximum change in the velocity field between the current and previous iterations) and the last column shows the target residual, at which point the solution is considered to be converged. When the model stops iterating it will create an output netCDF file that we can evaluate. In this case, the file name is '''ishom.a.out.nc'''.<br />
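The residual in columns 2 and 3 is essentially the maximum change in the velocity field between successive viscosity iterations, normalized by the velocity magnitude; iteration stops once it drops below the target in column 4. A sketch of that convergence test (illustrative; the exact normalization used in the Fortran may differ):<br />

```python
import numpy as np

def velocity_residual(u_new, u_old):
    # max absolute change between iterations, normalized by the largest speed
    return np.max(np.abs(u_new - u_old)) / np.max(np.abs(u_new))

def converged(u_new, u_old, target=1.0e-5):
    # solution is accepted once the residual falls below the target
    return velocity_residual(u_new, u_old) < target
```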
<br />
<br />
*'''HAVING PROBLEMS?''' <br />
If you find that something weird is happening like your residuals are all over the place, the model just crashes, etc., trying replacing your configuration files with these [[default configuration files for ISMIP-HOM test]].<br />
<br />
== Plotting model output ==<br />
<br />
We can do a quick evaluation of '''ishom.a.out.nc''' using NCVIEW or some other netCDF file viewing utility. However, what we really want to do is compare our model solutions with the ISMIP-HOM benchmark solutions, so we can see how our model is doing relative to the other models that took part in the benchmarking exercise (see [http://www.the-cryosphere.net/2/95/2008/tc-2-95-2008.html Pattyn et al. (2008)] for a detailed discussion of the results). To do that, we'll use some of the handy python test-suite scripts mentioned above.<br />
<br />
First, we need to generate a text file of output data, from our model result, which will be compared with other model results from the benchmarking study. To do that, we type<br />
<br />
./formatData.py a ishom.a.out.nc glm1a160.txt<br />
<br />
The script '''formatData.py''' reads the netCDF output file and generates the text file '''glm1a160.txt''' (the "...a160..." denotes test A for a domain length of 160 km). We then type <br />
<br />
./createVisuals.py --exp=a --size=160 -or- ./createVisuals.py -ea -s160<br />
<br />
which uses some python modules to make a nice ''Matlab'' style figure (see below). That figure will have a ".png" extension and can be found in<br />
tests/ISMIP-HOM/ <br />
<br />
<br />
Now, do the same set of steps but decrease the domain wavelength to 80 and then 40 km. To make it easy for you, some grid parameters that work for this are<br />
<br />
[grid]<br />
upn = 11<br />
ewn = 51<br />
nsn = 51<br />
dew = 1600<br />
dns = 1600 <br />
<br />
for the 80 km test and <br />
<br />
[grid]<br />
upn = 11<br />
ewn = 51<br />
nsn = 51<br />
dew = 800<br />
dns = 800<br />
<br />
for the 40 km test. Output files for these tests will be generated in the same way as above, <br />
<br />
./formatData.py a ishom.a.out.nc glm1a080.txt - and - ./formatData.py a ishom.a.out.nc glm1a040.txt<br />
<br />
To plot all three outputs for test A on one plot, use<br />
<br />
./createVisuals.py --exp=a --size=40,80,160 - or - ./createVisuals.py -ea -s40,80,160<br />
<br />
which should give you something that looks similar to Figure 3.<br />
<br />
<br />
<br />
<br />
'''Figure 3:''' Higher-order model output for ISMIP-HOM test A with domain lengths of 160, 80, and 40 km. Solid black line is output from the current model and the colored, shaded regions represent the standard deviation of other models participating in the benchmarking study (see [[Validation and Verification#Higher Order Isothermal Flow|HERE]] for a more detailed description of these plots). <br />
<br />
[[Image:ISMIP-HOM-A-glm1.png]]<br />
<br />
<br />
<br />
Now go through the same set of steps for test case C (again, with wavelengths of 160, 80, and 40 km). You should get a figure that looks something like Figure 4.<br />
<br />
<br />
<br />
<br />
'''Figure 4:''' Higher-order model output for ISMIP-HOM test C with domain lengths of 160, 80, and 40 km. Solid black line is output from the current model and the colored, shaded regions represent the standard deviation of other models participating in the benchmarking study (see [[Validation and Verification#Higher Order Isothermal Flow|HERE]] for a more detailed description of these plots). <br />
<br />
[[Image:ISMIP-HOM-C-glm1.png]]<br />
<br />
== Additional Exercises ==<br />
<br />
* Try adjusting the horizontal and vertical grid spacing to see how it affects the results and/or model performance. For example, for the 80km tests, decrease the number of horizontal grid cells by a factor of two and increase the grid spacing by a factor of two,<br />
<br />
[grid]<br />
upn = 11<br />
ewn = 26<br />
nsn = 26<br />
dew = 3200<br />
dns = 3200 <br />
<br />
How much faster does the model converge on a solution? Does the output still fall within the standard deviation given by the benchmarks? What happens if the vertical resolution is doubled?<br />
<br />
* Compare higher-order and 0-order solutions for test A with the 80 km domain length. To do this, in '''ishom.a.config''', set the ''diagnostic_run'' flag to 0 instead of 1, rebuild the '''ishom.a.nc''' file using the python script (as done above), and re-run the model. When the model has finished running, examine '''ishom.a.out.nc''' using NCVIEW. Click on the variable ''uvelhom'' to make a colormap of the higher-order ''x'' component of velocity at time 1 (as shown in figure below). <br />
<br />
<br />
'''Figure 5:''' Using NCVIEW to plot output of "ishom.a.out.nc" to compare higher-order and SIA solutions. Note that the value of ''current time'' is 2001, not 2000.<br />
<br />
[[Image: ncviewHO.png]]<br />
<br />
<br />
Click somewhere on the image to get a 2d velocity profile (choose ''x0'' under ''Xaxis''). Next, pick the variable ''uvel'' (the velocity from the SIA model) and do the same thing. When comparing the two profiles, you should see something like Figure 6.<br />
<br />
<br />
'''Figure 6:''' Comparison of higher-order (top) and SIA (bottom) velocity profiles. For the same model domain, the HO velocities are ~25% slower due to the influence of horizontal-stress gradients, which the SIA model does not "feel" at all.<br />
<br />
[[Image: ishoma-80km-HOvsSIA.jpg]]<br />
<br />
<br />
* Do the same for ISMIP-HOM test A for the 40 km domain. You should notice that, as the magnitude of the higher-order velocities continues to decrease with decreasing domain length, those for the SIA model do not. Why is this?<br />
<br />
* Compare the values of the variable ''vvel'' (the across-flow velocity calculated from the SIA model) and ''vvelhom'' (the across-flow velocity calculated from the higher-order model) at time 200001. Can you explain the differences?</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Introduction_to_Glimmer_IIIntroduction to Glimmer II2009-08-12T19:09:20Z<p>Hoffman: /* Viewing the output */</p>
<hr />
<div>In this session, we're going to download Glimmer-CISM and learn how to compile and run some simple examples using the core ice-sheet model. <br />
<br />
==Preliminaries==<br />
<br />
Before we can compile Glimmer, we need to have an important prerequisite in place: the [http://www.unidata.ucar.edu/software/netcdf/ NetCDF] library. There's more information about what NetCDF is and how we use it [[Representing and manipulating data|here]], but for now, all you need to know is that NetCDF is a library used by Glimmer for reading and writing data files.<br />
<br />
Installing NetCDF is relatively straightforward. The source code can be obtained as a [http://en.wikipedia.org/wiki/Tar_(file_format) tarball] from the NetCDF website:<br />
<br />
:[http://www.unidata.ucar.edu/downloads/netcdf/ftp/netcdf-4.0.1.tar.gz http://www.unidata.ucar.edu/downloads/netcdf/ftp/netcdf-4.0.1.tar.gz]<br />
<br />
Save this somewhere convenient: I tend to have a directory called ''downloads'' where I can build software like NetCDF, and another directory called ''installs'', where I install the compiled binaries. Assuming you've downloaded the NetCDF tarball into your ''~/downloads'' directory, here's what you need to do next...<br />
<br />
First, change to the appropriate directory, unpack the tarball, and change to the NetCDF directory:<br />
<br />
cd ~/downloads<br />
tar xzvf netcdf-4.0.1.tar.gz<br />
cd netcdf-4.0.1<br />
<br />
NetCDF uses standard Linux build tools ([http://www.gnu.org/software/autoconf/ autoconf] and [http://www.gnu.org/software/automake/ automake]), so installation is relatively straightforward &mdash; the basic sequence is ''configure'' then ''make''. In addition, we need to specify where we want to install the binaries. We configure the build like this:<br />
<br />
./configure --prefix=$HOME/installs/netcdf-4.0.1<br />
<br />
Configure produces lots of diagnostic output. For example, it will tell you which Fortran compiler it finds, and confirm that the f90 interface will be built. To compile and install the binaries, we just need to run ''make'':<br />
<br />
make<br />
<br />
NetCDF comes with a thorough test suite &mdash; a commendable thing &mdash; so we should make use of this before continuing:<br />
<br />
make check<br />
<br />
Watch the output to check you get messages saying the tests have been passed. If all is well, we can install the binaries in our chosen location:<br />
<br />
make install<br />
<br />
We could actually accomplish the build and installation with this command on its own (''make'' knows not to try and install something which hasn't been built), but it's convenient to separate the steps, to aid troubleshooting.<br />
<br />
NetCDF comes with a couple of useful utilities, ''ncdump'' and ''ncgen'', now installed in the place we specified. However, we can't easily use them from the command-line at the moment, so we need to put the installation directory into our [http://en.wikipedia.org/wiki/Path_(variable) path]. We can do this at the shell prompt, using this command:<br />
<br />
export PATH=$PATH:$HOME/installs/netcdf-4.0.1/bin<br />
<br />
However, it's convenient to also add this line to your shell configuration file (''~/.bashrc''), so that the environment variable will be set whenever you open a new terminal window.<br />
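The same line can be appended to ''~/.bashrc'' from the command line. A small sketch, guarded so that re-running it doesn't accumulate duplicate entries:

```shell
# Append the NetCDF PATH line to ~/.bashrc, but only if it isn't
# already there (grep returns non-zero when the pattern is absent).
grep -q 'installs/netcdf-4.0.1/bin' "$HOME/.bashrc" 2>/dev/null || \
  echo 'export PATH=$PATH:$HOME/installs/netcdf-4.0.1/bin' >> "$HOME/.bashrc"
```

The single quotes keep ''$PATH'' and ''$HOME'' unexpanded in the file, so they are evaluated afresh each time a new shell starts.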
<br />
==Compiling Glimmer==<br />
<br />
The first step is to acquire the model code. In common with many open-source projects, the Glimmer-CISM team makes regular ''releases'' of code &mdash; numbered snapshots that represent a stable version of the model, in contrast to the development repository. The latest release can be downloaded from the Glimmer-CISM website<ref>Glimmer-CISM is hosted on the BerliOS repository in Germany</ref>:<br />
<br />
:[http://developer.berlios.de/projects/glimmer-cism http://developer.berlios.de/projects/glimmer-cism]<br />
<br />
In the centre of the page is a list of the latest file releases, comprising not only Glimmer but also packages of related code. Click on the ''download'' link for the latest Glimmer release; you'll be taken to a list of Glimmer packages, where you can click on the highlighted Glimmer release (version 1.0.18) and save it somewhere convenient. As before, we need to change to the appropriate directory, unpack the tarball and go into the Glimmer directory:<br />
<br />
cd ~/downloads<br />
tar xzvf glimmer-1.0.18.tar.gz<br />
cd glimmer-1.0.18<br />
<br />
Glimmer also uses autoconf and automake, and, as with NetCDF, we'll want to specify an install destination using the ''--prefix'' flag. However, we also need to tell the configure script where to find the NetCDF library, as well as the name of the compiler we want to use:<br />
<br />
./configure --prefix=$HOME/installs/glimmer-1.0.18 --with-netcdf=$HOME/installs/netcdf-4.0.1 FC=gfortran F77=gfortran<br />
<br />
Now, we can run ''make'' and ''make install'', as before. Glimmer takes a while to build, but when it's finished you can look in ''~/installs/glimmer-1.0.18/bin'' to find the executables, and ''~/installs/glimmer-1.0.18/lib'' to see the libraries which have been built.<br />
<br />
The final step is to add the location of the Glimmer executables to your path variable, which we do in the same way as before:<br />
<br />
export PATH=$PATH:$HOME/installs/glimmer-1.0.18/bin<br />
<br />
And of course we can add this to our ''.bashrc'' file as well.<br />
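Before moving on, it's worth a quick sanity check that everything landed where expected. A sketch, assuming the ''--prefix'' values used earlier:

```shell
# Confirm the Glimmer install directory contains simple_glide and that
# it is visible on the PATH; print hints if either check fails.
BIN="$HOME/installs/glimmer-1.0.18/bin"
if [ -x "$BIN/simple_glide" ]; then
  echo "found $BIN/simple_glide"
else
  echo "simple_glide not in $BIN - did 'make install' succeed?"
fi
command -v simple_glide >/dev/null \
  && echo "simple_glide is on the PATH" \
  || echo "PATH not updated in this shell - run the export line above"
```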
<br />
==EISMINT 1 examples==<br />
<br />
The EISMINT project was a pioneering model intercomparison exercise involving an international collection of ice sheet models. The project defined a number of benchmark experiments as the basis for the intercomparison, and although other verification and comparison tests have been devised since then, the EISMINT experiments are still relevant and instructive. All EISMINT scenarios are designed to be used with shallow ice models: the first set of scenarios (EISMINT 1) concerns isothermal models, while the second set (EISMINT 2) introduces thermomechanical coupling and basal sliding.<br />
<br />
We'll take a look at the relatively simple EISMINT 1 tests to start with, and move on to the more interesting EISMINT 2 examples after that. Usefully, a validation of Glimmer against these test scenarios has been published in Rutt et al. (2009)<ref name="Rutt2009">Rutt, I. C., M. Hagdorn, N. R. J. Hulton, and A. J. Payne. The Glimmer community ice sheet model. ''Journal of Geophysical Research'', '''114''', F02004. {{doi|10.1029/2008JF001015}} [[Media:Ruttetal2009.pdf|pdf]]</ref>, so you can check your results against those in the paper.<br />
<br />
The EISMINT 1 and EISMINT 2 experiment scenarios are both available as part of a package called ''glimmer-tests''. The latest version is numbered 1.2, and can be downloaded from the Glimmer website (see above). Unpack the tarball and ''cd'' into the resulting directory. If you list the contents of the top-level directory, you'll see directories for various test suites. It's possible to use ''configure'' and ''make'' to run each suite of test cases, but it's more instructive to look at them individually. So, let's change to the EISMINT 1 directory and see what we've got:<br />
<br />
cd EISMINT-1<br />
ls<br />
<br />
There are configuration files for six experiments, as described in the EISMINT 1 paper (Huybrechts et al., 1996)<ref name="EISMINT1">Huybrechts P, T Payne, and the EISMINT Intercomparison Group (1996) The EISMINT benchmarks for testing ice-sheet models. ''Annals of Glaciology'', '''23''' 1-14[http://homepages.vub.ac.be/~phuybrec/pdf/Huyb.Ann.Glac.23.pdf]</ref>:<br />
<br />
e1-fm.1.config e1-fm.3.config e1-mm.2.config<br />
e1-fm.2.config e1-mm.1.config e1-mm.3.config<br />
<br />
We run any of these files using the ''simple_glide'' executable which gets built as part of Glimmer. Assuming the Glimmer executables are in your ''PATH'', you can just run this at the command line:<br />
<br />
simple_glide<br />
<br />
You'll be prompted to enter the name of the configuration file, so choose one of those listed. In these filenames, '''fm''' is fixed margin, '''mm''' is moving margin, and the number refers to the type of forcing: '''1''' is steady, '''2''' and '''3''' are time-varying. A full description is given in Huybrechts et al. (1996).<ref name="EISMINT1"/><br />
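Because ''simple_glide'' reads the filename from standard input, the prompt can also be answered non-interactively, which is handy when scripting runs. A hedged sketch (assuming ''simple_glide'' is on your PATH):

```shell
# Pipe the configuration filename into simple_glide instead of typing
# it at the prompt; print a hint if the executable isn't found.
if command -v simple_glide >/dev/null; then
  echo e1-fm.1.config | simple_glide
else
  echo "simple_glide not found - check your PATH"
fi
```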
<br />
===Viewing the output===<br />
<br />
The output from these scenarios is in the form of NetCDF files (see [[Representing and manipulating data]] for a full description). The filenames are logical: running '''e1-fm.1.config''' produces an output file called '''e1-fm.1.nc''', as well as a log file ('''e1-fm.1.config.log'''). You can use '''tail''' to view the most recent part of the log file, to see how the model run is progressing. Once the run has finished, you can look at the NetCDF file it produced. For a quick look at the data, the '''ncview''' tool is an excellent choice. You can bring up the data on screen with this command:<br />
<br />
ncview e1-fm.1.nc &<br />
<br />
The interface is simple: details of how to use ncview are given [[Using ncview|here]]. For more complex visualisation, different tools are necessary, such as '''python''' with '''matplotlib'''. Again, details of how to use this are given in [[Representing and manipulating data]].<br />
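When a graphical tool isn't to hand, the ''ncdump'' utility installed alongside NetCDF gives a quick text view of the same file; its ''-h'' flag prints only the header (dimensions, variables, and attributes) rather than dumping the data arrays:

```shell
# Show the structure of the output file without printing the data;
# guarded in case the experiment hasn't been run yet.
if [ -f e1-fm.1.nc ]; then
  ncdump -h e1-fm.1.nc
else
  echo "e1-fm.1.nc not found - run the experiment first"
fi
```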
<br />
* Example visualization of EISMINT1-fm showing cross-sectional growth of ice sheet over time by group 6: [[Image:Eismint1 fm geom.gif]]<br />
<br />
===Finding out more===<br />
<br />
You can view the most recent version of the Glimmer documentation [http://forge.nesc.ac.uk/docman/view.php/26/369/glimmer.pdf here]. These documents are ''mostly'' up to date, but they're not perfect.<br />
<br />
===Exercises===<br />
<br />
# Compare your results with those from Huybrechts et al. (1996)<ref name="EISMINT1"/> and Rutt et al. (2009)<ref name="Rutt2009"/>. Can you suggest causes for any differences you observe?<br />
# Look at the configuration file, and make sure you understand how it defines the experiment you're running. Note that the forcing is ''not'' defined in the config file: this is specified in the '''simple_glide''' code.<br />
# Adjust the configuration file to change the resolution of one of the experiments. How well does the model converge as the resolution increases? How does the CFL criterion affect the size of the timestep you can use? How long does it take before you get bored with waiting for the model run to complete?<br />
<br />
==EISMINT 2 examples==<br />
<br />
The EISMINT 2 scenarios follow on directly from EISMINT 1, and are described in Payne et al. (2000)<ref name="EISMINT2">Payne AJ, P Huybrechts, A Abe-Ouchi, R Calov, JL Fastook, R Greve, SJ Marshall, et al. (2000) Results from the EISMINT model intercomparison: the effects of thermomechanical coupling. ''Journal of Glaciology'', '''46'''(153), 227-238[http://homepages.vub.ac.be/~phuybrec/pdf/Payne.2000.pdf]</ref>. These experiments generate idealised hypothetical ice sheets similar in scale to those of the first EISMINT experiments, but at double the resolution (61&times;61 cells at 25 km spacing). Additionally, the models are vertically resolved so that they can handle temperature calculations and thermomechanical coupling.<br />
<br />
There are eleven EISMINT 2 idealised scenarios that mainly consider the thermomechanical behaviour of the models, and also introduce a form of sliding. To run the scenarios, just ''cd'' back up to the main ''glimmer-tests'' directory, and then to ''EISMINT-2''. Again, these experiments are run using the ''simple_glide'' executable you built earlier. The available configuration files correspond to the experiments described in Payne et al. (2000)<ref name="EISMINT2"/>, so read this carefully before you begin. Some of the experiments require the output from previous runs to be available before starting.<br />
<br />
Each of these experiments takes a relatively long time to run, so you will not be able to explore every one. However, each group should have access to enough computing resources to run a few in the time available. The instructions below tell you which ones to run.<br />
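Since several runs are needed and each takes a while, it can be convenient to queue them up in a shell loop, again answering ''simple_glide'''s filename prompt through a pipe. A sketch, assuming you are in the ''EISMINT-2'' directory and ''simple_glide'' is on your PATH (mind the note above about experiments that depend on earlier output):

```shell
# Run every configuration file in the current directory back to back.
for cfg in *.config; do
  [ -f "$cfg" ] || continue       # skip if the glob matched nothing
  echo "running $cfg"
  echo "$cfg" | simple_glide
done
```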
<br />
===Questions and exercises===<br />
<br />
# How does the output from Scenario A compare qualitatively with the output from the EISMINT 1 moving margin scenario we looked at earlier?<br />
# How sensitive is the solution to the relationship between the timesteps of the mechanics and thermodynamics?<br />
# Several of the scenarios show interesting behaviour, due to thermomechanical feedbacks: these were the subject of a separate paper <ref name="PayneBaldwin2000">Payne AJ, and DJ Baldwin (2000) Analysis of ice-flow instabilities identified in the EISMINT intercomparison exercise. ''Ann. Glaciol.'', '''30''', 204-210 {{doi|10.3189/172756400781820534}}<br />
</ref>. Read the paper, and try to reproduce some of their results. Discuss in your groups what you're most interested in first!<br />
<br />
==Footnotes and References==<br />
<references/></div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/File:Eismint1_fm_geom.gifFile:Eismint1 fm geom.gif2009-08-12T19:06:25Z<p>Hoffman: Example visualization of EISMINT1-fm experiment using GLIMMER (by Group 6)</p>
<hr />
<div>Example visualization of EISMINT1-fm experiment using GLIMMER (by Group 6)</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Eliot_Glacier_field_tripEliot Glacier field trip2009-08-08T16:04:44Z<p>Hoffman: </p>
<hr />
<div>[http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=eliot+glacier+mount+hood+oregon&sll=37.0625,-95.677068&sspn=52.68309,55.546875&ie=UTF8&t=h&z=13&iwloc=A Eliot Glacier] is a small northeast facing glacier on Mt. Hood with a debris-covered ablation zone. Andrew Fountain's research group at Portland State has been working on the glacier for many years. [http://web.pdx.edu/~basagic/ Hassan Basagic] will lead us on a walk to the glacier via Cooper Spur with assistance from Matt Hoffman and Adam Campbell on Monday the 10th. The following are some resources you may find helpful in preparing for the trip:<br />
<br />
*Meet at 7:00 am for breakfast in Cramer Hall. <br />
*Bring a jacket, water, and sensible (thick soled) shoes.<br />
*Box lunches are provided, we will stop for dinner on the return trip (bring cash). <br />
<br />
* Spatial and morphological change on Eliot Glacier, Mount Hood, Oregon USA, Keith Jackson and Andrew Fountain, 2007, ''Annals of Glaciology'' [http://www.glaciers.pdx.edu/fountain/MyPapers/Jackson&Fountain2007_EliotGlacier.pdf pdf]<br />
<br />
* [http://glaciers.research.pdx.edu/assets/index.php?search_ass=eliot&search+assets=submit Photographs] in the PDX Glaciers image database (357 of them!).<br />
<br />
* Eliot Glacier change [http://geopulse.org/kjack/eliotphotos.php photos].<br />
<br />
* Historical glacier and climate fluctuations at Mount Hood, Oregon, Karl Lillquist and Karen Walker, 2006, ''AAAR'' [[Media:Lillquist_Walker_2006_Hood_Glacier_Fluctuations.pdf|pdf]]<br />
<br />
* Summary of Eliot Glacier research (Matt's "talk") [[Media:Eliot_comppics.pdf|pdf]]<br />
* Field trip handout [[Media:Eliot_field_trip.pdf|pdf]]</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/File:Eliot_field_trip.pdfFile:Eliot field trip.pdf2009-08-08T16:04:07Z<p>Hoffman: Field trip handout</p>
<hr />
<div>Field trip handout</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/File:Eliot_comppics.pdfFile:Eliot comppics.pdf2009-08-08T16:02:16Z<p>Hoffman: Summary of research on Eliot Glacier for field trip</p>
<hr />
<div>Summary of research on Eliot Glacier for field trip</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/File:Lillquist_Walker_2006_Hood_Glacier_Fluctuations.pdfFile:Lillquist Walker 2006 Hood Glacier Fluctuations.pdf2009-08-08T05:45:16Z<p>Hoffman: May be of interest to people attending the Summer School field trip.</p>
<hr />
<div>May be of interest to people attending the Summer School field trip.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/COMSOL_activitiesCOMSOL activities2009-08-07T23:18:53Z<p>Hoffman: /* Dynamic */</p>
<hr />
<div>==Overview==<br />
Now, let's see if COMSOL can be used to solve problems of glaciological relevance. We'll look at shallow ice approximation and shallow shelf approximation flows.<br />
<br />
==Isothermal Shallow Ice Approximation==<br />
<br />
Begin with the often used shallow ice form for ice thickness evolution, which casts evolution as a non-linear diffusion problem<br />
<br />
:<math>\frac{\partial H}{\partial t} = \nabla \cdot \left( D \nabla H \right) + M</math><br />
where<br />
:<math>D = \frac{2A(\rho g)^n}{n+2} H^{n+2} \left[\nabla H \cdot \nabla H \right]^{(n-1)/2}</math><br />
<br />
with boundary condition <math>H=0</math> on the edge of the computational domain.<br />
<br />
===Comsol Modeling===<br />
We will use the '''PDE, General Form''' transient mode to solve this equation. For convenience, make the dependent variable <math>H</math>.<br />
<br />
====Geometry====<br />
You should not find it difficult to create a unit square. Once it's made, you can double-click it to change its size and perform other transformations. Read below to find the appropriate domain.<br />
<br />
====Field equations====<br />
This equation mode solves equations of the form<br />
:<math>e_a\frac{\partial^2 H}{\partial t^2} + d_a \frac{\partial H}{\partial t} + \nabla \cdot \Gamma = F</math><br />
<br />
This is just what we want if we recognize that in our system <math>e_a</math>=0, <math>d_a</math>=1, <math>F</math>=<math>M</math>, and<br />
:<math>\Gamma_x = -\frac{2A(\rho g)^n}{n+2} H^{n+2} \left[\nabla H \cdot \nabla H \right]^{(n-1)/2} \frac{\partial H}{\partial x} </math><br />
<br />
:<math>\Gamma_y = -\frac{2A(\rho g)^n}{n+2} H^{n+2} \left[\nabla H \cdot \nabla H \right]^{(n-1)/2} \frac{\partial H}{\partial y} </math><br />
<br />
Now the problem has been reduced to one of typing. It will make the COMSOL model easier to read if you create a scalar expression for <math>D</math> . Then your <math>\Gamma_x = -D \frac{\partial H}{\partial x}</math> and <math>\Gamma_y =- D\frac{\partial H}{\partial y}</math> are very clear.<br />
<br />
====Boundary conditions====<br />
This type of problem requires a Dirichlet boundary condition. Set <math>H</math> = 0 on all four sides.<br />
<br />
====Other====<br />
You'll also need to know how to refer to derivatives in COMSOL: '''Hx''', '''Hy''', and '''Ht''' denote <math>\frac{\partial H}{\partial x}</math>, <math>\frac{\partial H}{\partial y}</math>, and <math>\frac{\partial H}{\partial t}</math> respectively.<br />
<br />
===Exercises===<br />
#Complete the model, and do the isothermal ''fixed margin'' experiment of Huybrechts et al. (1996)<ref name="Huybrechts">Huybrechts et al. The EISMINT Benchmarks for Testing Ice--Sheet Models. Ann. Glaciol. (1996) vol. 23 pp. 1-12 [http://homepages.vub.ac.be/~phuybrec/pdf/Huyb.Ann.Glac.23.pdf pdf]</ref>. You'll find all the values of the constants there as well. Verify that your model produces results consistent with those reported in the paper.<br />
#Now alter your model (the accumulation field) to do the isothermal ''moving margin'' experiment, and again verify that it's at least just as wrong as the other models. You're going to have to come up with something to deal with the negative values of thickness that you'll get...<br />
<br />
==Shallow shelf approximation==<br />
===Field equations===<br />
Now, consider the equations describing a flow that is vertically integrated. The equations are<br />
:<math>\frac{\partial}{\partial x}\left ( 2 \eta H <br />
\left(2\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right)\right)<br />
+\frac{\partial}{\partial y}\left(\eta H\left(<br />
\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\right)<br />
=\rho_i gH \frac{\partial s}{\partial x}<br />
</math><br />
<br />
:<math><br />
\frac{\partial}{\partial y}\left ( 2 \eta H <br />
\left(2\frac{\partial v}{\partial y}+\frac{\partial u}{\partial x}\right)\right)<br />
+\frac{\partial}{\partial x}\left(\eta H\left(<br />
\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\right)<br />
=\rho_i gH \frac{\partial s}{\partial y}<br />
</math><br />
<br />
<math> \eta</math> is the non-linear, vertically averaged viscosity, <math>H</math> is the ice shelf thickness and <math>s</math> is the surface elevation. <math> \eta</math> will need to be entered as a '''scalar expression''', and is written<br />
<br />
:<math>\eta = \frac{B}{2}\left[ \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 + \frac{1}{4} \left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)^2 + \frac{\partial u}{\partial x}\frac{\partial v}{\partial y}\right]^{(1-n)/(2n)}</math><br />
<br />
===Boundary conditions===<br />
There are two flavors of boundary conditions applied in the typical ice sheet model. <br />
====Kinematic====<br />
First, the Dirichlet, or ''kinematic'', boundary condition specifies the velocity<br />
:<math> \mathbf{u} = \mathbf{u_b},~\forall \partial \Omega_k \in \partial \Omega.</math><br />
This boundary condition is applied where ice moves across the grounding line. To model an ice shelf, one determines (or estimates) that velocity and specifies it.<br />
====Dynamic====<br />
This is the Neumann, or ''dynamic'' boundary condition that is applied along the ice front,<br />
<br />
:<math>- 2 \eta H \left(2\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right)\mathbf{n_x}<br />
<br />
-\eta H\left(<br />
\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\mathbf{n_y}<br />
= <br />
-\frac{1}{2}\rho_i g H^2\left(1-\frac{\rho_i}{\rho_w}\right) \mathbf{n_x} </math><br />
<br />
and<br />
<br />
<math>- \eta H \left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\right)\mathbf{n_x}<br />
<br />
-2\eta H\left(<br />
\frac{\partial u}{\partial x}+2\frac{\partial v}{\partial y}\right)\mathbf{n_y}<br />
= <br />
-\frac{1}{2}\rho_i g H^2\left(1-\frac{\rho_i}{\rho_w}\right) \mathbf{n_y} </math><br />
<br />
<br />
:<math> ~\forall \partial \Omega_d \in \partial \Omega.</math><br />
<br />
Note that you will need to know how to refer to the normal vector components in COMSOL: these are '''nx''' and '''ny'''.<br />
<br />
===Exercises===<br />
#As a first exercise in solving these equations, try the experiments defined for the EISMINT ice shelf intercomparison, which were never formally published [http://homepages.vub.ac.be/~phuybrec/eismint/iceshelf.html]. Get '''self-descr.pdf''', the first hyperlink on the page. Let's do experiments 3-4 on page 6 of the document (note that we will see and work with the solution to these experiments again when we [[Adding a module to Glimmer I|do some exercises with the higher-order dynamics routines in Glimmer/CISM]]).<br />
# The above is neat, but ultimately not that useful, because the geometry isn't realistic. Try out [[Media:Ross.tar.gz|this]] model, which has the geometry and boundary conditions for the Ross Ice Shelf but otherwise solves the same equations as the previous exercise. This model is based on MacAyeal et al. (1996)<ref name="EISMINT_ROSS">MacAyeal, D.R., V. Rommelaere, Ph. Huybrechts, C.L. Hulbe, J. Determann, and C. Ritz (1996) An ice-shelf model test based on the Ross ice shelf. ''Annals of Glaciology'' '''23''', 46-51 [http://homepages.vub.ac.be/~phuybrec/pdf/MacAyeal.Ann.Glac.23.pdf pdf]</ref>. Do you see the utility of this? Is the solution dependent on the mesh? Can you do anything with the solver to improve the time required for a solution?<br />
<br />
==Full Stokes==<br />
It is possible (even easy!) to represent the full Stokes equations in COMSOL by starting from the fluid mechanics application mode. [[Media:arolla_tripped.zip|Here]] is a geometry for the Arolla glacier. See if you can work out the eta_nonlinear field, the appropriate boundary conditions, and the appropriate subdomain settings for full Stokes flow. A useful reference on higher-order modelling is<br />
Pattyn, F. (2003)<ref Name="pattyn_03">''J. Geophys. Res.'', '''108''', B8, 2382, {{doi|10.1029/2002JB002329}}</ref>.<br />
<br />
==References==<br />
<references/></div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Student_Presentation_DevelopmentStudent Presentation Development2009-08-07T21:38:19Z<p>Hoffman: </p>
<hr />
<div>Students: Please email [[User:Mankoff|Ken Mankoff]] the Lat/Long of your institution and study area by the end of the day. (You can click 'email this user' in the lower left toolbox from his user page.)<br />
<br />
==Roundtable 1: Study regions==<br />
* Antarctica. 6 students: [[User:Hoffman|Matt]], [[User:Adamc|Adam]], [[User:mankoff|Ken Mankoff]]...<br />
* Greenland. 6 students: [[User:Kpoinar|Kristin]], [[User:meierbtw|Toby]]...<br />
* Mountain glaciers. 4 students: <br />
* Other/Global. 2 students:<br />
Possible discussion questions:<br />
* What questions are the climate change community pressuring us to answer?<br />
* What do we know now that would have been a big surprise 10 years ago?<br />
* How important is field data to your research? If you could collect any field data/observations to progress your work, what would it/they be? <br />
* <br />
*<br />
...<br />
<br />
==Roundtable 2: Background==<br />
* Geology. 5 students: [[User:meierbtw|Toby]],...<br />
* Physics. 5 students: [[User:Adamc|Adam]], [[User:Kpoinar|Kristin]]...<br />
* Math(s). 4 students:<br />
* Engineering/CS/Other. 4 students: [[User:Hoffman|Matt]], [[User:mankoff|Ken Mankoff]], ...<br />
Possible discussion questions:<br />
*<br />
* Given your background in XX, what have you done/would you do outside of your area of expertise to make yourself a better cryosphere scientist?<br />
* Why have we come to glaciology, over the other careers / topics available to people with our background?<br />
* Glaciology is 'hot' now... how hot will it stay? What are possible exit ramps once it cools down? Will data gaps between the current and next generation of satellite sensors affect what work can be done?<br />
* What should we do immediately post-PhD? travel, stay at home institution, ship off to postdoc abroad, ...<br />
* How can I continue to deal with people who say, "Oh, you study glaciers, huh? Better hurry up, hohnk hohnk"<br />
...<br />
<br />
==Other Ways to Sort Ourselves==<br />
Seneca - notes from yesterday?<br />
<br />
<br />
==Lists of valuable skills/knowledge learned, connections made, etc.?==</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Summer_Modeling_SchoolSummer Modeling School2009-08-07T20:54:55Z<p>Hoffman: /* Student Participants */</p>
<hr />
<div>[[Image:Portland.jpg|thumb|right|400 px|The summer ice sheet modeling school will be held in Portland, Oregon, August 3-14, 2009]]<br />
<br />
==Overview==<br />
The Summer Modeling School is an intensive course that will bring current and future ice-sheet scientists together to develop better models for the projection of future sea-level rise (SLR). The IPCC Fourth Assessment Report [http://www.ipcc.ch/ipccreports/ar4-syr.htm] acknowledged that current models do not adequately treat the dynamic response of ice sheets to climate change, and that this is the largest uncertainty in assessing potential rapid sea-level rise. Recognizing this, an ice-sheet modelling workshop was held during the July 2008 SCAR/IASC [https://www.comnap.aq/content/events/osc2008] meeting in St. Petersburg, Russia. This meeting developed a community strategy on how best to (i) improve the physical understanding of ice-sheet processes responsible for rapid change; (ii) incorporate improved physical understanding into numerical models; (iii) assimilate appropriate data into the models for calibration and validation; and (iv) develop prognostic whole ice-sheet models that better incorporate non-linear ice-sheet response to environmental forcing (such as change in surface mass balance, loss of buttressing from floating ice shelves and ice tongues, and rising sea level). <br />
<br />
The two-week Summer School is a first step towards implementing this strategy. It will bring scientists from differing backgrounds together and allow more extensive and in-depth interactions between the relevant scientific research communities. A series of general background lectures as well as discussions of more specialized and advanced topics during this Summer School will provide the foundation for cross-disciplinary research, particularly for early career scientists. We anticipate publication of lecture notes both in hard copy and on a dedicated home page, to provide the glaciological community with an up-to-date overview of the science and observational techniques that will serve to guide further research efforts. Direct beneficiaries will be young researchers; indirect beneficiaries will be coastal zone communities who will gain improved sea level change forecasts to underpin their plans for sustainable development.<br />
<br />
===Venue===<br />
The modeling school will be held on the campus of [[Wikipedia:Portland State University|Portland State University]] in [[Wikipedia:Portland, Oregon|Portland, Oregon]] August 3-14, 2009.<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=Portland+Airport&daddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&geocode=&hl=en&mra=ls&dirflg=r&date=07%2F28%2F09&time=8:59am&ttype=dep&noexp=0&noal=0&sort=&tline=&sll=45.54878,-122.629155&sspn=0.092445,0.144367&ie=UTF8&ll=45.548679,-122.619438&spn=0.092445,0.144367&z=13&start=0 Map] from airport to [http://cegs.pdx.edu/stay/upl/ University Place Hotel] using public transport (note that the directions in your travel letter are better than the Google generated instructions here).<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&daddr=1721+SW+Broadway,+Portland,+OR+97201+(Cramer+Hall)&hl=en&geocode=FdVhtgIdZwqw-CHO0mMQPCwi0Q%3BFRN3tgIdvP2v-CHxCBg32xEzXA&mra=ls&dirflg=w&sll=45.51029,-122.681675&sspn=0.005782,0.009023&ie=UTF8&ll=45.510091,-122.68232&spn=0.005782,0.009023&z=17 Map] from [http://cegs.pdx.edu/stay/upl/ University Place Hotel] to [http://www.pdx.edu/campus-map Cramer Hall].<br />
<br />
=== Student Participants ===<br />
<br />
*[[Student Bios]]<br />
*[[Student Presentation Development]]<br />
*[[Groups]] example of [[connections in groups]]<br />
*[[Terminology]]<br />
*[[Questions]]<br />
<br />
===Lectures and Planned Activities===<br />
<br />
For information about editing this page, see [[Wikipedia:How to edit]].<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
|-valign="top" style="background:RoyalBlue"<br />
!width="20%"|Dates<br />
!width="25%"|Lecture Topics<br />
!width="15%"|Lecturers<br />
!width="25%"|Laboratory Topics<br />
!width="15%"|Laboratory Instructors <br />
|-valign="top" style="background:AliceBlue"<br />
| [[4-5 August]]<br />
| Introduction to and theoretical basis for ice sheet modeling. <br />
| Kees van der Veen, [[Nina Kirchner]] <br />
| [[Finite differencing]] and [[Pragmatic Programming|pragmatic programming]] using [http://en.wikipedia.org/wiki/Fortran Fortran 95]...<br />
computing divergence and gradient...<br />
from conservation equation to matrix algebra...<br />
rheology and that which makes ice ice...<br />
simple, ideal models...<br />
that which makes ice-sheet modeling hard...<br />
| Gethin Williams, [[Ian Rutt]], [[Jesse Johnson]]<br />
|-valign="top" style="background:PowderBlue"<br />
| 6 August <br />
| [[Basal Conditions]], [[Data sets for ice sheet modeling]]<br />
| Alan Rempel, Slawek Tulaczyk and Ken Jezek<br />
| [[COMSOL Multiphysics]]<br />
| Olga Sergienko and Jesse Johnson<br />
|-valign="top" style="background:AliceBlue"<br />
| 7 August<br />
| The world of [[ice shelves]] and 'distributed stress-field solutions'. [[Modelling mountain glaciers]].<br />
| Todd Dupont, Olga Sergienko, and Brian Anderson<br />
| Linear algebra of ice-sheet modeling, relaxation methods, finite-element methodology, solution of the Laplace equation in an arbitrary domain, creation of an ice-shelf flow-field model (snapshot of the flow field), models of the Ross Ice Shelf<br />
| Olga Sergienko and Todd Dupont<br />
<br />
|-valign="top" style="background:PowderBlue"<br />
| 8 August<br />
| [[Student Presentation]]<br />
| Modeling School Students<br />
| Open work day with breakfast at 8 am and the student presentation at 9 am<br />
| Go to the farmer's market<br />
|-valign="top" style="background:AliceBlue"<br />
| 9 August<br />
| Free day; possible PDX tour<br />
|<br />
|<br />
|<br />
|-valign="top" style="background:PowderBlue"<br />
| 10 August<br />
| Excursion to Mt. Hood and [[Eliot Glacier field trip]] <br> Meet at 7:00 am for breakfast in Cramer Hall. Bring a jacket, water, and sensible (supportive) shoes.<br> Box lunches are provided and we will stop for dinner on the return trip (bring cash). <br />
| Guided by [http://web.pdx.edu/~basagic/ Hassan Basagic]<br />
|<br />
|<br />
|-valign="top" style="background:AliceBlue"<br />
| 11 August<br />
| [[Quantifying model uncertainty]]<br />
| Charles Jackson and Patrick Heimbach<br />
| Uncertainty lab; [[Dynamic response to the enhanced basal flow in the Greenland ice sheet]], presented by Weili Wang<br />
| Charles Jackson, Patrick Heimbach, and Weili Wang<br />
|-valign="top" style="background:PowderBlue"<br />
| [[12-13 August]]<br />
| Introduction to Glimmer-CISM ([[Introduction to Glimmer I|Part I]], [[Introduction to Glimmer II|Part II]] and [[Glimmer-CISM|Part III]]); [[Higher order velocity schemes|Higher-order models]]<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], [[Stephen Price]], Bill Lipscomb, [[Jesse Johnson]]<br />
| Software development and [[Adding a module to Glimmer I|creating a module for Glimmer]], [[representing and manipulating data]]. [[Grounding line treatments]], presented by Sophie Nowicki. [[Verifying ice sheet models]], presented by Aitbala Sargent<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], Gethin Williams, Stephen Price, Bill Lipscomb, [[Jesse Johnson]]<br />
|-valign="top" style="background:AliceBlue"<br />
| 14 August<br />
| [[Coupling the Cryosphere to other Earth systems]]<br />
| Bill Lipscomb and [[Ian Rutt]]<br />
| Community Climate System Model (CCSM) Lab<br />
| Bill Lipscomb, [[Jesse Johnson]], Stephen Price and [[Ian Rutt]]<br />
|}<br />
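The 4-5 August lab progresses from computing gradients and divergence by finite differences toward matrix algebra. As a rough illustration only (in Python rather than the Fortran 95 used in the lab, with an arbitrary quadratic test field; none of this is course material), a centred-difference gradient on a uniform 1-D grid looks like:

```python
import numpy as np

# Illustrative sketch only -- not course material. The lab language is
# Fortran 95; Python is used here for brevity, and the quadratic test
# field below is an arbitrary choice.

def gradient_1d(f, dx):
    """Centred-difference gradient of a 1-D field, one-sided at the ends."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - f[:-2]) / (2.0 * dx)  # interior: (f[i+1] - f[i-1]) / (2 dx)
    g[0] = (f[1] - f[0]) / dx                # forward difference at the left edge
    g[-1] = (f[-1] - f[-2]) / dx             # backward difference at the right edge
    return g

x = np.linspace(0.0, 1.0, 11)        # uniform grid, dx = 0.1
f = x**2                             # analytic gradient is 2x
print(gradient_1d(f, x[1] - x[0]))   # interior values equal 2x exactly for a quadratic
```

The same stencil applied to a flux gives a discrete divergence, and writing the stencil as a banded matrix acting on the vector of grid values is the step "from conservation equation to matrix algebra" listed above.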
<br />
====[[Typical Daily Schedule]]====<br />
<br />
===Resources===<br />
<br />
Additional student/instructor resources for the Summer School:<br />
* List of [[Computing Resources and Room Description]]<br />
* Details of [[Eliot Glacier field trip]]<br />
* An outline [[Reading List]]<br />
* [[Notes]] from daily lectures<br />
* Portland [[dining and brewpub suggestions]]<br />
* [[PDX afterhours]]<br />
* [[ideas for Portland extracurricular activities]]<br />
<br />
===Application and Registration===<br />
''The window for receipt of student applications has closed. Thank you for your interest in the program. ''<br />
<br />
The registration fee for the course is US $350.<br />
<br />
===Funding Agencies===<br />
<br />
<br />
{|<br />
|-valign="top"<br />
|[[Image:iscu.jpg|300 px]]<br />
|[[Image:scar.jpg|150 px]]<br />
|-valign="top"<br />
|[[Image:wcrp.jpg|200 px]]<br />
|[[Image:nsf_logo.gif|300px]]<br />
|-valign="top"<br />
|[[Image:cresis.jpg|100 px]]<br />
|[[Image:cires.jpg|350 px]]<br />
|-valign="top"<br />
|[[Image:IASC_logo_07_RGB.jpg|100 px]]<br />
|}<br />
<br />
===Organizing Committee===<br />
Christina Hulbe, Jesse Johnson, Cornelis van der Veen</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Student_BiosStudent Bios2009-08-07T20:53:51Z<p>Hoffman: Student Presentation moved to Student Bios: so that we can use 'Student Presentation' for developing our presentation</p>
<hr />
<div>*[[User:Mankoff|Ken Mankoff]] will begin his PhD this fall at UCSC and as such does not yet have a well-defined research topic. He will likely work on projects involving subglacial lakes and grounding lines. He is currently analyzing data from the terminal face of Pine Island Glacier, and oceanographic and sea-ice data from the larger Amundsen Sea area.<br />
<br />
*[http://www.victoria.ac.nz/antarctic/people/jeremy-fyke/index.aspx Jeremy Fyke] is working on a PhD with the Antarctic Research Centre in Wellington, New Zealand. My project involves coupling an ice sheet model to an Earth System model 'of intermediate complexity' (the University of Victoria Earth System Climate Model) in order to have a go at simulating coupled climate/ice sheet interactions over millennial time scales.<br />
<br />
*[http://flo-colleoni.ifrance.com/ Florence Colleoni] will defend her Ph.D. in paleoclimate modeling at [http://www-lgge.obs.ujf-grenoble.fr/ LGGE] (Grenoble, France) in early September. She will then start a postdoctoral position at the [http://www.cmcc.it/welcome-at-cmccs-web-site?set_language=en Centro Euro-Mediterraneo per i Cambiamenti Climatici] in Bologna (Italy) to couple the Glimmer CISM to an Earth System model composed of the NCAR AGCM and the NEMO OGCM. The final aim is to carry out transient paleoclimate simulations to understand and reproduce interglacial/glacial transition mechanisms. This will be done in collaboration with NCAR. My entire Ph.D. thesis is available [ftp://ftp-lgge.obs.ujf-grenoble.fr/pub/depot/florence/ here].<br />
<br />
* [http://homepages.ucalgary.ca/~adhikars/ Surendra Adhikari] is currently in the second year of his PhD at the University of Calgary, Canada. He is trying to develop a 3-D higher-order numerical ice-flow model applied to valley glaciers and alpine icefields. This higher-order model will then be coupled to the traditional SIA model to simulate large ice sheets such as the Greenland Ice Sheet.<br />
<br />
*[[User:Kpoinar|Kristin Poinar]] is a second-year Ph.D. student at the University of Washington who is working on two "learning curve" ice sheet modelling projects. One is writing a thermal model to apply to the Greenland ice sheet, where surface lake drainages make basal thermodynamics interesting; the second is your standard model-perturbations-at-the-terminus study, on Petermann Glacier in NW Greenland.<br />
<br />
*[[User:adamc|Adam Campbell]] is entering a PhD program at the University of Washington in Fall 2009. I have just completed a master's degree in geology at Portland State University, where I examined the response of Crane Glacier to the disintegration of the Larsen B Ice Shelf using a steady-state 2-D flow model with a basal sliding law. I am presently investigating structures on the Kamb Ice Shelf to determine if they developed by a pinch-and-swell mechanism. I am also uncomfortable writing about myself in the third person.<br />
<br />
*[[User:papplega|Patrick Applegate]]: I am a glacial geomorphologist and geochronologist with a taste for modeling. My Ph.D. work involves the use of geomorphic process modeling to parse out the real meaning of cosmogenic exposure dates from moraines. I am asymptotically approaching the completion of my Ph.D. at Penn State. I'm attending the Summer School because I anticipate taking a new direction for my research in the near future.<br />
<br />
*[[User:hoffman|Matt Hoffman]] is in his fifth and final (?) year of a PhD at Portland State University. I am developing a spatially-distributed energy balance model for the glaciers of the McMurdo Dry Valleys, Antarctica. The glaciers of the Dry Valleys are near the threshold of melt during summer, such that sublimation and melt are of similar magnitude. I anticipate the Summer School will develop my skills as a modeler and help me think about the relationships between surface mass balance and ice dynamics.<br />
<br />
*[[User:Dlindsey|Daniel Seneca Lindsey]]: I am beginning my second year in the department of Earth System Science at the University of California Irvine. I am primarily interested in modeling ice dynamics for Greenland and Antarctica. I have dabbled in subglacial hydrology and finding a basal friction field by inverting surface ice velocities. I am currently working on a model which applies the numerical level set method to track an ice-shelf ice/ocean interface through time.<br />
<br />
*[[Toby Meierbachtol]]: I am beginning my PhD at The University of Montana this fall. While still in the beginning stages, my research will likely be focused on the subglacial hydrology of the Greenland ice sheet and controls on sliding through direct borehole observations. Additionally, I anticipate a modeling component to my research that could include incorporating field findings to constrain model results, or investigating uncertainties in boundary conditions. The Summer School is a great way for me to jump in with both feet.<br />
<br />
*[http://www.civil.uwaterloo.ca/our_people/dept_person.asp?id=sdnorman Stefano Normani]: I am a Civil Engineer and recently completed my PhD in the Department of Civil and Environmental Engineering at the University of Waterloo, Canada. My PhD work focused on the movement of pore fluids in deep subsurface environments, both in crystalline and sedimentary rock, which are affected by continental ice-sheets. I have a strong background in the modeling of flow and transport processes in fractured and porous media, and I'm attending this Summer School to gain a broader and deeper understanding of the physics and modeling of ice-sheets.<br />
<br />
*[[User:mcgovej|Jonathan McGovern]]: I am finishing the first year of my PhD at Swansea University, UK. The project investigates the sensitivity of Greenland ice sheet models. This will involve performing geometry sensitivity tests with the Glimmer model, with respect to basal boundary conditions in particular. I have written a simple EISMINT program. Depending on feasibility and practicality, the project will involve either using the adjoint model or, more likely, running ensembles.<br />
<br />
*[[Doug Brinkerhoff]]: I will (with any luck) begin working towards an MS in January of 2010. This work will most likely center on improving approximations of fluxes in basal hydrology by incorporating empirically derived flow relationships between substrate and velocity into an ice sheet model. My background is in fluvial geomorphology, and my experience in modeling is limited to the last year; that said, I look forward to the opportunity to participate in an intensive course such as this and to improve my skills in the various topics covered.<br />
<br />
*[http://www.vaw.ethz.ch/people/gz/werderm Mauro Werder]: I just finished my PhD on the jökulhlaups (glacier lake outburst floods) of Gornersee, an ice-marginal lake on Gornergletscher, Switzerland. I studied the evolution of the glacial drainage system prior to, during, and after the outburst with tracer experiments and measurements of subglacial water pressure, proglacial discharge, and lake discharge. I simulated the measured tracer transit speeds with existing and new hydraulic models. At the beginning of next year I'll start a postdoc at Simon Fraser University, Vancouver, where I plan to develop a new hydraulic model of the glacial drainage system to simulate seasonal and daily evolution as well as jökulhlaups.<br />
<br />
* [[Yuanxiang Wang]]: I just finished my Ph.D. at the Chinese Academy of Meteorological Sciences. My research examines the effect of climate on glaciers of the Tibetan Plateau, using a model driven by different climate forcings. I have studied present-day glaciers, the ice sheet at the Last Glacial Maximum, and the Little Ice Age, and have projected the variability of glaciers in this century under SRES climate scenarios. I hope to further study the GLIMMER model in order to simulate mountain glaciers, especially their dynamic features.<br />
<br />
*[[Fiona Seifert]]: I will begin working on an MS at Portland State University this fall, after completing degrees in Geology and in Math. I will be working on a problem involving the grounding line region of Kamb Ice Stream in West Antarctica.<br />
<br />
*[[Saffia Hossainzadeh]]: I will begin my Ph.D. at UC Santa Cruz on a yet-to-be-determined project involving ice sheet dynamics. I've studied ice sheet modeling processes and techniques for two summers during my undergraduate time studying physics at the University of Chicago. My glaciology experience has also included going to the deep field on the Whillans Ice Stream to help conduct active and passive seismic experiments with Dr. Slawek Tulaczyk, Dr. John Woodward, and Jake Walter.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Student_PresentationStudent Presentation2009-08-07T20:53:51Z<p>Hoffman: Student Presentation moved to Student Bios: so that we can use 'Student Presentation' for developing our presentation</p>
<hr />
<div>#REDIRECT [[Student Bios]]</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Summer_Modeling_SchoolSummer Modeling School2009-08-07T20:53:11Z<p>Hoffman: /* Student Participants */</p>
<hr />
<div>[[Image:Portland.jpg|thumb|right|400 px|The summer ice sheet modeling school will be held in Portland, Oregon, August 3-14, 2009]]<br />
<br />
==Overview==<br />
The Summer Modeling School is an intensive program that will bring current and future ice-sheet scientists together to develop better models for the projection of future sea-level rise. The IPCC Fourth Assessment Report [http://www.ipcc.ch/ipccreports/ar4-syr.htm] acknowledged that current models do not adequately treat the dynamic response of ice sheets to climate change, and that this is the largest uncertainty in assessing potential rapid sea-level rise. Recognizing this, an ice-sheet modelling workshop was held during the July 2008 SCAR/IASC meeting [https://www.comnap.aq/content/events/osc2008] in St. Petersburg, Russia. That meeting developed a community strategy on how best to (i) improve the physical understanding of ice-sheet processes responsible for rapid change; (ii) incorporate improved physical understanding into numerical models; (iii) assimilate appropriate data into the models for calibration and validation; and (iv) develop prognostic whole ice-sheet models that better capture the non-linear ice-sheet response to environmental forcing (such as changes in surface mass balance, loss of buttressing from floating ice shelves and ice tongues, and rising sea level). <br />
<br />
The two-week Summer School is a first step towards implementing this strategy. It will bring scientists from differing backgrounds together and allow more extensive and in-depth interactions between the relevant scientific research communities. A series of general background lectures as well as discussions of more specialized and advanced topics during this Summer School will provide the foundation for cross-disciplinary research, particularly for early career scientists. We anticipate publication of lecture notes both in hard copy and on a dedicated home page, to provide the glaciological community with an up-to-date overview of the science and observational techniques that will serve to guide further research efforts. Direct beneficiaries will be young researchers; indirect beneficiaries will be coastal zone communities who will gain improved sea level change forecasts to underpin their plans for sustainable development.<br />
<br />
===Venue===<br />
The modeling school will be held on the campus of [[Wikipedia:Portland State University|Portland State University]] in [[Wikipedia:Portland, Oregon|Portland, Oregon]] August 3-14, 2009.<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=Portland+Airport&daddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&geocode=&hl=en&mra=ls&dirflg=r&date=07%2F28%2F09&time=8:59am&ttype=dep&noexp=0&noal=0&sort=&tline=&sll=45.54878,-122.629155&sspn=0.092445,0.144367&ie=UTF8&ll=45.548679,-122.619438&spn=0.092445,0.144367&z=13&start=0 Map] from airport to [http://cegs.pdx.edu/stay/upl/ University Place Hotel] using public transport (note that the directions in your travel letter are better than the Google generated instructions here).<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&daddr=1721+SW+Broadway,+Portland,+OR+97201+(Cramer+Hall)&hl=en&geocode=FdVhtgIdZwqw-CHO0mMQPCwi0Q%3BFRN3tgIdvP2v-CHxCBg32xEzXA&mra=ls&dirflg=w&sll=45.51029,-122.681675&sspn=0.005782,0.009023&ie=UTF8&ll=45.510091,-122.68232&spn=0.005782,0.009023&z=17 Map] from [http://cegs.pdx.edu/stay/upl/ University Place Hotel] to [http://www.pdx.edu/campus-map Cramer Hall].<br />
<br />
=== Student Participants ===<br />
<br />
*[[Student Presentation]]<br />
*[[Student Presentation Development]]<br />
*[[Groups]] example of [[connections in groups]]<br />
*[[Terminology]]<br />
*[[Questions]]<br />
<br />
===Lectures and Planned Activities===<br />
<br />
For information about editing this page, see [[Wikipedia:How to edit]].<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
|-valign="top" style="background:RoyalBlue"<br />
!width="20%"|Dates<br />
!width="25%"|Lecture Topics<br />
!width="15%"|Lecturers<br />
!width="25%"|Laboratory Topics<br />
!width="15%"|Laboratory Instructors <br />
|-valign="top" style="background:AliceBlue"<br />
| [[4-5 August]]<br />
| Introduction to and theoretical basis for ice sheet modeling. <br />
| Kees van der Veen, [[Nina Kirchner]] <br />
| [[Finite differencing|Finite differencing]] and [[Pragmatic Programming|pragmatic programming]] using Fortran 95 [http://en.wikipedia.org/wiki/Fortran]...<br />
computing divergence and gradient...<br />
from conservation equation to matrix algebra...<br />
rheology and that which makes ice ice...<br />
simple, ideal models...<br />
that which makes ice-sheet modeling hard...<br />
| Gethin Williams, [[Ian Rutt]], [[Jesse Johnson]]<br />
|-valign="top" style="background:PowderBlue"<br />
| 6 August <br />
| [[Basal Conditions]], [[Data sets for ice sheet modeling]]<br />
| Alan Rempel, Slawek Tulaczyk and Ken Jezek<br />
| [[COMSOL Multiphysics]]<br />
| Olga Sergienko and Jesse Johnson<br />
|-valign="top" style="background:AliceBlue"<br />
| 7 August<br />
| The world of [[ice shelves]] and 'distributed stress-field solutions'. [[Modelling mountain glaciers]].<br />
| Todd Dupont, Olga Sergienko, and Brian Anderson<br />
| Linear algebra of ice-sheet modeling, relaxation methods, finite-element methodology, solution of the Laplace equation in an arbitrary domain, creation of an ice-shelf flow-field model (snapshot of the flow field), models of the Ross Ice Shelf<br />
| Olga Sergienko and Todd Dupont<br />
<br />
|-valign="top" style="background:PowderBlue"<br />
| 8 August<br />
| [[Student Presentation]]<br />
| Modeling School Students<br />
| open work day with breakfast at 8 am & student presentation at 9 am<br />
| go to the farmer's market<br />
|-valign="top" style="background:AliceBlue"<br />
| 9 August<br />
| Free day; possible PDX tour<br />
|<br />
|<br />
|<br />
|-valign="top" style="background:PowderBlue"<br />
| 10 August<br />
| Excursion to Mt. Hood and [[Eliot Glacier field trip]] <br> Meet at 7:00 am for breakfast in Cramer Hall. Bring a jacket, water, and sensible (supportive) shoes.<br> Box lunches are provided and we will stop for dinner on the return trip (bring cash). <br />
| Guided by [http://web.pdx.edu/~basagic/ Hassan Basagic]<br />
|<br />
|<br />
|-valign="top" style="background:AliceBlue"<br />
| 11 August<br />
| [[Quantifying model uncertainty]]<br />
| Charles Jackson and Patrick Heimbach<br />
| Uncertainty lab; [[Dynamic response to the enhanced basal flow in the Greenland ice sheet]], presented by Weli Wang<br />
| Charles Jackson, Patrick Heimbach, and Weli Wang<br />
|-valign="top" style="background:PowderBlue"<br />
| [[12-13 August]]<br />
| Introduction to Glimmer-CISM ([[Introduction to Glimmer I|Part I]], [[Introduction to Glimmer II|Part II]] and [[Glimmer-CISM|Part III]]); [[Higher order velocity schemes|Higher-order models]]<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], [[Stephen Price]], Bill Lipscomb, [[Jesse Johnson]]<br />
| Software development and [[Adding a module to Glimmer I|creating a module for Glimmer]], [[representing and manipulating data]]. [[Grounding line treatments]], presented by Sophie Nowicki. [[Verifying ice sheet models]], presented by Aitbala Sargent<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], Gethin Williams, Stephen Price, Bill Lipscomb, [[Jesse Johnson]]<br />
|-valign="top" style="background:AliceBlue"<br />
| 14 August<br />
| [[Coupling the Cryosphere to other Earth systems]]<br />
| Bill Lipscomb and [[Ian Rutt]]<br />
| Community Climate System Model (CCSM) Lab<br />
| Bill Lipscomb, [[Jesse Johnson]], Stephen Price and [[Ian Rutt]]<br />
|}<br />
<br />
====[[Typical Daily Schedule]]====<br />
<br />
===Resources===<br />
<br />
Additional student/instructor resources for the Summer School:<br />
* List of [[Computing Resources and Room Description]]<br />
* Details of [[Eliot Glacier field trip]]<br />
* An outline [[Reading List]]<br />
* [[Notes]] from daily lectures<br />
* Portland [[dining and brewpub suggestions]]<br />
* [[PDX afterhours]]<br />
* [[ideas for Portland extracurricular activities]]<br />
<br />
===Application and Registration===<br />
''The window for receipt of student applications has closed. Thank you for your interest in the program. ''<br />
<br />
The registration fee for the course is US $350.<br />
<br />
===Funding Agencies===<br />
<br />
<br />
{|<br />
|-valign="top"<br />
|[[Image:iscu.jpg|300 px]]<br />
|[[Image:scar.jpg|150 px]]<br />
|-valign="top"<br />
|[[Image:wcrp.jpg|200 px]]<br />
|[[Image:nsf_logo.gif|300px]]<br />
|-valign="top"<br />
|[[Image:cresis.jpg|100 px]]<br />
|[[Image:cires.jpg|350 px]]<br />
|-valign="top"<br />
|[[Image:IASC_logo_07_RGB.jpg|100 px]]<br />
|}<br />
<br />
===Organizing Committee===<br />
Christina Hulbe, Jesse Johnson, Cornelis van der Veen</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Team_6_SolutionTeam 6 Solution2009-08-06T21:34:52Z<p>Hoffman: </p>
<hr />
<div><source lang="fortran"><br />
<br />
!> 1D Convection Diffusion equations solver in Fortran<br />
!!<br />
!! Solves the equation:<br />
!!<br />
!!\f[<br />
!!\frac{du}{dt}=\frac{d}{dx}\left(D(x)\frac{du}{dx}\right) + C(x)\frac{du}{dx}+F(x)u-S(x)<br />
!!\f]<br />
!! for \f$u\f$, given functions for \f$D\f$, \f$C\f$, \f$F\f$, and \f$S\f$, defined in this program<br />
!!<br />
!! Explicit methods are used<br />
!!<br />
!! \author Matt & Erin & Sophie (jvj)<br />
!! \date 8-5-09<br />
<br />
program OurCode<br />
<br />
<br />
implicit none<br />
<br />
! local variables<br />
<br />
integer :: nx ! Number of nodes<br />
real, parameter :: dt = 0.1 ! length time step (years)<br />
integer, parameter :: nt = 10000 ! number of time steps<br />
integer :: t ! current time step<br />
real :: xl ! start of domain<br />
real :: xr ! end of domain (m)<br />
real :: Const ! (2*A)*(rho*grav)^n/(n+2)<br />
real, parameter :: dx = 1000 ! node spacing (m)<br />
real, parameter :: dbdx = -0.1 ! bedslope (m/m)<br />
real, parameter :: g = 9.8 ! gravity (m/s2)<br />
real, parameter :: rho = 917 ! density of ice (kg/m3)<br />
real, parameter :: A = 1e-16 ! Glen rate factor (kPa-3 a-1)<br />
integer, parameter :: n = 3 ! Glen flow exponent (unitless); integer, so raising a negative slope to n-1 is well defined<br />
real, parameter :: M0 = 4.0 ! m/yr<br />
real, parameter :: M1 = 2.0/10000.0 ! m/yr/m<br />
<br />
real, dimension(:), allocatable :: elev ! surface elevation (m)<br />
real, dimension(:), allocatable :: bedelev ! bed elevation (m), y origin is at the bed elevation at the left of the domain. Up is up!<br />
real, dimension(:), allocatable :: H ! thickness (m)<br />
real, dimension(:), allocatable :: Mb ! Mass Balance (m/yr)<br />
real, dimension(:), allocatable :: dhdt_store ! space to store dH/dt<br />
real, dimension(:), allocatable :: xref ! reference distance (m)<br />
<br />
real, dimension(:), allocatable :: d ! diffusivity coeff<br />
real, dimension(:), allocatable :: mflux ! mass flux between grid points<br />
<br />
integer :: ii ! a counter<br />
integer :: jj <br />
integer :: errstat ! for error checking<br />
<br />
<br />
! Set up grid <br />
! Space<br />
xl = 0.0<br />
xr = 60000.0 !(m) our guess at how large of a domain we need (started with 60km)<br />
nx = int( ((xr - xl) / dx) +1 )<br />
<br />
! let's allocate some memory<br />
allocate(elev(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate elev")<br />
<br />
allocate(xref(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate xref")<br />
<br />
allocate(bedelev(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate bedelev")<br />
<br />
allocate(H(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate H")<br />
<br />
allocate(Mb(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate Mb")<br />
<br />
allocate(d(nx-1),stat=errstat)<br />
call checkerr(errstat,"failed to allocate d")<br />
<br />
allocate(mflux(nx-1),stat=errstat)<br />
call checkerr(errstat,"failed to allocate mflux")<br />
<br />
allocate(dhdt_store(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate dhdt_store")<br />
<br />
<br />
!We could have done this as a function, but Matt and Erin revolted against Sophie and said no.<br />
do ii=1,nx <br />
bedelev(ii) = dx*dbdx*real(ii-1) !reminder, at x = 0, bedelev = 0 m<br />
Mb(ii) = M0 - M1*(dx*real(ii-1)) !Mass balance equation (m/yr)<br />
xref(ii)= real(ii-1)*dx<br />
enddo<br />
<br />
!Constant C<br />
Const = (2.0*A)/real(n+2)*(rho*g)**n<br />
<br />
!Initial conditions<br />
H = 0.0 !set thickness everywhere in x as 0.0 m<br />
elev = bedelev + H !No glacier yet<br />
<br />
time_loop: do t=1,nt<br />
<br />
spatial_midpoint_loop: do ii=1,nx-1 <br />
!Calculate flux midway between elevation points<br />
d(ii) = Const * ((H(ii)+H(ii+1))/2.0)**(n+2) * (abs(elev(ii+1)-elev(ii))/dx)**(n-1) ! |ds/dx|**(n-1): abs() keeps a negative slope safe to exponentiate<br />
mflux(ii)= -d(ii) * (elev(ii+1)-elev(ii) )/dx<br />
enddo spatial_midpoint_loop<br />
<br />
H(1) = 0.0 ! left boundary condition (this could be moved above time_loop)<br />
<br />
spatial_gridpoint_loop: do jj=2,nx-1 <br />
dhdt_store(jj) = - (mflux(jj)-mflux(jj-1))/dx + Mb(jj)<br />
H(jj) = H(jj) + dhdt_store(jj) * dt !new thickness<br />
!search for terminus location... <br />
if (H(jj)<0) then<br />
H(jj) = 0<br />
!write (*,*) jj, (jj-1)*dx<br />
endif<br />
enddo spatial_gridpoint_loop<br />
<br />
<br />
!to update 'elev'<br />
elev = bedelev + H<br />
<br />
!output geometry (also need to output geom at t=0 ?)<br />
if (mod(t,10)==0) then<br />
write (*,*) elev<br />
endif<br />
end do time_loop<br />
<br />
contains <br />
<br />
subroutine checkerr(errstat,msg)<br />
implicit none<br />
integer, intent(in) :: errstat<br />
character(*), intent(in) :: msg <br />
if (errstat /= 0) then<br />
write(*,*) "ERROR:", msg<br />
stop<br />
end if<br />
end subroutine checkerr<br />
<br />
<br />
end program OurCode<br />
<br />
<br />
<br />
</source></div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Summer_Modeling_SchoolSummer Modeling School2009-08-06T19:03:05Z<p>Hoffman: /* Lectures and Planned Activities */</p>
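For readers to whom the Fortran allocation boilerplate obscures the numerics, here is a compact sketch of the same explicit scheme in Python/NumPy. This is our own translation for clarity, not part of the course materials; the function and variable names (`step`, `Hmid`, `flux`) are ours, while the parameter values mirror the Fortran program: diffusivity evaluated at grid midpoints, flux-form mass conservation, and forward-Euler time stepping.

```python
import numpy as np

# Parameters, mirroring the Fortran solver above
n = 3                      # Glen flow exponent
A = 1e-16                  # rate factor
rho, g = 917.0, 9.8        # ice density (kg/m3), gravity (m/s2)
dx, dt = 1000.0, 0.1       # node spacing (m), time step (yr)
M0, M1 = 4.0, 2.0 / 10000.0
const = 2.0 * A / (n + 2) * (rho * g) ** n

def step(H, bed, Mb):
    """Advance ice thickness H one explicit time step."""
    elev = bed + H
    slope = np.diff(elev) / dx                 # surface slope at midpoints
    Hmid = 0.5 * (H[:-1] + H[1:])              # thickness at midpoints
    D = const * Hmid ** (n + 2) * np.abs(slope) ** (n - 1)
    flux = -D * slope                          # mass flux at midpoints
    Hnew = H.copy()
    Hnew[1:-1] += dt * (-np.diff(flux) / dx + Mb[1:-1])
    Hnew[0] = 0.0                              # left boundary condition
    return np.maximum(Hnew, 0.0)               # no negative thickness

x = np.arange(0.0, 60000.0 + dx, dx)           # 60 km domain
bed = -0.1 * x                                 # constant bed slope
Mb = M0 - M1 * x                               # linear mass balance (m/yr)
H = np.zeros_like(x)                           # start with no glacier
for _ in range(100):                           # 10 model years
    H = step(H, bed, Mb)
```

Because the scheme is explicit, the time step must satisfy the usual diffusive stability bound dt < dx²/(2·max D); with the values above the diffusivity stays far below that limit.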
<hr />
<div>[[Image:Portland.jpg|thumb|right|400 px|The summer ice sheet modeling school will be held in Portland Oregon, August 3-14, 2009]]<br />
<br />
==Overview==<br />
The Summer Modeling School will be an intensive Summer School that will bring current and future ice-sheet scientists together to develop better models for the projection of future sea-level rise (slr). The IPCC Fourth Assessment Report [http://www.ipcc.ch/ipccreports/ar4-syr.htm] acknowledged that current models do not adequately treat the dynamic response of ice sheets to climate change, and that this is the largest uncertainty in assessing potential rapid sea-level rise. Recognizing this, an ice-sheet modelling Workshop was held during the July 2008 SCAR/IASC [https://www.comnap.aq/content/events/osc2008] meeting, in St. Petersburg, Russia. This meeting developed a community strategy on how best to (i) improve the physical understanding of ice-sheet processes responsible for rapid change; (ii) incorporate improved physical understanding into numerical models; (iii) assimilate appropriate data into the models for calibration and validation; and (iv) develop prognostic whole ice-sheet models that better incorporate non-linear ice-sheet response to environmental forcing (such as change in surface mass balance, loss of buttressing from floating ice shelves and ice tongues, and rising sea level). <br />
<br />
The two-week Summer School is a first step towards implementing this strategy. It will bring scientists from differing backgrounds together and allow more extensive and in-depth interactions between the relevant scientific research communities. A series of general background lectures as well as discussions of more specialized and advanced topics during this Summer School will provide the foundation for cross-disciplinary research, particularly for early career scientists. We anticipate publication of lecture notes both in hard copy and on a dedicated home page, to provide the glaciological community with an up-to-date overview of the science and observational techniques that will serve to guide further research efforts. Direct beneficiaries will be young researchers; indirect beneficiaries will be coastal zone communities who will gain improved sea level change forecasts to underpin their plans for sustainable development.<br />
<br />
===Venue===<br />
The modeling school will be held on the campus of [[Wikipedia:Portland State University|Portland State University]] in [[Wikipedia:Portland, Oregon|Portland, Oregon]] August 3-14, 2009.<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=Portland+Airport&daddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&geocode=&hl=en&mra=ls&dirflg=r&date=07%2F28%2F09&time=8:59am&ttype=dep&noexp=0&noal=0&sort=&tline=&sll=45.54878,-122.629155&sspn=0.092445,0.144367&ie=UTF8&ll=45.548679,-122.619438&spn=0.092445,0.144367&z=13&start=0 Map] from airport to [http://cegs.pdx.edu/stay/upl/ University Place Hotel] using public transport (note that the directions in your travel letter are better than the Google generated instructions here).<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&daddr=1721+SW+Broadway,+Portland,+OR+97201+(Cramer+Hall)&hl=en&geocode=FdVhtgIdZwqw-CHO0mMQPCwi0Q%3BFRN3tgIdvP2v-CHxCBg32xEzXA&mra=ls&dirflg=w&sll=45.51029,-122.681675&sspn=0.005782,0.009023&ie=UTF8&ll=45.510091,-122.68232&spn=0.005782,0.009023&z=17 Map] from [http://cegs.pdx.edu/stay/upl/ University Place Hotel] to [http://www.pdx.edu/campus-map Cramer Hall].<br />
<br />
=== Student Participants ===<br />
<br />
*[[Student Presentation]]<br />
*[[Groups]] example of [[connections in groups]]<br />
*[[Terminology]]<br />
*[[Questions]]<br />
<br />
===Lectures and Planned Activities===<br />
<br />
For information about editing this page, see [[Wikipedia:How to edit]].<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
|-valign="top" style="background:RoyalBlue"<br />
!width="20%"|Dates<br />
!width="25%"|Lecture Topics<br />
!width="15%"|Lecturers<br />
!width="25%"|Laboratory Topics<br />
!width="15%"|Laboratory Instructors <br />
|-valign="top" style="background:AliceBlue"<br />
| [[4-5 August]]<br />
| Introduction to and theoretical basis for ice sheet modeling. <br />
| Kees van der Veen, [[Nina Kirchner]] <br />
| [[Finite differencing|Finite differencing]] and [[Pragmatic Programming|pragmatic programming]] using Fortran[http://en.wikipedia.org/wiki/Fortran] 95...<br />
computing divergence and gradient...<br />
from conservation equation to matrix algebra...<br />
rheology and that which makes ice ice...<br />
simple, ideal models...<br />
that which makes ice-sheet modeling hard...<br />
| Gethin Williams, [[Ian Rutt]], [[Jesse Johnson]]<br />
|-valign="top" style="background:PowderBlue"<br />
| 6 August <br />
| [[Basal Conditions]], [[Data sets for ice sheet modeling]]<br />
| Alan Rempel, Slawek Tulaczyk and Ken Jezek<br />
| COMSOL Multiphysics<br />
| Olga Sergienko and Jesse Johnson<br />
|-valign="top" style="background:AliceBlue"<br />
| 7 August<br />
| The world of [[ice shelves]] and 'distributed stress-field solutions'.[[Modelling mountain glaciers]].<br />
| Todd Dupont, Olga Sergienko, and Brian Anderson<br />
| Linear Algebra of ice-sheet modeling, relaxation methods, finite-element methodology, solution of Laplace equation in arbitrary domain, creation of an ice-shelf flow-field model (snap shot of flow field), Models of the Ross Ice Shelf<br />
| Olga Sergienko and Todd Dupont<br />
<br />
|-valign="top" style="background:PowderBlue"<br />
| 8 August<br />
| [[Student Presentation]]<br />
| Modeling School Students<br />
| open work day with breakfast & presentations in the morning<br />
| go to the farmer's market<br />
|-valign="top" style="background:AliceBlue"<br />
| 9 August<br />
| Free day; possible PDX tour<br />
|<br />
|<br />
|<br />
|-valign="top" style="background:PowderBlue"<br />
| 10 August<br />
| Excursion to Mt. Hood and [[Eliot Glacier field trip]]<br />
| Guided by [http://web.pdx.edu/~basagic/ Hassan Basagic]<br />
|<br />
|<br />
|-valign="top" style="background:AliceBlue"<br />
| 11 August<br />
| [[Quantifying model uncertainty]]<br />
| Charles Jackson and Patrick Heimbach<br />
| Uncertain lab, [[Dynamic response to the enhanced basal flow in the Greenland ice sheet]] Weli Wang<br />
| Charles Jackson, Patrick Heimbach, and Weli Wang<br />
|-valign="top" style="background:PowderBlue"<br />
| [[12-13 August]]<br />
| Introduction to Glimmer-CISM ([[Introduction to Glimmer I|Part I]], [[Introduction to Glimmer II|Part II]] and [[Glimmer-CISM|Part III]]); [[Higher order velocity schemes|Higher-order models]]<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], [[Stephen Price]], Bill Lipscomb, [[Jesse Johnson]]<br />
| Software development and [[Adding a module to Glimmer I|creating a module for Glimmer]], [[representing and manipulating data]]. [[Grounding line treatments]], presented by Sophie Nowicki. [[Verifying ice sheet models]], presented by Aitbala Sargent<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], Gethin Williams, Stephen Price, Bill Lipscomb, [[Jesse Johnson]]<br />
|-valign="top" style="background:AliceBlue"<br />
| 14 August<br />
| [[Coupling the Cryosphere to other Earth systems]]<br />
| Bill Lipscomb and [[Ian Rutt]]<br />
| Community Climate System Model (CCSM) Lab<br />
| Bill Lipscomb, [[Jesse Johnson]], Stephen Price and [[Ian Rutt]]<br />
|}<br />
<br />
====[[Typical Daily Schedule]]====<br />
<br />
===Resources===<br />
<br />
Additional student/instructor resources for the Summer School:<br />
* List of [[Computing Resources and Room Description]]<br />
* Details of [[Eliot Glacier field trip]]<br />
* An outline [[Reading List]]<br />
* [[Notes]] from daily lectures<br />
* Portland [[dining and brewpub suggestions]]<br />
* [[PDX afterhours]]<br />
<br />
===Application and Registration===<br />
''The window for receipt of student applications has closed. Thank you for your interest in the program. ''<br />
<br />
The registration fee for the course is US $350.<br />
<br />
===Funding Agencies===<br />
<br />
<br />
{|<br />
|-valign="top"<br />
|[[Image:iscu.jpg|300 px]]<br />
|[[Image:scar.jpg|150 px]]<br />
|-valign="top"<br />
|[[Image:wcrp.jpg|200 px]]<br />
|[[Image:nsf_logo.gif|300px]]<br />
|-valign="top"<br />
|[[Image:cresis.jpg|100 px]]<br />
|[[Image:cires.jpg|350 px]]<br />
|-valign="top"<br />
|[[Image:IASC_logo_07_RGB.jpg|100 px]]<br />
|}<br />
<br />
===Organizing Committee===<br />
Christina Hulbe, Jesse Johnson, Cornelis van der Veen</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Summer_Modeling_SchoolSummer Modeling School2009-08-06T19:02:32Z<p>Hoffman: /* Lectures and Planned Activities */</p>
<hr />
<div>[[Image:Portland.jpg|thumb|right|400 px|The summer ice sheet modeling school will be held in Portland Oregon, August 3-14, 2009]]<br />
<br />
==Overview==<br />
The Summer Modeling School will be an intensive Summer School that will bring current and future ice-sheet scientists together to develop better models for the projection of future sea-level rise (slr). The IPCC Fourth Assessment Report [http://www.ipcc.ch/ipccreports/ar4-syr.htm] acknowledged that current models do not adequately treat the dynamic response of ice sheets to climate change, and that this is the largest uncertainty in assessing potential rapid sea-level rise. Recognizing this, an ice-sheet modelling Workshop was held during the July 2008 SCAR/IASC [https://www.comnap.aq/content/events/osc2008] meeting, in St. Petersburg, Russia. This meeting developed a community strategy on how best to (i) improve the physical understanding of ice-sheet processes responsible for rapid change; (ii) incorporate improved physical understanding into numerical models; (iii) assimilate appropriate data into the models for calibration and validation; and (iv) develop prognostic whole ice-sheet models that better incorporate non-linear ice-sheet response to environmental forcing (such as change in surface mass balance, loss of buttressing from floating ice shelves and ice tongues, and rising sea level). <br />
<br />
The two-week Summer School is a first step towards implementing this strategy. It will bring scientists from differing backgrounds together and allow more extensive and in-depth interactions between the relevant scientific research communities. A series of general background lectures as well as discussions of more specialized and advanced topics during this Summer School will provide the foundation for cross-disciplinary research, particularly for early career scientists. We anticipate publication of lecture notes both in hard copy and on a dedicated home page, to provide the glaciological community with an up-to-date overview of the science and observational techniques that will serve to guide further research efforts. Direct beneficiaries will be young researchers; indirect beneficiaries will be coastal zone communities who will gain improved sea level change forecasts to underpin their plans for sustainable development.<br />
<br />
===Venue===<br />
The modeling school will be held on the campus of [[Wikipedia:Portland State University|Portland State University]] in [[Wikipedia:Portland, Oregon|Portland, Oregon]] August 3-14, 2009.<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=Portland+Airport&daddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&geocode=&hl=en&mra=ls&dirflg=r&date=07%2F28%2F09&time=8:59am&ttype=dep&noexp=0&noal=0&sort=&tline=&sll=45.54878,-122.629155&sspn=0.092445,0.144367&ie=UTF8&ll=45.548679,-122.619438&spn=0.092445,0.144367&z=13&start=0 Map] from airport to [http://cegs.pdx.edu/stay/upl/ University Place Hotel] using public transport (note that the directions in your travel letter are better than the Google generated instructions here).<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&daddr=1721+SW+Broadway,+Portland,+OR+97201+(Cramer+Hall)&hl=en&geocode=FdVhtgIdZwqw-CHO0mMQPCwi0Q%3BFRN3tgIdvP2v-CHxCBg32xEzXA&mra=ls&dirflg=w&sll=45.51029,-122.681675&sspn=0.005782,0.009023&ie=UTF8&ll=45.510091,-122.68232&spn=0.005782,0.009023&z=17 Map] from [http://cegs.pdx.edu/stay/upl/ University Place Hotel] to [http://www.pdx.edu/campus-map Cramer Hall].<br />
<br />
=== Student Participants ===<br />
<br />
*[[Student Presentation]]<br />
*[[Groups]] example of [[connections in groups]]<br />
*[[Terminology]]<br />
*[[Questions]]<br />
<br />
===Lectures and Planned Activities===<br />
<br />
For information about editing this page, see [[Wikipedia:How to edit]].<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
|-valign="top" style="background:RoyalBlue"<br />
!width="20%"|Dates<br />
!width="25%"|Lecture Topics<br />
!width="15%"|Lecturers<br />
!width="25%"|Laboratory Topics<br />
!width="15%"|Laboratory Instructors <br />
|-valign="top" style="background:AliceBlue"<br />
| [[4-5 August]]<br />
| Introduction to and theoretical basis for ice sheet modeling. <br />
| Kees van der Veen, [[Nina Kirchner]] <br />
| [[Finite differencing|Finite differencing]] and [[Pragmatic Programming|pragmatic programming]] using [http://en.wikipedia.org/wiki/Fortran Fortran 95]...<br />
computing divergence and gradient...<br />
from conservation equation to matrix algebra...<br />
rheology and that which makes ice ice...<br />
simple, ideal models...<br />
that which makes ice-sheet modeling hard...<br />
| Gethin Williams, [[Ian Rutt]], [[Jesse Johnson]]<br />
|-valign="top" style="background:PowderBlue"<br />
| 6 August <br />
| [[Basal Conditions]], [[Data sets for ice sheet modeling]]<br />
| Alan Rempel, Slawek Tulaczyk and Ken Jezek<br />
| COMSOL Multiphysics<br />
| Olga Sergienko and Jesse Johnson<br />
|-valign="top" style="background:AliceBlue"<br />
| 7 August<br />
| The world of [[ice shelves]] and 'distributed stress-field solutions'. [[Modelling mountain glaciers]].<br />
| Todd Dupont, Olga Sergienko, and Brian Anderson<br />
| Linear Algebra of ice-sheet modeling, relaxation methods, finite-element methodology, solution of Laplace equation in arbitrary domain, creation of an ice-shelf flow-field model (snap shot of flow field), Models of the Ross Ice Shelf<br />
| Olga Sergienko and Todd Dupont<br />
<br />
|-valign="top" style="background:PowderBlue"<br />
| 8 August<br />
| [[Student Presentation]]<br />
| Modeling School Students<br />
| open work day with breakfast & presentations in the morning<br />
| go to the farmer's market<br />
|-valign="top" style="background:AliceBlue"<br />
| 9 August<br />
| Free day; possible PDX tour<br />
|<br />
|<br />
|<br />
|-valign="top" style="background:PowderBlue"<br />
| 10 August<br />
| Excursion to Mt. Hood and [[Eliot Glacier field trip]]<br />
| Guided by [http://web.pdx.edu/~basagic/ Hassan Basagic]<br />
|<br />
|<br />
|-valign="top" style="background:AliceBlue"<br />
| 11 August<br />
| [[Quantifying model uncertainty]]<br />
| Charles Jackson and Patrick Heimbach<br />
| Uncertainty lab; [[Dynamic response to the enhanced basal flow in the Greenland ice sheet]], presented by Weli Wang<br />
| Charles Jackson, Patrick Heimbach, and Weli Wang<br />
|-valign="top" style="background:PowderBlue"<br />
| [[12-13 August]]<br />
| Introduction to Glimmer-CISM ([[Introduction to Glimmer I|Part I]], [[Introduction to Glimmer II|Part II]] and [[Glimmer-CISM|Part III]]); [[Higher order velocity schemes|Higher-order models]]<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], [[Stephen Price]], Bill Lipscomb, [[Jesse Johnson]]<br />
| Software development and [[Adding a module to Glimmer I|creating a module for Glimmer]], [[representing and manipulating data]]. [[Grounding line treatments]], presented by Sophie Nowicki. [[Verifying ice sheet models]], presented by Aitbala Sargent<br />
| [[Ian Rutt]], [[Magnus Hagdorn]], Gethin Williams, Stephen Price, Bill Lipscomb, [[Jesse Johnson]]<br />
|-valign="top" style="background:AliceBlue"<br />
| 14 August<br />
| [[Coupling the Cryosphere to other Earth systems]]<br />
| Bill Lipscomb and [[Ian Rutt]]<br />
| Community Climate System Model (CCSM) Lab<br />
| Bill Lipscomb, [[Jesse Johnson]], Stephen Price and [[Ian Rutt]]<br />
|}<br />
<br />
====[[Typical Daily Schedule]]====<br />
<br />
===Resources===<br />
<br />
Additional student/instructor resources for the Summer School:<br />
* List of [[Computing Resources and Room Description]]<br />
* Details of [[Eliot Glacier field trip]]<br />
* An outline [[Reading List]]<br />
* [[Notes]] from daily lectures<br />
* Portland [[dining and brewpub suggestions]]<br />
* [[PDX afterhours]]<br />
<br />
===Application and Registration===<br />
''The window for receipt of student applications has closed. Thank you for your interest in the program. ''<br />
<br />
The registration fee for the course is US $350.<br />
<br />
===Funding Agencies===<br />
<br />
<br />
{|<br />
|-valign="top"<br />
|[[Image:iscu.jpg|300 px]]<br />
|[[Image:scar.jpg|150 px]]<br />
|-valign="top"<br />
|[[Image:wcrp.jpg|200 px]]<br />
|[[Image:nsf_logo.gif|300px]]<br />
|-valign="top"<br />
|[[Image:cresis.jpg|100 px]]<br />
|[[Image:cires.jpg|350 px]]<br />
|-valign="top"<br />
|[[Image:IASC_logo_07_RGB.jpg|100 px]]<br />
|}<br />
<br />
===Organizing Committee===<br />
Christina Hulbe, Jesse Johnson, Cornelis van der Veen</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Eliot_Glacier_field_tripEliot Glacier field trip2009-08-06T18:22:55Z<p>Hoffman: </p>
<hr />
<div>[http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=eliot+glacier+mount+hood+oregon&sll=37.0625,-95.677068&sspn=52.68309,55.546875&ie=UTF8&t=h&z=13&iwloc=A Eliot Glacier] is a small northeast facing glacier on Mt. Hood with a debris-covered ablation zone. Andrew Fountain's research group at Portland State has been working on the glacier for many years. [http://web.pdx.edu/~basagic/ Hassan Basagic] will lead us on a walk to the glacier via Cooper Spur with assistance from Matt Hoffman and Adam Campbell on Monday the 10th. The following are some resources you may find helpful in preparing for the trip:<br />
<br />
* Spatial and morphological change on Eliot Glacier, Mount Hood, Oregon USA, Keith Jackson and Andrew Fountain, 2007, ''Annals of Glaciology'' [http://www.glaciers.pdx.edu/fountain/MyPapers/Jackson&Fountain2007_EliotGlacier.pdf pdf]<br />
<br />
* [http://glaciers.research.pdx.edu/assets/index.php?search_ass=eliot&search+assets=submit Photographs] in the PDX Glaciers image database (357 of them!).<br />
<br />
* Eliot Glacier change [http://geopulse.org/kjack/eliotphotos.php photos].<br />
<br />
* Historical glacier and climate fluctuations at Mount Hood, Oregon, Karl Lillquist and Karen Walker, 2006, ''AAAR'' [http://docs.google.com/gview?a=v&pid=gmail&attid=0.1&thid=1228a713faf39d87&mt=application%2Fpdf pdf]</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Dining_and_brewpub_suggestionsDining and brewpub suggestions2009-08-06T18:16:25Z<p>Hoffman: </p>
<hr />
<div>==worth the modest effort==<br />
* The New Old [http://www.newoldlompoc.com/lompochome.html Lompoc]<br />
* [http://www.rootsorganicbrewing.com/ Roots Brewing]<br />
* [http://www.deschutesbrewery.com/brewery/brew-pubs/portland-pub/default.aspx Deschutes Portland Pub]<br />
* [http://www.pdxgreendragon.com Green Dragon]<br />
* Lucky Labrador Brew Pub ([http://maps.google.com/maps?hl=en&um=1&ie=UTF-8&cid=0,0,9603365234564240590&fb=1&split=1&gl=us&dq=Lucky+Labrador+Brew+Pub+loc:+Portland,+OR&daddr=915+SE+Hawthorne+Blvd,+Portland,+OR+97214&geocode=14269711352390392557,45.512214,-122.656311&ei=_ZV4Su_8C4TWtgOY7I3qBA&sa=X&oi=local_result&ct=directions-to&resnum=1 map]) or Beer Hall ([http://maps.google.com/maps?hl=en&um=1&ie=UTF-8&cid=0,0,8661628585978236297&fb=1&split=1&gl=us&dq=Lucky+Labrador+Beer+Hall+loc:+Portland,+OR&daddr=1945+NW+Quimby+St,+Portland,+OR+97209-1712&geocode=10912875213551511075,45.533498,-122.691911&ei=hZZ4StPYD4SCsgPd-5D0BA&sa=X&oi=local_result&ct=directions-to&resnum=1 map])<br />
* [http://widmer.com/gasthaus.aspx Widmer Gasthaus]<br />
<br />
* Mio Sushi ([http://maps.google.com/maps?hl=en&um=1&ie=UTF-8&f=d&iwstate1=dir:to&daddr=1317+NW+Hoyt+St+Portland,+OR+97209&fb=1&geocode=7502738093056991154,45.527271,-122.684193&ei=Tph4SuKIFIKKsgO2stHZBA&sa=X&oi=manybox&resnum=1&ct=17 map])<br />
* [http://www.sungaripearl.com/ Sungari Pearl]<br />
* [http://www.swagat.com/ Swagat]<br />
* [http://www.karamrestaurant.com/ Karam] Reasonably priced Lebanese sit-down, rarely crowded ([http://maps.google.com/maps?oe=utf-8&rls=com.ubuntu:en-US:unofficial&client=firefox-a&um=1&ie=UTF-8&cid=0,0,10872995562513346076&fb=1&split=1&gl=us&dq=karam+portland&daddr=316+SW+Stark+St,+Portland,+OR+97204&geocode=12627350705933145637,45.520304,-122.674627&ei=1R17Sqa6BYicsgPz0I3vCg&sa=X&oi=local_result&ct=directions-to&resnum=1 map]).<br />
<br />
==close to campus==<br />
* Pho Thanh Long ([http://www.urbanspoon.com/r/24/282826/restaurant/Downtown/Pho-Thanh-Long-Portland map])<br />
* [http://paccinirestaurant.com/ Paccini]<br />
* Market Street Pub ([http://maps.google.com/maps?hl=en&um=1&ie=UTF-8&cid=0,0,3766099773440613361&fb=1&split=1&gl=us&dq=mcmenamins+market+street+pub+loc:+Portland,+OR&daddr=1526+SW+10th+Ave,+Portland,+OR+97201&geocode=10751522671004167164,45.514499,-122.685073&ei=AJt4SvXYKIfOsQPj-MncBA&sa=X&oi=local_result&ct=directions-to&resnum=1 map])<br />
* [http://www.southparkseafood.com/ Southpark]<br />
* [http://foodcartsportland.com/category/location/downtown-location/sw-4th-and-hall-psu/ 4th Avenue and Hall Street Food Carts] Lots of cheap carts, weekday lunch only.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/PDX_afterhoursPDX afterhours2009-08-06T17:44:20Z<p>Hoffman: </p>
<hr />
<div>==tuesday Aug 4, 2009==<br />
* meet at [http://paccinirestaurant.com/ Paccini's] pub at 7:30<br />
<br />
==wednesday Aug 5, 2009==<br />
[[Image:Erin_adam.JPG|thumb|right|300px|The PDX Afterhours king & queen in Tube Bar, home of Wednesday night $1 Miller High Life.]]<br />
* after FORTRAN session, leave PSU to go to [http://maps.google.com/maps?hl=en&client=firefox-a&rls=com.ubuntu:en-US:unofficial&hs=zuU&um=1&ie=UTF-8&q=sushi+ichiban+portland&fb=1&split=1&gl=us&view=text&latlng=899833397715679289 Sushi Ichiban]. Adam Campbell will guide you.<br />
* go to [http://www.groundkontrol.com Ground Kontrol] a retro arcade with beer<br />
* go to [http://www.voodoodoughnut.com Voodoo Doughnut], please someone buy Ian Rutt the $5 doughnut.<br />
<br />
<br />
==thursday Aug 6, 2009==<br />
[http://amontobin.com/field/ Amon Tobin] and [http://www.pitchblack.co.nz/?s1=index Pitch Black], along with two opening bands, are playing at the Roseland Theatre (8 NW 6th Ave) Thursday night (starting at 9:00 pm or thereabouts). Tickets are $26 (available online at [http://ticketswest.rdln.com/Venue.aspx?ven=ROS TicketsWest]). The music is best described as sampled electronica (Amon Tobin) and Kiwi-style dub (Pitch Black). I'd expect a late night of electronic music: an evening nap may be in order! Jeremy's already got his ticket and can fill you in with more, including a sample of the music.<br />
<br />
*other local music recommendations from Adam<br />
<br />
'''Boy Eats Drum Machine, French Miami, Southern Belle and Electric Opera Company''' - Indie Rock -<br />
Thu., Aug. 6, 9 p.m.<br />
$6-8<br />
Berbati's Pan<br />
10 SW 3rd Ave.<br />
Downtown<br />
<br />
'''Nurses, Inside Voices and Slaves''' - Indie Rock -<br />
Thu., Aug. 6, 8:30 p.m.<br />
$7<br />
Holocene <br />
1001 SE Morrison<br />
Southeast<br />
<br />
==friday Aug 7, 2009==<br />
[http://www.biteoforegon.com/ The Bite of Oregon] is a food festival that takes place at Tom McCall Waterfront Park. Featuring food, wine, beer and entertainment from Oregon. Entry is $8, food and beverages are extra.<br />
<br />
[http://www.pioneercourthousesquare.org/calendar_august.htm Flicks on the Bricks] will be showing Jurassic Park at dusk outside at Pioneer Courthouse Square, FREE (including popcorn). (10 minute walk)<br />
<br />
==saturday Aug 8, 2009==<br />
[http://www.biteoforegon.com/ The Bite of Oregon] is a food festival that takes place at Tom McCall Waterfront Park. Featuring food, wine, beer and entertainment from Oregon. Entry is $8, food and beverages are extra.<br />
<br />
==sunday Aug 9, 2009==<br />
The [http://providence.org/bridgepedal/ Portland Bridge Pedal] is a fun event where you can go on a 14, 24, or 37 mi. bike ride over Portland's bridges. Adam is trying to assemble a group to go on Sunday morning; please speak with him if you are interested in going. I can tentatively get hold of 4 bikes.<br />
<br />
[http://www.biteoforegon.com/ The Bite of Oregon] is a food festival that takes place at Tom McCall Waterfront Park. Featuring food, wine, beer and entertainment from Oregon. Entry is $8, food and beverages are extra.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Team_6_SolutionTeam 6 Solution2009-08-06T15:16:45Z<p>Hoffman: </p>
<hr />
<div><source lang="fortran"><br />
<br />
!> 1D Convection Diffusion equations solver in Fortran<br />
!!<br />
!! Solves the equation:<br />
!!<br />
!!\f[<br />
!!\frac{du}{dt}=\frac{d}{dx}D(x)\frac{du}{dx} + C(x)\frac{du}{dx}+F(x)u-S(x)<br />
!!\f]<br />
!! for \f$u\f$, given functions for \f$D\f$, \f$C\f$, \f$F\f$, and \f$S\f$, defined in this program<br />
!!<br />
!! Explicit methods are used<br />
!!<br />
!! \author Matt & Erin & Sophie (jvj)<br />
!! \date 8-5-09<br />
<br />
program OurCode<br />
<br />
<br />
implicit none<br />
<br />
! local variables<br />
<br />
integer :: nx ! Number of nodes<br />
real, parameter :: dt = 1 ! length time step (years)<br />
integer, parameter :: nt = 1000 ! number of time steps<br />
integer :: t ! current time step<br />
real :: xl ! start of domain<br />
real :: xr ! end of domain (m)<br />
real :: Const ! (2*A)*(rho*grav)^n/(n+2)<br />
real, parameter :: dx = 1000 ! node spacing (m)<br />
real, parameter :: dbdx = -0.0 ! bedslope (m/m)<br />
real, parameter :: g = 9.8 ! gravity (m/s2)<br />
real, parameter :: rho = 917 ! density of ice (kg/m3)<br />
real, parameter :: A = 1e-16 ! Glen rate factor (Pa-3 a-1)<br />
real, parameter :: n = 3 ! Glen Flow Exponent (unitless)<br />
real, parameter :: M0 = 4.0 ! m/yr<br />
real, parameter :: M1 = 2.0/10000.0 ! m/yr/m<br />
<br />
real, dimension(:), allocatable :: elev ! surface elevation (m)<br />
real, dimension(:), allocatable :: bedelev ! bed elevation (m), y origin is at the bed elev in the left of the domain. up is up!<br />
real, dimension(:), allocatable :: H ! thicknesss (m)<br />
real, dimension(:), allocatable :: Mb ! Mass Balance (m/yr)<br />
real, dimension(:), allocatable :: dhdt_store ! space to store du/dx<br />
real, dimension(:), allocatable :: xref ! refence distance<br />
<br />
real, dimension(:), allocatable :: d ! diffusivity coeff<br />
real, dimension(:), allocatable :: mflux ! mass flux between grid points<br />
<br />
integer :: ii ! a counter<br />
integer :: jj <br />
integer :: errstat ! for error checking<br />
<br />
<br />
! Set up grid <br />
! Space<br />
xl = 0.0<br />
xr = 60000.0 !(m) our guess at how large of a domain we need (started with 60km)<br />
nx = int( ((xr - xl) / dx) +1 )<br />
<br />
! let's allocate some memory<br />
allocate(elev(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate elev")<br />
<br />
allocate(xref(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate xref")<br />
<br />
allocate(bedelev(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate bedelev")<br />
<br />
allocate(H(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate H")<br />
<br />
allocate(Mb(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate Mb")<br />
<br />
allocate(d(nx-1),stat=errstat)<br />
call checkerr(errstat,"failed to allocate d")<br />
<br />
allocate(mflux(nx-1),stat=errstat)<br />
call checkerr(errstat,"failed to allocate mflux")<br />
<br />
allocate(dhdt_store(nx),stat=errstat)<br />
call checkerr(errstat,"failed to allocate dhdt_store")<br />
<br />
<br />
!We could've done this as a function, Matt and Erin revolted vs. Sophie. Said No.<br />
do ii=1,nx <br />
bedelev(ii) = dx*dbdx*real(ii-1) !reminder, at x = 0, bedelev = 0 m<br />
Mb(ii) = M0 - M1*(dx*real(ii-1)) !Mass balance equation (m/yr)<br />
xref(ii)= real(ii-1)*dx<br />
enddo<br />
<br />
!Constant C<br />
Const = (2.*A)*(rho*g)**n/real(n+2) <br />
<br />
!Initial conditions<br />
H = 0.0 !set thickness everywhere in x as 0.0 m<br />
elev = bedelev + H !No glacier yet<br />
<br />
time_loop: do t=1,nt<br />
<br />
spatial_midpoint_loop: do ii=1,nx-1 <br />
!Calculate flux midway between elevation points<br />
mflux(ii) = Const * ((H(ii)+H(ii+1))/2.0)**(n+2) * ((elev(ii+1)-elev(ii))/(dx))**n<br />
enddo spatial_midpoint_loop<br />
<br />
H(1) = 0.0 ! left boundary condition (this could be moved above time_loop)<br />
<br />
spatial_gridpoint_loop: do jj=2,nx-1 <br />
dhdt_store(jj) = (mflux(jj)-mflux(jj-1))/dx + Mb(jj) !divergence of flux between midpoints jj-1/2 and jj+1/2<br />
H(jj) = H(jj) + dhdt_store(jj) * dt !new thickness<br />
!search for terminus location... <br />
if (H(jj)<0) then<br />
H(jj) = 0<br />
!write (*,*) jj, (jj-1)*dx<br />
endif<br />
enddo spatial_gridpoint_loop<br />
<br />
<br />
!to update 'elev'<br />
elev = bedelev + H<br />
<br />
!output geometry (also need to output geom at t=0 ?)<br />
write (*,*) H<br />
end do time_loop<br />
<br />
contains <br />
<br />
subroutine checkerr(errstat,msg)<br />
implicit none<br />
integer, intent(in) :: errstat<br />
character(*), intent(in) :: msg <br />
if (errstat /= 0) then<br />
write(*,*) "ERROR:", msg<br />
stop<br />
end if<br />
end subroutine checkerr<br />
<br />
<br />
end program OurCode<br />
<br />
</source></div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Kees%27_assignmentKees' assignment2009-08-05T21:31:40Z<p>Hoffman: </p>
<hr />
<div>==Model equation==<br />
:<math>\frac{\partial H}{\partial t} = - \frac{\partial}{\partial x}\left(-D(x) \frac{\partial h}{\partial x}\right) + M</math><br />
<br />
where<br />
<br />
:<math>D(x) = C H^{n+2}\left|\frac{\partial h}{\partial x}\right|^{n-1},</math><br />
<br />
and <br />
<br />
:<math>C = \frac{2 A}{n+2} \left(\rho g\right)^n</math><br />
<br />
==Model parameters==<br />
*<math>\frac{\partial b}{\partial x} = -0.1</math><br />
* <math>M(x) = M_0 - x M_1 = 4.0 - 0.2\, x</math> m/yr (<math>x</math> in km)<br />
* <math>\rho</math> = 920 <math>kg/m^3</math><br />
*g=9.8 <math>m/s^2</math><br />
*A = 1e-16 <math> Pa^{-3} a^{-1}</math><br />
* n=3<br />
* dx=1.0 km<br />
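As a quick sanity check on magnitudes (an illustration, not part of the assignment): with these parameters, <math>\rho g = 920 \times 9.8 \approx 9.0 \times 10^{3}</math> Pa/m, so

:<math>C = \frac{2 \times 10^{-16}}{3+2}\left(9.0\times 10^{3}\right)^{3} \approx 2.9\times 10^{-5}\ \mathrm{m^{-3}\,a^{-1}},</math>

and the diffusivity <math>D = C H^{n+2}\left|\partial h/\partial x\right|^{n-1}</math> then comes out in <math>\mathrm{m^{2}\,a^{-1}}</math> when <math>H</math> is in metres and the surface slope is dimensionless.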
<br />
==Boundary conditions==<br />
<br />
* <math>H_l = 0 </math> (left boundary)<br />
<br />
* <math> H_r>0</math> (right boundary)<br />
<br />
==Numerical tips==<br />
<br />
Use a staggered grid such that the <math>D(x_{j+1/2})</math> are computed at the '''centers''' of the grid (as opposed to the vertices, as we have been doing), so<br />
<br />
:<math>D(x_{j+1/2}) = C \left(\frac{H_j + H_{j+1}}{2}\right)^{n+2} \left|\frac{h_{j+1} - h_j}{\Delta x}\right|^{n-1}.</math><br />
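Putting the recipe in this section together (staggered diffusivity, flux between grid points, explicit update), here is a hedged NumPy sketch; the 60 km domain, the time step, and the 50-year run length are illustrative choices of ours, not part of the assignment. Note that the flux carries a minus sign, <math>\phi_{j+1/2} = -D(x_{j+1/2})\,\partial h/\partial x</math>, so that ice flows down the surface gradient.

```python
import numpy as np

# Hedged sketch of the staggered-grid scheme (NumPy rather than Fortran;
# domain size, dt and run length are illustrative, not assigned values).
A, n, rho, g = 1e-16, 3, 920.0, 9.8        # Pa^-3 a^-1, -, kg m^-3, m s^-2
C = 2.0 * A / (n + 2) * (rho * g) ** n     # ~2.9e-5 m^-3 a^-1
dx, nx, dt, nt = 1000.0, 61, 0.5, 100      # 1 km spacing, 50 model years
x = np.arange(nx) * dx
b = -0.1 * x                               # bed elevation from db/dx = -0.1
Mb = 4.0 - 2.0e-4 * x                      # mass balance, m/yr
H = np.zeros(nx)                           # start with no ice

for _ in range(nt):
    h = b + H                                            # surface elevation
    Hmid = 0.5 * (H[:-1] + H[1:])                        # thickness at midpoints j+1/2
    dhdx = np.diff(h) / dx                               # surface slope at midpoints
    D = C * Hmid ** (n + 2) * np.abs(dhdx) ** (n - 1)    # staggered diffusivity
    phi = -D * dhdx                                      # flux at midpoints
    H[1:-1] += dt * (-(phi[1:] - phi[:-1]) / dx + Mb[1:-1])
    H[0] = 0.0                                           # left boundary condition
    H = np.maximum(H, 0.0)                               # no negative thickness

print(round(H.max(), 1))                                 # peak thickness (m) after 50 years
```

The explicit update is only conditionally stable, so if you enlarge the time step or run much longer (the front near the margin steepens as the ice thickens), expect to have to shrink <math>\Delta t</math>.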
<br />
From the diffusivity, the flux is computed<br />
<br />
:<math>\phi_{j+1/2} = -D(x_{j+1/2}) \frac{\partial h}{\partial x}</math>,<br />
<br />
where <br />
<br />
:<math>\frac{\partial h}{\partial x} = \frac{h_{j+1}-h_{j}}{\Delta x}</math><br />
<br />
and then the flux (<math>\phi_{j \pm 1/2}</math>) can be used to compute the rate of change of the surface from<br />
<br />
:<math>\frac{\partial H}{\partial t} = -\frac{\partial }{\partial x} \left(H\bar u\right) + M = - \frac{\phi_{j+1/2} - \phi_{j-1/2} }{\Delta x} + M</math></div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Finite_differencing_IFinite differencing I2009-08-04T19:00:24Z<p>Hoffman: /* Exercise */</p>
<hr />
<div>==Overview==<br />
<br />
In one dimension, the general form of the [http://en.wikipedia.org/wiki/Convection_diffusion_equation convection diffusion equation] is<br />
<br />
:<math>\frac{\partial u(x,t) }{\partial t} - \frac{\partial}{\partial x} D(x) \frac{\partial}{\partial x} u(x,t) - C(x)\frac{\partial}{\partial x} u(x,t) = S(x,t), </math><br />
<br />
where <math>u</math> is a general variable, <math>D</math> is a spatially-varying diffusivity, <math>C</math> is a spatially-varying convection rate, and <math>S</math> is a source term. The second term on the left represents diffusion of a solute or other material property, and the third term represents convection.<br />
<br />
This equation can be used to model a wide range of phenomena, including the distribution of temperatures (or energy conservation) in an ice sheet. It also bears similarity to the equations expressing conservation of momentum, and analysis of the numerical solutions to this equation is representative of the analysis of numerous numerical treatments in computational fluid dynamics. For these reasons, we will make convection diffusion our first "model problem", a problem to solve in order to strengthen intuition.<br />
<br />
We will take a step-wise approach to solving this equation, first solving for the diffusion, or parabolic equation, then solving for the convection portion. Finally, we will solve the complete equation. Through this process, we will be looking at the stability of the numerical methods used to solve the equation. <br />
<br />
You should think of this as a starting point for both your learning to program, as well as your learning to solve PDEs with programs.<br />
<br />
==Diffusion and explicit solution==<br />
First, we will solve a simplified version of the equation ''explicitly''. "Explicit" here refers to what the differentiation operators are applied to: in this case they are applied directly to the solution at the present time step in order to determine the solution at the next time step.<br />
<br />
[[Image:Explicit_method-stencil.png|left|thumb|500 px|The Stencil for the most common explicit method for the parabolic equation.]]<br />
<br />
To better understand, apply the idea to what is called the parabolic, diffusion, or sometimes heat equation. In terms of the convection diffusion equation, this corresponds to <math>D(x) = 1</math>, <math>C(x) = 0</math> and <math>S(x,t) = 0</math>:<br />
<br />
:<math> \frac{\partial u(x,t) }{\partial t} = \frac{\partial ^2 u(x,t)}{\partial x^2},</math><br />
<br />
The finite difference approximation of the equation is<br />
<br />
:<math> \frac{u(x,t+\Delta t) - u(x,t)}{\Delta t} = \frac{u(x+\Delta x,t) - 2u(x,t) + u(x-\Delta x,t)}{\Delta x^2}.</math><br />
<br />
Both derivative approximations are known from the previous lesson: one is the 'forward Euler' approximation of the time derivative, and the other is the second-order accurate, centered second derivative.<br />
<br />
The equation is then algebraically solved for <math> u(x,t+\Delta t)</math>:<br />
<br />
:<math>u(x,t + \Delta t) = u(x,t) + \Delta t \frac{u(x+\Delta x,t) - 2u(x,t) +<br />
u(x-\Delta x,t)}{\Delta x^2}</math><br />
<br />
There you have it, a way to compute the future, using the present. Your first task will be to change this into an algorithm.<br />
<br />
One final note: the stencil at the left makes great sense, and understanding it will help make other algorithms clear. However, to understand it we must modify the notation a little. Let superscripts <math>n</math> refer to time and subscripts <math>j</math> refer to space. The previous equation becomes<br />
<br />
: <math>u_j^{n+1} = u_j^n + \frac{\Delta t}{\Delta x ^2} \left( u_{j-1}^{n} - 2 u_j^{n} + u_{j+1}^{n} \right).</math><br />
<br />
Make sure you recognize how this corresponds with the diagram of the explicit stencil.<br />
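To see the stencil doing what it should, here is a small NumPy check (an illustration; the grid sizes are arbitrary choices): apply one explicit update to <math>u(x,0) = \sin(\pi x)</math> and compare with the exact decayed solution <math>e^{-\pi^2 \Delta t}\sin(\pi x)</math>.

```python
import numpy as np

# One explicit update u^{n+1}_j = u^n_j + (dt/dx^2)(u_{j-1} - 2 u_j + u_{j+1}),
# applied to u(x,0) = sin(pi x); dx and dt here are arbitrary illustrations.
dx, dt = 0.01, 1e-5
x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)

u_new = u.copy()
u_new[1:-1] = u[1:-1] + dt / dx**2 * (u[:-2] - 2.0 * u[1:-1] + u[2:])

# Exact solution of the heat equation after one time step
exact = np.exp(-np.pi**2 * dt) * np.sin(np.pi * x)
err = np.abs(u_new - exact).max()
print(err)   # small: one step of the stencil tracks the exact decay
```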
<br />
==Numerical solution==<br />
One can see from the above equation that one way to numerically solve the parabolic equation is to use stencils or operators for computing second derivatives. Once the derivative is computed, finding the solution corresponding to the next time step, <math>u(x,t+\Delta t)</math> is just a matter of multiplying the derivative by <math>\Delta t</math> and adding <math> u(x,t)</math>. <br />
<br />
In pseudocode, the solution looks something like<br />
<br />
<source lang=text><br />
<br />
Initialize variables<br />
<br />
loop t over time:<br />
loop i over space:<br />
u(t,i) = u(t-1,i) + delta_t * (u(t-1,i-1) - 2*u(t-1,i) + u(t-1,i+1)) / delta_x**2<br />
store solution as needed<br />
end loop over space<br />
end loop over time<br />
<br />
</source><br />
<br />
Note that in the pseudocode the final result is a T by N array (T is time steps, N space points). This is typical: the data structures used to store output are often as complex as the algorithm. Sometimes more. <br />
<br />
You'll need to do this in fortran 90, and you'll find Gethin's [[Pragmatic Programming]] very helpful. Plotting is also a key to understanding simulation output. Again, Gethin's got what you need, although you'll also find the [http://matplotlib.sourceforge.net/ matplotlib] documentation helpful (should be familiar if you've used Matlab).<br />
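For orientation before you write the Fortran, the pseudocode above translates almost line for line into NumPy (an illustrative sketch only; the array sizes and time step are arbitrary choices of ours):

```python
import numpy as np

# Direct translation of the pseudocode: march the explicit update in time and
# store every time level, giving a T-by-N array (T time steps, N space points).
N, T = 51, 200
dx = 1.0 / (N - 1)
dt = 0.4 * dx**2        # safely below the dx**2 / 2 explicit stability limit
x = np.linspace(0.0, 1.0, N)

U = np.zeros((T, N))
U[0] = np.sin(np.pi * x)          # initial condition; boundaries stay zero

for t in range(1, T):             # loop t over time
    # vectorized "loop i over space": the explicit stencil on interior points
    U[t, 1:-1] = U[t-1, 1:-1] + dt / dx**2 * (
        U[t-1, :-2] - 2.0 * U[t-1, 1:-1] + U[t-1, 2:])

print(U.shape)                    # (200, 51): the stored solution history
```

Each row of <code>U</code> is one time level, which is exactly the layout the plotting script below expects after you write the array to a file.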
<br />
===Plotting results===<br />
The above line <br />
<source lang=text><br />
store solution as needed<br />
</source><br />
<br />
is perhaps frustratingly vague. Gethin's [http://source.ggy.bris.ac.uk/wiki/Fortran1 Fortran1: Fortran for beginners] discusses input/output (IO), so the writing should not be a problem. Even<br />
<br />
<source lang=fortran><br />
write(*,*) u<br />
</source><br />
<br />
will suffice, if you can redirect program output to a file. Assuming your program is called '''cd_prg''', this is done with<br />
<br />
<source lang=bash><br />
./cd_prg > data<br />
</source><br />
<br />
where the '''>''' operator replaces whatever was previously in the file '''data''' ('''>>''' would append to it instead).<br />
<br />
As for reading it into Python and plotting it, the main challenges are reading in the data and animating the time series data. Here is some code to do that<br />
<br />
<source lang=python><br />
#!/usr/bin/env python<br />
<br />
# Import only what is needed<br />
from numpy import loadtxt, shape, linspace<br />
from pylab import plot, clf, ion<br />
<br />
# Load the data file, called 'data'<br />
d = loadtxt('data')<br />
<br />
# Determine how much data came in<br />
dims = shape(d)<br />
<br />
clf() # Clears the figure<br />
ion() # Interactive plot mode, critical for animation<br />
<br />
# x data; note that this must correspond to the program's domain<br />
x = linspace(0, 1, dims[1]) <br />
<br />
# Initial plot, very Matlab(ish). Note the returned plot handle, which allows<br />
# the plot to be altered elsewhere in the code.<br />
ph, = plot(x, d[0,:], 'k') <br />
ph.figure.show() # matplotlib requires show to be called<br />
<br />
# Loop to plot each time step<br />
for i in range(1, dims[0]):<br />
    ph.set_ydata(d[i,:]) # Only update the y data (faster than replotting)<br />
    ph.figure.show()<br />
</source><br />
<br />
==Exercise==<br />
#Using the algorithm for the 'explicit' method, find a numerical solution to this heat conduction problem:<br />
<br />
:<math> \frac{\partial u(x,t) }{\partial t} = \frac{\partial ^2 u(x,t)}{\partial x^2},</math><br />
:<math> u(x,0) = \sin(\pi x)</math><br />
:<math> u(0,t) = u(1,t) = 0</math><br />
<br />
Use <math>\Delta x</math> = 0.1, <math>\Delta t</math> = 0.005125, and <math>t_{end}</math> = 1.025. Compare the computed solution to the exact solution <math> u(x,t) = \exp(-\pi^2 t) \sin(\pi x)</math>. Repeat the experiment with <math>\Delta t</math> = 0.006 and <math>t_{end}</math> = 1.026.<br />
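The course asks for a Fortran 90 implementation, but purely to illustrate the algorithm, here is a minimal sketch of the explicit method in Python (using numpy; the grid, time steps, and parameter values are taken from the exercise above):<br />

```python
# Illustrative sketch only -- the exercise itself asks for Fortran 90.
# Explicit (forward-time, centered-space) solution of u_t = u_xx.
import numpy as np

def explicit_heat(dx=0.1, dt=0.005125, t_end=1.025):
    x = np.arange(0.0, 1.0 + dx / 2, dx)      # grid on [0, 1]
    u = np.sin(np.pi * x)                     # initial condition u(x,0)
    r = dt / dx**2                            # stencil coefficient
    for _ in range(int(round(t_end / dt))):
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + r * (u[:-2] - 2 * u[1:-1] + u[2:])
        u_new[0] = u_new[-1] = 0.0            # boundary conditions
        u = u_new
    return x, u

x, u = explicit_heat()                          # dt = 0.005125 behaves well
exact = np.exp(-np.pi**2 * 1.025) * np.sin(np.pi * x)
x2, u2 = explicit_heat(dt=0.006, t_end=1.026)   # dt = 0.006 blows up
print(np.max(np.abs(u - exact)), np.max(np.abs(u2)))
```

The first run tracks the exact solution closely; the second, with only a slightly larger time step, is swamped by growing oscillations. The stability analysis later in this unit explains this kind of behavior.<br />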
====[[Group one, parabolic, explicit]]====<br />
====[[Group six, parabolic, explicit]]====<br />
<br />
==Convection and numerical stability==<br />
For this unit consider the first-order hyperbolic PDE<br />
:<math> \frac{\partial u}{\partial t} = - v \frac{\partial u}{\partial x}.</math><br />
<br />
Mathematically, this equation says that a quantity <math>u(x,t)</math> exists on some grid and is carried along by a wind with velocity <math>v</math>. Before applying finite difference operators, we again let superscripts (<math>u^n</math>) refer to time and subscripts (<math>u_j</math>) refer to space.<br />
<br />
Continuing to work with explicit schemes, the machinery of discretization allows us to quickly move to the form<br />
:<math> \frac{u_j^{n+1} - u_j^n}{\Delta t} = -v \left( \frac{u^n_{j+1} - u_{j-1}^n}{2\Delta x} \right ) </math><br />
<br />
and solve to give a recurrence relation<br />
<br />
:<math> u_j^{n+1} =u_j^n - \frac{v \Delta t}{2\Delta x} \left(u^n_{j+1} - u_{j-1}^n \right ) </math><br />
<br />
===von Neumann Stability Analysis===<br />
Before implementing this, consider the stability of the solutions by assuming a very generic form of solution<br />
:<math> u_j^n = A(k)^n e^{ijk}</math>.<br />
<br />
You may recall a course in differential equations where you spent the better part of a semester making similar substitutions into equations to find solutions. This [[Wikipedia:Euler's formula | complex exponential]] is the Swiss Army knife of functions, and satisfies many equations.<br />
<br />
In our assumed solution the amplitude is <math>A(k)^n</math> (exponentiated to higher powers with time) and the ''wave number'' is <math>k = \frac{2 \pi}{\lambda}</math>. Said in words, we assume that the solution will be oscillatory (recall <math> e^{ikx} = \cos(kx) + i \sin(kx)</math>) and that the solution's amplitude will depend on the frequency <math>k</math>. In our discrete case <math>j</math> serves as a proxy for space, <math>x</math>. Substituting this solution into the recurrence relation gives<br />
<br />
:<math> A^{n+1} e^{ijk} = A^{n} e^{ijk} - \frac{v \Delta t}{2\Delta x} \left( A^{n} e^{i(j+1)k} - A^{n} e^{i(j-1)k} \right ) </math><br />
<br />
Dividing through by <math> A^n e^{ijk}</math> and using <math>e^{ik} - e^{-ik} = 2i \sin k</math> gives<br />
<br />
:<math> A = 1 - \frac{v \Delta t}{2\Delta x} (e^{ik} - e^{-ik}) = 1 - i\frac{v \Delta t}{\Delta x} \sin k</math><br />
<br />
''What does that mean?'' Here <math>|A|^2 = 1 + \left(\frac{v \Delta t}{\Delta x}\right)^2 \sin^2 k > 1</math> whenever <math>\sin k \neq 0</math>. If <math>|A|^2>1</math>, the solution grows without bound in time, because each time step applies a higher exponent to <math> A(k)^n</math>. So this scheme is unstable for every choice of time step and space step. Bummer.<br />
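The instability is easy to demonstrate. The sketch below (an illustration only, using numpy and a periodic grid for simplicity) advects a smooth sine wave with the scheme above; the exact solution merely translates, so any growth in the maximum is purely numerical:<br />

```python
# Illustrative sketch: the FTCS scheme above applied to u_t = -v u_x
# on a periodic grid. The exact solution translates without changing
# shape, so growth in max|u| is numerical instability.
import numpy as np

N, v = 100, 1.0
dx = 1.0 / N
dt = 0.5 * dx / v                    # Courant number v*dt/dx = 0.5
x = np.arange(N) * dx
u = np.sin(2 * np.pi * x)            # smooth initial condition, max = 1

c = v * dt / (2 * dx)
for _ in range(200):
    u = u - c * (np.roll(u, -1) - np.roll(u, 1))   # u_{j+1} and u_{j-1}

print(np.max(np.abs(u)))             # grows past 1, as |A| > 1 predicts
```

Even though the Courant number is a comfortable 0.5, the amplitude grows every step, just as <math>|A|^2 > 1</math> predicts.<br />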
<br />
===Getting stability===<br />
Now, let's try that again, and when discretizing use what will become a favorite trick: average, or smear, the values of the function. The new discretization is<br />
<br />
:<math> u_j^{n+1} = \frac{1}{2} (u_{j+1}^n + u_{j-1}^n) - \frac{v \Delta t}{2\Delta x} \left(u^n_{j+1} - u_{j-1}^n \right ), </math><br />
<br />
this is called the ''Lax method''. Now consider stability in the same way. Substituting the assumed solution and dividing through by <math>A^n e^{ijk}</math> gives the amplitude<br />
:<math> A = \frac{1}{2}\left(e^{ik} + e^{-ik}\right) - \frac{v \Delta t}{2\Delta x} \left(e^{ik} - e^{-ik}\right) = \cos k - i \frac{v \Delta t}{\Delta x} \sin k. </math><br />
Requiring <br />
:<math> |A|^2 = \cos^2 k + \left(\frac{v \Delta t}{\Delta x}\right)^2 \sin^2 k \leq 1, </math><br />
to avoid unbounded growth, yields<br />
:<math> \frac{|v|\Delta t}{\Delta x} \leq 1.</math><br />
<br />
This is called the ''[[Wikipedia:Courant–Friedrichs–Lewy condition |Courant-Friedrichs-Lewy]]'' (CFL) stability criterion. It states that information propagates across the grid at a velocity <math> \frac{\Delta x}{\Delta t} </math>, and that the physical velocity in the system must not exceed it (which would push the ratio above one). For such a thing to happen would be completely unphysical. Consider what happens when an object exceeds the velocity of waves in the medium that carries it: a sonic boom. This is a "numerical boom".<br />
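To see the criterion in action, here is an illustrative sketch (numpy, with a periodic grid for simplicity) that runs the Lax method once with the CFL condition satisfied and once with it violated:<br />

```python
# Illustrative sketch: the Lax method on a periodic grid, once with the
# CFL condition satisfied and once with it violated. v = 1, dx = 0.05,
# so dt = courant * dx and each run covers about 4 seconds.
import numpy as np

def lax(courant, steps, N=200):
    x = np.linspace(0.0, 10.0, N, endpoint=False)
    u = np.where((x >= 4.5) & (x <= 5.5), 1.0, 0.0)   # square wave
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)        # u_{j+1}, u_{j-1}
        u = 0.5 * (up + um) - 0.5 * courant * (up - um)
    return u

stable = lax(courant=0.8, steps=100)    # dt = 0.04,  t_end = 4.0
unstable = lax(courant=1.5, steps=53)   # dt = 0.075, t_end ~ 4.0
print(np.max(np.abs(stable)), np.max(np.abs(unstable)))
```

With the Courant number at 0.8, each new value is a convex combination of its old neighbours, so the solution stays bounded (though visibly smeared); at 1.5 the weights turn negative and the square wave's high-wavenumber content is amplified every step. On a periodic grid the scheme also conserves the sum of <math>u</math> exactly, which is worth tracking in the exercise below.<br />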
<br />
==Exercises==<br />
#Implement the Lax method for a linear system. Is this method explicit or implicit? Use a 10 unit domain and begin with a height 1.0 square wave between 4.5 <math>\leq x \leq</math> 5.5. Fix the ends at 0, and let v be 1.0. Also track the sum of the solution before and after the simulation. End the simulation after 4 seconds. Report the behavior with and without the CFL condition being satisfied. If the CFL number is very small, do things improve? What about when it's just under 1.0? What's going on here? Try subtracting <math>u^n_j</math> from both sides of the discretization and inspect for differences between the original discretization and Lax. See an extra term?<br />
#Try the ''leapfrog method'' for discretization<br />
:<math>u^{n+1}_j = u^{n-1}_j - \frac{v\Delta t}{\Delta x} (u^n_{j+1} - u^n_{j-1}).</math><br />
:Does this improve the numerical diffusion?<br />
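As a hint of what to expect, here is an illustrative sketch (numpy; a periodic grid and a Gaussian pulse rather than the square wave above, to keep the comparison clean) that advects the pulse one full period with the Lax method and with leapfrog, bootstrapping leapfrog with a single Lax step:<br />

```python
# Illustrative sketch comparing numerical diffusion: advect a Gaussian
# one full period with the Lax method and with leapfrog, compare peaks.
import numpy as np

N, courant, steps = 200, 0.5, 400      # v = 1, dx = 1/N, dt = courant*dx
x = np.arange(N) / N                   # periodic unit domain
u0 = np.exp(-((x - 0.5) / 0.1) ** 2)   # Gaussian pulse, peak 1.0

def lax_step(u, c):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * c * (up - um)

u_lax = u0.copy()                      # Lax: dissipative
for _ in range(steps):
    u_lax = lax_step(u_lax, courant)

u_prev, u_leap = u0.copy(), lax_step(u0, courant)  # leapfrog needs 2 levels
for _ in range(steps - 1):
    up, um = np.roll(u_leap, -1), np.roll(u_leap, 1)
    u_prev, u_leap = u_leap, u_prev - courant * (up - um)

print(np.max(u_lax), np.max(u_leap))   # leapfrog keeps the peak far better
```

After one full trip around the domain the Lax peak has been noticeably smeared away, while leapfrog (non-dissipative for Courant numbers up to one) returns the pulse nearly intact, at the cost of small dispersive wiggles.<br />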
<br />
==Final program and exercise==<br />
Combine the discretization schemes for diffusion and convection to solve the convection-diffusion equation explicitly, with finite differences. Consider a non-dimensional form of the equation,<br />
<br />
:<math>\frac{\partial \phi}{\partial t} + u \frac{\partial \phi}{\partial x} - \frac{\partial }{\partial x}\left(\mathrm{Pe}^{-1} \frac{\partial \phi}{\partial x}\right) = q. </math><br />
<br />
Here <math>\mathrm{Pe}^{-1} </math> is the inverse of the [[Wikipedia:Peclet number|Peclet number]], the ratio of the velocity scale <math>U</math> times the length scale <math>L</math> to the diffusivity <math>D</math>, <br />
:<math> \mathrm{Pe} = \frac{UL}{D}.</math><br />
*On a unit domain, specify <math>\phi(t,0)</math> = 0.2, <math>\phi(t,1)</math> = 1, and Pe = 10.<br />
*Compare the numerical solution to the analytic solution of the equation,<br />
:<math> c_a(x) = a + (b-a)\,\frac{\exp((x-1)\mathrm{Pe}) - \exp(-\mathrm{Pe})}{1-\exp(-\mathrm{Pe})}, </math><br />
:where <math>a</math> and <math>b</math> are the boundary values at <math>x=0</math> and <math>x=1</math>.<br />
* Experiment with the Peclet number and mesh resolution to determine how stable your numerical scheme is.</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Summer_Modeling_SchoolSummer Modeling School2009-08-04T03:08:37Z<p>Hoffman: </p>
<hr />
<div>[[Image:Portland.jpg|thumb|right|400 px|The summer ice sheet modeling school will be held in Portland Oregon, August 3-14, 2009]]<br />
<br />
==Overview==<br />
The Summer Modeling School will be an intensive Summer School that will bring current and future ice-sheet scientists together to develop better models for the projection of future sea-level rise (slr). The IPCC Fourth Assessment Report [http://www.ipcc.ch/ipccreports/ar4-syr.htm] acknowledged that current models do not adequately treat the dynamic response of ice sheets to climate change, and that this is the largest uncertainty in assessing potential rapid sea-level rise. Recognizing this, an ice-sheet modelling Workshop was held during the July 2008 SCAR/IASC [https://www.comnap.aq/content/events/osc2008] meeting, in St. Petersburg, Russia. This meeting developed a community strategy on how best to (i) improve the physical understanding of ice-sheet processes responsible for rapid change; (ii) incorporate improved physical understanding into numerical models; (iii) assimilate appropriate data into the models for calibration and validation; and (iv) develop prognostic whole ice-sheet models that better incorporate non-linear ice-sheet response to environmental forcing (such as change in surface mass balance, loss of buttressing from floating ice shelves and ice tongues, and rising sea level). <br />
<br />
The two-week Summer School is a first step towards implementing this strategy. It will bring scientists from differing backgrounds together and allow more extensive and in-depth interactions between the relevant scientific research communities. A series of general background lectures as well as discussions of more specialized and advanced topics during this Summer School will provide the foundation for cross-disciplinary research, particularly for early career scientists. We anticipate publication of lecture notes both in hard copy and on a dedicated home page, to provide the glaciological community with an up-to-date overview of the science and observational techniques that will serve to guide further research efforts. Direct beneficiaries will be young researchers; indirect beneficiaries will be coastal zone communities who will gain improved sea level change forecasts to underpin their plans for sustainable development.<br />
<br />
===Venue===<br />
The modeling school will be held on the campus of [[Wikipedia:Portland State University|Portland State University]] in [[Wikipedia:Portland, Oregon|Portland, Oregon]] August 3-14, 2009.<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=Portland+Airport&daddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&geocode=&hl=en&mra=ls&dirflg=r&date=07%2F28%2F09&time=8:59am&ttype=dep&noexp=0&noal=0&sort=&tline=&sll=45.54878,-122.629155&sspn=0.092445,0.144367&ie=UTF8&ll=45.548679,-122.619438&spn=0.092445,0.144367&z=13&start=0 Map] from airport to [http://cegs.pdx.edu/stay/upl/ University Place Hotel] using public transport (note that the directions in your travel letter are better than the Google generated instructions here).<br />
<br />
* [http://maps.google.com/maps?f=d&source=s_d&saddr=310+SW+Lincoln+St,+Portland,+OR+97201-5007+(University+Place-Portland)&daddr=1721+SW+Broadway,+Portland,+OR+97201+(Cramer+Hall)&hl=en&geocode=FdVhtgIdZwqw-CHO0mMQPCwi0Q%3BFRN3tgIdvP2v-CHxCBg32xEzXA&mra=ls&dirflg=w&sll=45.51029,-122.681675&sspn=0.005782,0.009023&ie=UTF8&ll=45.510091,-122.68232&spn=0.005782,0.009023&z=17 Map] from [http://cegs.pdx.edu/stay/upl/ University Place Hotel] to [http://www.pdx.edu/campus-map Cramer Hall].<br />
<br />
=== Student Participants ===<br />
<br />
*[[Student Presentation]]<br />
<br />
===Lectures and Planned Activities===<br />
<br />
For information about editing this page, see [[Wikipedia:How to edit]].<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
|-valign="top" style="background:RoyalBlue"<br />
!width="20%"|Dates<br />
!width="25%"|Lecture Topics<br />
!width="15%"|Lecturers<br />
!width="25%"|Laboratory Topics<br />
!width="15%"|Laboratory Instructors <br />
|-valign="top" style="background:AliceBlue"<br />
| [[4-5 August]]<br />
| Introduction to and theoretical basis for ice sheet modeling. [[Basal Conditions]]. [[Modelling mountain glaciers]].<br />
| Kees van der Veen, [[Nina Kirchner]], Alan Rempel, and Brian Anderson<br />
| [[Finite differencing|Finite differencing]] and [[Pragmatic Programming|pragmatic programming]] using Fortran[http://en.wikipedia.org/wiki/Fortran] 95...<br />
computing divergence and gradient...<br />
from conservation equation to matrix algebra...<br />
rheology and that which makes ice ice...<br />
simple, ideal models...<br />
that which makes ice-sheet modeling hard...<br />
| Gethin Williams, Ian Rutt, [[Jesse Johnson]]<br />
|-valign="top" style="background:PowderBlue"<br />
| 6 August<br />
| The world of [[ice shelves]] and 'distributed stress-field solutions'<br />
| Todd Dupont and Olga Sergienko<br />
| Linear Algebra of ice-sheet modeling, relaxation methods, finite-element methodology, solution of Laplace equation in arbitrary domain, creation of an ice-shelf flow-field model (snap shot of flow field), Models of the Ross Ice Shelf<br />
| Olga Sergienko and Todd Dupont<br />
|-valign="top" style="background:AliceBlue"<br />
| 7 August<br />
| [[Data sets for ice sheet modeling]]<br />
| Slawek Tulaczyk and Ken Jezek<br />
| COMSOL Multiphysics<br />
| Olga Sergienko and Jesse Johnson<br />
|-valign="top" style="background:PowderBlue"<br />
| 8 August<br />
| [[Student Presentation]]<br />
| Modeling School Students<br />
| open work day with breakfast & presentations in the morning<br />
| go to the farmer's market<br />
|-valign="top" style="background:AliceBlue"<br />
| 9 August<br />
| Free day; possible PDX tour<br />
|<br />
|<br />
|<br />
|-valign="top" style="background:PowderBlue"<br />
| 10 August<br />
| Excursion to Mt. Hood and [[Eliot Glacier field trip]]<br />
| Guided by [http://web.pdx.edu/~basagic/ Hassan Basagic]<br />
|<br />
|<br />
|-valign="top" style="background:AliceBlue"<br />
| 11 August<br />
| [[Quantifying model uncertainty]]<br />
| Charles Jackson and Patrick Heimbach<br />
| Uncertainty lab, [[Dynamic response to the enhanced basal flow in the Greenland ice sheet]], presented by Weli Wang<br />
| Charles Jackson, Patrick Heimbach, and Weli Wang<br />
|-valign="top" style="background:PowderBlue"<br />
| [[12-13 August]]<br />
| Introduction to Glimmer-CISM ([[Introduction to Glimmer I|Part I]], [[Introduction to Glimmer II|Part II]] and [[Glimmer-CISM|Part III]]); [[Higher order velocity schemes|Higher-order models]]<br />
| Ian Rutt, [[Magnus Hagdorn]], [[Stephen Price]], Bill Lipscomb, [[Jesse Johnson]]<br />
| Software development and [[Adding a module to Glimmer I|creating a module for Glimmer]], [[representing and manipulating data]]. [[Grounding line treatments]], presented by Sophie Nowicki. [[Verifying ice sheet models]], presented by Aitbala Sargent<br />
| Ian Rutt, [[Magnus Hagdorn]], Gethin Williams, Stephen Price, Bill Lipscomb, [[Jesse Johnson]]<br />
|-valign="top" style="background:AliceBlue"<br />
| 14 August<br />
| [[Coupling the Cryosphere to other Earth systems]]<br />
| Bill Lipscomb and Ian Rutt<br />
| Community Climate System Model (CCSM) Lab<br />
| Bill Lipscomb, [[Jesse Johnson]], Stephen Price and Ian Rutt<br />
|}<br />
<br />
====[[Typical Daily Schedule]]====<br />
<br />
===Resources===<br />
<br />
Additional student/instructor resources for the Summer School:<br />
* List of [[Computing Resources and Room Description]]<br />
* Details of [[Eliot Glacier field trip]]<br />
* An outline [[Reading List]]<br />
<br />
===Application and Registration===<br />
''The window for receipt of student applications has closed. Thank you for your interest in the program. ''<br />
<br />
The registration fee for the course is US $350.<br />
<br />
===Funding Agencies===<br />
<br />
<br />
{|<br />
|-valign="top"<br />
|[[Image:iscu.jpg|300 px]]<br />
|[[Image:scar.jpg|150 px]]<br />
|-valign="top"<br />
|[[Image:wcrp.jpg|200 px]]<br />
|[[Image:nsf_logo.gif|300px]]<br />
|-valign="top"<br />
|[[Image:cresis.jpg|100 px]]<br />
|[[Image:cires.jpg|350 px]]<br />
|-valign="top"<br />
|[[Image:IASC_logo_07_RGB.jpg|100 px]]<br />
|}<br />
<br />
===Organizing Committee===<br />
Christina Hulbe, Jesse Johnson, Cornelis van der Veen</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Typical_Daily_ScheduleTypical Daily Schedule2009-07-31T19:25:45Z<p>Hoffman: </p>
<hr />
<div>==A typical day==<br />
<br />
{| border="1" cellpadding="5" cellspacing="0"<br />
|-valign="top" style="background:RoyalBlue"<br />
!width="15%"|Time<br />
!width="15%"|What's happening?<br />
!width="15%"|Location <br />
<br />
|-valign="top" style="background:AliceBlue"<br />
| 7:30-8:00<br />
| Breakfast<br />
| Cramer Hall 17<br />
|-valign="top" style="background:PowderBlue"<br />
| 8:00-10:00<br />
| Morning Lectures I<br />
| Cramer Hall, Room 1<br />
|-valign="top" style="background:AliceBlue"<br />
| 10:00-10:30<br />
| Coffee Break<br />
| Cramer Hall outside Room 1<br />
|-valign="top" style="background:PowderBlue"<br />
| 10:30-12:00<br />
| Morning Lectures II<br />
| Cramer Hall 1<br />
|-valign="top" style="background:AliceBlue"<br />
| 12:00-13:30<br />
| Lunch<br />
| <br />
|-valign="top" style="background:PowderBlue"<br />
| 13:30-15:30<br />
| Afternoon Activities I<br />
| Cramer Hall 1<br />
|-valign="top" style="background:AliceBlue"<br />
| 15:30-16:00<br />
| Coffee Break<br />
| Cramer Hall outside Room 1<br />
|-valign="top" style="background:PowderBlue"<br />
| 16:00-17:30<br />
| Afternoon Activities II<br />
| Cramer Hall 1<br />
|}<br />
<br />
==Special Events==<br />
<br />
* Please join us at 6 pm August 3rd for a welcome dinner at Hot Lips Pizza. The address is 1909 SW 6th Avenue, between College and Hall Streets, just a few blocks from University Place. <br />
* Breakfast will be provided in the Geology Department lounge every class meeting day.<br />
* Morning and afternoon coffee breaks will be provided each class day. If you want coffee before this, we recommend Cafe Ono on 5th between Hall and Harrison (where you can find Stumptown Coffee).<br />
* Catered lunches will be provided on Tuesday the 4th and Tuesday the 11th.<br />
* Saturday will be a working day, with the computer room open.<br />
* Sunday will be a day off with a PDX tour if there is interest.<br />
* Monday will be an excursion to Mt. Hood, Eliot Glacier.<br />
* There will be a final group dinner on Friday the 14th at Nancy's home (along the MAX line to the airport).</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/User:HoffmanUser:Hoffman2009-07-31T18:42:51Z<p>Hoffman: </p>
<hr />
<div>Matt Hoffman<br />
<br />
Ph.D. Student<br />
Department of Geology<br />
Portland State University<br />
<br />
http://web.pdx.edu/~hoffman/</div>Hoffmanhttp://websrv.cs.umt.edu/isis/index.php/Student_BiosStudent Bios2009-07-31T18:42:08Z<p>Hoffman: </p>
<hr />
<div>*[[User:Mankoff|Ken Mankoff]] will begin his PhD. this fall at UCSC and as such does not have a very well defined research topic. He will likely work on projects involving subglacial lakes and grounding lines. He is currently analyzing data from the terminal face of Pine Island Glacier, and oceanographic and sea ice data from the larger Amundsen Sea area.<br />
<br />
*[http://www.victoria.ac.nz/antarctic/people/jeremy-fyke/index.aspx Jeremy Fyke] is working on a PhD with the Antarctic Research Centre in Wellington, New Zealand. My project involves coupling an ice sheet model to an Earth System model 'of intermediate complexity' (the University of Victoria Earth System Climate Model) in order to have a go at simulating coupled climate/ice sheet interactions over millennial time scales.<br />
<br />
*[http://flo-colleoni.ifrance.com/ Florence Colleoni] will defend her Ph.D. in paleoclimate modeling at [http://www-lgge.obs.ujf-grenoble.fr/ LGGE] (Grenoble, Fr) in early September. She will then start a post-doctorate at the [http://www.cmcc.it/welcome-at-cmccs-web-site?set_language=en Centro Euro-Mediterraneo per i Cambiamenti Climatici] in Bologna (Italy) to couple the CISM Glimmer to the Earth System model composed of the AGCM of NCAR and of the OGCM NEMO. The final aim is to carry out transient paleoclimate simulations to understand and reproduce the interglacial/glacial transition mechanisms. This will be done in collaboration with NCAR. - My entire Ph.D. thesis is available [ftp://ftp-lgge.obs.ujf-grenoble.fr/pub/depot/florence/ here]-<br />
<br />
* [http://homepages.ucalgary.ca/~adhikars/ Surendra Adhikari] is currently in his second year of PhD at the University of Calgary, Canada. He is trying to develop a 3-D higher-order numerical ice-flow model applied to valley glaciers and alpine ice-fields. This HO-model will then be coupled to the traditional SIA-model to simulate the large ice sheets such as Greenland Ice Sheet.<br />
<br />
*[http://bigice.apl.washington.edu/people_poinar.html Kristin Poinar] is a second-year Ph.D. student at the University of Washington who is working on two "learning curve" ice sheet modelling projects. One is writing a thermal model to apply to the Greenland ice sheet, where surface lake drainages make basal thermodynamics interesting; the second is your standard model-perturbations-at-the-terminus study, on Petermann Glacier in NW Greenland.<br />
<br />
*[[User:adamc|Adam Campbell]] is entering a PhD program at the University of Washington in Fall 2009. I have just completed a Masters Degree in Geology at Portland State University where I examined the physics of the reaction of Crane Glacier to the disintegration of the Larsen B Ice Shelf using a steady state 2-D flow model with a basal sliding law. I am presently investigating structures on the Kamb Ice Shelf to determine if they were developed by a pinch and swell mechanism. I am also uncomfortable writing about myself in the third person.<br />
<br />
*[[User:papplega|Patrick Applegate]]: I am a glacial geomorphologist and geochronologist with a taste for modeling. My Ph. D. work involves the use of geomorphic process modeling to parse out the real meaning of cosmogenic exposure dates from moraines. I am asymptotically approaching the completion of my Ph. D. at Penn State. I'm attending the Summer School because I anticipate taking a new direction for my research in the near future.<br />
<br />
*[[User:hoffman|Matt Hoffman]] is in his fifth and final (?) year of a PhD at Portland State University. I am developing a spatially-distributed energy balance model for the glaciers of the McMurdo Dry Valleys, Antarctica. The glaciers of the Dry Valleys are near the threshold of melt during summer, such that sublimation and melt are of similar magnitude. I anticipate the Summer School will develop my skills as a modeler and help me think about the relationships between surface mass balance and ice dynamics.</div>Hoffman