Finite differencing: Introduction
(There is a PDF of the slides relating to this section available here, but this page contains all the material and more...)
Finite differences work by dividing a continuous variable up into a succession of points, or nodes. For simplicity, we'll think about equally-spaced nodes in two spatial dimensions, and evolving in time. We can define a set of nodes in time and space:

$$x_i = i\,\Delta x, \qquad y_j = j\,\Delta y, \qquad t^n = n\,\Delta t$$

where $i$, $j$ and $n$ are the indices of the nodes, $\Delta x$ and $\Delta y$ are the distances between the nodes (the grid-spacing), and $\Delta t$ is the timestep. A continuous function $f(x, y, t)$ can be mapped onto the grid as follows:

$$f(x, y, t) \rightarrow f(x_i, y_j, t^n)$$

For convenience, the discrete function can be noted using subscripts for the spatial indices, and a superscript for time:

$$f^n_{i,j} \equiv f(x_i, y_j, t^n)$$
The superscript may look like raising to a power, which seems like a potential source of confusion, but the equations we're going to be solving don't generally involve higher powers of the quantities being differentiated. In practice, this compact notation is very handy!
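As a quick illustrative sketch (the function and grid sizes here are made up for the example), the mapping of a continuous function onto such a grid might look like this in NumPy, with array axes playing the role of the $n$, $j$, $i$ indices:

```python
import numpy as np

# Hypothetical example: map f(x, y, t) = sin(x) * cos(y) * exp(-t)
# onto an equally-spaced grid of nodes.
dx, dy, dt = 0.1, 0.1, 0.01          # grid spacings and timestep
nx, ny, nt = 5, 4, 3                 # number of nodes in x, y, t

x = np.arange(nx) * dx               # x_i = i * dx
y = np.arange(ny) * dy               # y_j = j * dy
t = np.arange(nt) * dt               # t^n = n * dt

# f[n, j, i] plays the role of f^n_{i,j} in the compact notation
f = (np.sin(x)[None, None, :]
     * np.cos(y)[None, :, None]
     * np.exp(-t)[:, None, None])

print(f.shape)      # (3, 4, 5): one axis per index n, j, i
print(f[0, 0, 0])   # f at i = j = n = 0, i.e. sin(0)*cos(0)*exp(0) = 0.0
```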
My first finite difference expression
The nice thing about finite differences is that they're very intuitive. Consider a first-order spatial derivative:

$$\frac{\partial f}{\partial x}$$

If we had a plot of $f(x)$ in front of us, and we wanted to calculate a rough-and-ready value of this derivative, we might read off the values of $f$ at two nearby points (let's call them $x_1$ and $x_2$), and calculate the gradient from that:

$$\frac{\partial f}{\partial x} \approx \frac{f(x_2) - f(x_1)}{x_2 - x_1}$$

Of course, these two points could be neighbouring points on the grid we defined earlier, and the expression written using our compact notation:

$$\frac{\partial f}{\partial x} \approx \frac{f_{i+1} - f_i}{\Delta x}$$
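As a quick sketch of this rough-and-ready approach (the choice of $f = \sin x$, the spacing, and the node index are arbitrary here), we can sample a function on a grid and take the difference of two neighbouring values:

```python
import numpy as np

# Sample f(x) = sin(x) on an equally-spaced grid and estimate the
# derivative at node i from the difference (f_{i+1} - f_i) / dx.
dx = 0.01
x = np.arange(0.0, 1.0, dx)
f = np.sin(x)

i = 50                                # a node away from the edges
dfdx = (f[i + 1] - f[i]) / dx         # finite difference estimate
exact = np.cos(x[i])                  # true derivative of sin(x)

print(abs(dfdx - exact))              # small, but not zero
```

The estimate is close to the true derivative, but not exact: quantifying that gap is precisely the question addressed below.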
So, there we have it: our first finite difference expression. But wait — there are some unanswered questions:
- We've calculated a derivative, but where is it valid? Is this the slope at point $x_i$ or point $x_{i+1}$? Or somewhere in between?
- We know this is an approximation to the real value of the derivative, but how much of one? Can we quantify its accuracy?
To answer these questions, we need to look at finite differences in a bit more depth, using Taylor's Theorem.
Deriving Finite Differences
Taylor's Theorem allows the approximation of a differentiable function at a given point by a polynomial whose coefficients depend on the function's derivatives at that point. For a differentiable function $f(x)$, Taylor's Theorem is:

$$f(x) = f(x_0) + (x - x_0)\left.\frac{df}{dx}\right|_{x_0} + \frac{(x - x_0)^2}{2!}\left.\frac{d^2 f}{dx^2}\right|_{x_0} + \cdots + R_n$$

Here, $x_0$ is the point where the derivatives are evaluated, and $x$ is the point of interest. $R_n$ is the truncation error, since the series given here is of finite length. For most finite difference applications, second-order accuracy is sufficient. We can rewrite the series in terms of our grid (taking $x_0 = x_i$ and $x = x_{i+1}$, so that $x - x_0 = \Delta x$), truncating the expression at the second-order term:

$$f_{i+1} = f_i + \Delta x \left.\frac{\partial f}{\partial x}\right|_i + \frac{\Delta x^2}{2}\left.\frac{\partial^2 f}{\partial x^2}\right|_i + O(\Delta x^3)$$

The truncation error has been rewritten to show that it is a third-order term. To get a second-order discretisation of our spatial derivative, we also need the Taylor Series for $f_{i-1}$:

$$f_{i-1} = f_i - \Delta x \left.\frac{\partial f}{\partial x}\right|_i + \frac{\Delta x^2}{2}\left.\frac{\partial^2 f}{\partial x^2}\right|_i + O(\Delta x^3)$$

By subtracting the second equation from the first, we arrive at

$$f_{i+1} - f_{i-1} = 2\,\Delta x \left.\frac{\partial f}{\partial x}\right|_i + O(\Delta x^3)$$

which can finally be rearranged to give

$$\left.\frac{\partial f}{\partial x}\right|_i = \frac{f_{i+1} - f_{i-1}}{2\,\Delta x} + O(\Delta x^2)$$
So, for this finite difference expression, we have answered both questions posed above using Taylor's Theorem: we know where the approximation is valid (at $x_i$) and how accurate it is (second-order accurate, with 'error' terms of order $\Delta x^2$).
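The claim of second-order accuracy can be checked numerically: halving $\Delta x$ should reduce the error by roughly a factor of four. A small sketch (with $f = \sin x$ and the test point chosen arbitrarily):

```python
import numpy as np

def centred_error(dx, x0=1.0):
    """Error of the centred difference (f_{i+1} - f_{i-1}) / (2 dx)
    for f = sin, evaluated at x0."""
    approx = (np.sin(x0 + dx) - np.sin(x0 - dx)) / (2 * dx)
    return abs(approx - np.cos(x0))

# Halving dx reduces the error by roughly a factor of 4,
# confirming the O(dx^2) truncation error.
e1 = centred_error(0.1)
e2 = centred_error(0.05)
print(e1 / e2)   # close to 4.0
```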
Stencils, and more derivatives
The expression we've just derived is a centred-difference expression, as the points it relies upon ($f_{i+1}$ and $f_{i-1}$) lie either side of the point where we're evaluating the derivative ($x_i$). This set of points is known as the expression's stencil. It's important to realise that many different stencils can be used to calculate a given derivative, and each entails manipulating Taylor Series expressions in a different way. With this in mind, we return to the original finite difference expression we devised:

$$\frac{f_{i+1} - f_i}{\Delta x}$$

As we noted above, there's no indication in the expression as to where the derivative is valid. It turns out that this is important with regard to its accuracy. First, let's consider this as an off-centred expression, evaluated at $x_i$:

$$\left.\frac{\partial f}{\partial x}\right|_i \approx \frac{f_{i+1} - f_i}{\Delta x}$$

We can easily rearrange this into the form of a Taylor Series:

$$f_{i+1} = f_i + \Delta x \left.\frac{\partial f}{\partial x}\right|_i + O(\Delta x^2)$$

By comparing with the Taylor Series expressions given earlier, it becomes clear that this has one fewer term: the truncation error is second-order, and so the expression is of first-order accuracy:

$$\left.\frac{\partial f}{\partial x}\right|_i = \frac{f_{i+1} - f_i}{\Delta x} + O(\Delta x)$$

However, we can also think of the derivative as being evaluated halfway between $x_i$ and $x_{i+1}$:

$$\left.\frac{\partial f}{\partial x}\right|_{i+1/2} = \frac{f_{i+1} - f_i}{\Delta x} + O(\Delta x^2)$$

Of course, this is now a centred difference, and can be analysed in a similar manner to the centred difference expression given previously. For similar reasons, it is also second-order accurate.
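This distinction is easy to demonstrate numerically: the same difference quotient converges at first order when compared against the derivative at $x_i$, but at second order when compared against the derivative at $x_{i+1/2}$. A sketch (again using $f = \sin x$ at an arbitrary point):

```python
import numpy as np

def forward_error_at_i(dx, x=1.0):
    # Same expression, interpreted at x_i: first-order accurate
    return abs((np.sin(x + dx) - np.sin(x)) / dx - np.cos(x))

def forward_error_at_half(dx, x=1.0):
    # Same expression, interpreted at x_{i+1/2}: second-order accurate
    return abs((np.sin(x + dx) - np.sin(x)) / dx - np.cos(x + dx / 2))

# Halving dx: the error halves in the first case, but quarters
# in the second.
print(forward_error_at_i(0.1) / forward_error_at_i(0.05))        # roughly 2
print(forward_error_at_half(0.1) / forward_error_at_half(0.05))  # roughly 4
```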
Second derivatives and their associated error terms are also computed from Taylor Series, this time considering sums of series:

$$f_{i\pm 1} = f_i \pm \Delta x \left.\frac{\partial f}{\partial x}\right|_i + \frac{\Delta x^2}{2}\left.\frac{\partial^2 f}{\partial x^2}\right|_i \pm \frac{\Delta x^3}{6}\left.\frac{\partial^3 f}{\partial x^3}\right|_i + O(\Delta x^4)$$

Adding the series and solving for $\left.\frac{\partial^2 f}{\partial x^2}\right|_i$ gives

$$\left.\frac{\partial^2 f}{\partial x^2}\right|_i = \frac{f_{i+1} - 2 f_i + f_{i-1}}{\Delta x^2} + O(\Delta x^2)$$
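A short numerical sketch of the centred second-derivative expression (using $f = \sin x$, for which the exact second derivative is $-\sin x$):

```python
import numpy as np

# Centred second derivative (f_{i+1} - 2 f_i + f_{i-1}) / dx^2,
# checked against the exact second derivative of sin(x) at x = 1.
dx = 0.01
x = 1.0
approx = (np.sin(x + dx) - 2 * np.sin(x) + np.sin(x - dx)) / dx**2
exact = -np.sin(x)
print(abs(approx - exact))   # O(dx^2): around 1e-5 for dx = 0.01
```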
A comprehensive list of these formulae, and an interesting discussion, is found at .
So, having determined the stencil and the level of accuracy we need, we can use Taylor's Theorem to construct suitable finite difference expressions. This is especially important in situations where we might require a special stencil, such as at the edge of the model domain: more on this later. We can also construct finite-difference expressions for grids with non-constant spacing (i.e. where $\Delta x$ varies).
Summary of second-order-accurate FD expressions
First derivatives, second-order accurate, equal spacing

$$\left.\frac{\partial f}{\partial x}\right|_i = \frac{f_{i+1} - f_{i-1}}{2\,\Delta x} + O(\Delta x^2), \qquad \left.\frac{\partial f}{\partial x}\right|_{i+1/2} = \frac{f_{i+1} - f_i}{\Delta x} + O(\Delta x^2)$$

Second derivatives, second-order accurate, equal spacing

$$\left.\frac{\partial^2 f}{\partial x^2}\right|_i = \frac{f_{i+1} - 2 f_i + f_{i-1}}{\Delta x^2} + O(\Delta x^2)$$
First derivatives, ?-order accurate, unequal spacing

Writing $h_- = x_i - x_{i-1}$ and $h_+ = x_{i+1} - x_i$:

$$\left.\frac{\partial f}{\partial x}\right|_i \approx \frac{h_-^2 f_{i+1} - h_+^2 f_{i-1} + (h_+^2 - h_-^2) f_i}{h_+ h_- (h_+ + h_-)}$$

Second derivatives, ?-order accurate, unequal spacing

$$\left.\frac{\partial^2 f}{\partial x^2}\right|_i \approx \frac{2\left[h_- f_{i+1} - (h_+ + h_-) f_i + h_+ f_{i-1}\right]}{h_+ h_- (h_+ + h_-)}$$
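As a sketch of the unequal-spacing case, a three-point first-derivative formula can be obtained by the same Taylor-series approach, choosing weights so that the second-derivative terms cancel. The function, point, and spacings below are arbitrary test values:

```python
import numpy as np

def first_deriv_nonuniform(fm, f0, fp, hm, hp):
    """First derivative at the middle node of an unequally spaced
    three-point stencil, where hm = x_i - x_{i-1} and
    hp = x_{i+1} - x_i. The weights are chosen so that the
    second-derivative terms in the Taylor expansions cancel."""
    return (hm**2 * fp - hp**2 * fm + (hp**2 - hm**2) * f0) / (
        hp * hm * (hp + hm))

# Check against f(x) = sin(x) at x = 1 with uneven neighbours.
hm, hp = 0.01, 0.02
approx = first_deriv_nonuniform(np.sin(1 - hm), np.sin(1.0),
                                np.sin(1 + hp), hm, hp)
print(abs(approx - np.cos(1.0)))   # small truncation error
```

Note that with $h_- = h_+ = \Delta x$ the formula reduces to the familiar centred difference $(f_{i+1} - f_{i-1})/(2\,\Delta x)$.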
Classification of PDEs

PDEs are typically classified by their characteristics. This scheme is so common that I think it needs to be mentioned. Assuming $u_{xt} = u_{tx}$, the general second-order PDE in two independent variables has the form

$$A u_{xx} + 2B u_{xt} + C u_{tt} + D u_x + E u_t + F u + G = 0$$

where the coefficients $A$, $B$, $C$ etc. may depend upon $x$ and $t$. This form is analogous to the equation for a conic section:

$$A x^2 + 2B x t + C t^2 + D x + E t + F = 0$$

Just as one classifies conic sections into parabolic, hyperbolic, and elliptic based on the discriminant $B^2 - AC$, the same can be done for a second-order PDE at a given point.

- $B^2 - AC < 0$: solutions of elliptic partial differential equations are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs.
- $B^2 - AC = 0$: equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases.
- $B^2 - AC > 0$: hyperbolic partial differential equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs.
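The discriminant test is mechanical enough to express as a tiny function. A sketch (the helper name and the example equations are chosen for illustration):

```python
def classify_pde(A, B, C):
    """Classify a second-order PDE  A u_xx + 2B u_xt + C u_tt + ... = 0
    at a point, using the discriminant B^2 - AC, exactly as for
    conic sections."""
    disc = B**2 - A * C
    if disc < 0:
        return "elliptic"
    elif disc == 0:
        return "parabolic"
    else:
        return "hyperbolic"

print(classify_pde(1, 0, 1))    # Laplace-like (u_xx + u_tt = 0): elliptic
print(classify_pde(1, 0, 0))    # heat equation (u_xx - u_t = 0): parabolic
print(classify_pde(1, 0, -1))   # wave equation (u_xx - u_tt = 0): hyperbolic
```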
- Durran, D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer
Other parts of finite differencing
The Lab Classes on Finite Differencing are split up into the following parts: