7.4 Stability analysis with von Neumann's method

The von Neumann analysis is commonly used to determine stability criteria, as it is generally easy to apply. Unfortunately, it can only provide necessary and sufficient conditions for the numerical stability of linear initial value problems with constant coefficients. Practical problems typically involve variable coefficients, nonlinearities and complicated boundary conditions. For such cases the von Neumann analysis may only be applied locally to linearized equations, and it then provides necessary, but not always sufficient, conditions for stability [13]. Further, since the method is based on Fourier analysis, it is strictly valid only for interior points, i.e. boundary conditions are excluded.

In this section we will show how von Neumann stability analysis may be applied to the parabolic PDE: $$ \begin{equation} \frac{\partial u}{\partial t}=\frac{\partial^2u}{\partial x^2} \tag{7.24} \end{equation} $$

To motivate the rationale behind the von Neumann analysis, we start by revisiting the analytical solution of (7.24) by the method of separation of variables. The aim of the method is to reduce the PDE to two ODEs which have analytical solutions. We assume that the solution \( u(x,t) \) may be constructed as a product of \( f(t) \) and \( g(x) \), which are functions of time and space only, respectively: $$ \begin{equation} u(x,t)=f(t)\, g(x) \tag{7.25} \end{equation} $$

We may now differentiate (7.25) with respect to time and space: $$ \begin{equation} \frac{\partial u }{\partial t}=\frac{d f(t)}{dt}\, g(x),\qquad \frac{\partial^2u}{\partial x^2}=f(t)\, \frac{d^2g(x)}{dx^2} \tag{7.26} \end{equation} $$

which upon substitution in (7.24) results in the following equation: $$ \begin{equation} \frac{df(t)}{dt}\, g(x)=f(t)\, \frac{d^2g(x)}{dx^2} \tag{7.27} \end{equation} $$ or more conveniently: $$ \begin{equation} \frac{df(t)}{dt}\, \frac{1}{f(t)}=\frac{d^2g(x)}{dx^2}\, \frac{1}{g(x)} \tag{7.28} \end{equation} $$

Observe that the left hand side of (7.28) is a function of \( t \) only, whereas the right hand side is a function of \( x \) only. As (7.28) must be satisfied for arbitrary values of \( t \) and \( x \), the only possibility is that both sides equal a constant, which we for convenience denote \( -\beta^2 \). Thus our original PDE (7.24) has been transformed into two ODEs: $$ \begin{align} \frac{df(t)}{dt}\, \frac{1}{f(t)} &= -\beta^2 \to \frac{df(t)}{dt}+\beta^2\, f(t)=0 \tag{7.29}\\ \frac{d^2g(x)}{dx^2}\, \frac{1}{g(x)} & =-\beta^2\to \frac{d^2g(x)}{dx^2}+\beta^2\, g(x)=0 \tag{7.30} \end{align} $$

The first ODE (7.29) is of first order and has the solution (verify by substitution): $$ \begin{equation} f(t)=e^{-\beta^2t} \tag{7.31} \end{equation} $$

whereas the second ODE (7.30) is of second order with solution: $$ \begin{align} g(x)&=A\sin(\beta x)+B\cos(\beta x) \tag{7.32} \end{align} $$

such that, according to (7.25), a particular solution of (7.24) is found as the product of (7.31) and (7.32): $$ \begin{align} u(x,t)=e^{-\beta^2t}\, \left[A\sin(\beta x ) +B\cos(\beta x)\right] \tag{7.33} \end{align} $$
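The solutions (7.31) and (7.32) can be verified by substitution, as suggested above. A minimal sympy sketch (not part of the original text) automates this check and also confirms that the product (7.33) satisfies (7.24):

```python
# Minimal sympy sketch: recover (7.31)-(7.32) with dsolve and verify that the
# product solution (7.33) satisfies the PDE (7.24).
import sympy as sp

x, t = sp.symbols('x t', real=True)
beta = sp.symbols('beta', positive=True)
f, g = sp.Function('f'), sp.Function('g')

# The two ODEs (7.29) and (7.30)
print(sp.dsolve(sp.Eq(f(t).diff(t) + beta**2*f(t), 0), f(t)))     # C1*exp(-beta**2*t)
print(sp.dsolve(sp.Eq(g(x).diff(x, 2) + beta**2*g(x), 0), g(x)))  # C1*sin(beta*x) + C2*cos(beta*x)

# The particular solution (7.33) with arbitrary constants A and B
A, B = sp.symbols('A B')
u = sp.exp(-beta**2*t)*(A*sp.sin(beta*x) + B*sp.cos(beta*x))
print(sp.simplify(u.diff(t) - u.diff(x, 2)))                      # residual of (7.24) -> 0
```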

and since (7.24) is a linear PDE, a sum or superposition of solutions like (7.33) will also be a solution: $$ \begin{equation*} u(x,t)= \sum^{\infty}_{m=0}e^{-\beta^2_mt}\, \left[A_m\sin(\beta_mx)+B_m\cos(\beta_mx) \right] \end{equation*} $$

The coefficients \( A_m,\ B_m \) and \( \beta_m \) may be determined from the initial conditions and the boundary conditions as demonstrated in Appendix G.6 of Numeriske Beregninger.

However, for the present purpose of demonstrating the von Neumann analysis, two particular solutions suffice: $$ \begin{equation} u(x,t)=\left\{\begin{matrix} e^{-\beta^2t}\sin(\beta x)\\ e^{-\beta^2t}\cos(\beta x) \end{matrix}\right. \tag{7.34} \end{equation} $$

which may be presented in a more compact form by making use of Euler's formula: $$ \begin{equation} e^{i\, x}=\cos(x)+i\, \sin(x),\qquad i=\sqrt{-1} \tag{7.35} \end{equation} $$

The more compact form of (7.34) is then: $$ \begin{equation} u(x,t)=e^{-\beta^2t}\, e^{i\, \beta x}=e^{-\beta^2t+i\, \beta x} \tag{7.36} \end{equation} $$
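As a further quick check (not from the text), the compact form (7.36) may be substituted directly into (7.24), e.g. symbolically:

```python
# Short sympy check that the compact complex form (7.36) satisfies (7.24).
import sympy as sp

x, t = sp.symbols('x t', real=True)
beta = sp.symbols('beta', positive=True)

u = sp.exp(-beta**2*t + sp.I*beta*x)          # the compact solution (7.36)
print(sp.simplify(u.diff(t) - u.diff(x, 2)))  # residual of u_t = u_xx -> 0
```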

Note. Both the real and the imaginary part of the complex expression (7.36) satisfy (7.24) and are therefore included in this somewhat general solution. For a particular problem, (7.36) will be multiplied by complex coefficients such that the solution becomes real.

By adopting the common notation: \( x_j=j\, \Delta x,\ j=0,1,2,\dots \) and \( t_n=n\, \Delta t,\ n=0,1,2,\dots \) for (7.36), the solution at location \( x_j \) and time \( t_n \) is: $$ \begin{equation} u(x_j,t_n)=e^{-\beta^2t_n}\, e^{i\, \beta x_j}=e^{-\beta^2\, n\, \Delta t}\, e^{i\beta x_j}=(e^{-\beta^2\, \Delta t})^n\, e^{i\beta x_j} \tag{7.37} \end{equation} $$

and the solution at location \( x_j \) at the next timestep \( n+1 \) is: $$ \begin{equation} u(x_j,t_{n+1})=e^{-\beta^2t_{n+1}}\, e^{i\, \beta x_j}=e^{-\beta^2\, (n+1)\, \Delta t}\, e^{i\beta x_j}=(e^{-\beta^2\, \Delta t})^{n+1}\, e^{i\beta x_j} \tag{7.38} \end{equation} $$

If we divide (7.38) by (7.37), the spatial dependency vanishes and we get a spatially independent expression for how the solution is amplified (or damped): $$ \begin{equation} \tag{7.39} G_a = \frac{u(x_j,t_{n+1})}{u(x_j,t_n)}=e^{-\beta^2\Delta t} \end{equation} $$

For this reason, \( G_a \) is commonly denoted the analytical amplification factor (see section 1.6.1 in Numeriske Beregninger). In our particular case we find that \( G_a < 1 \). Having introduced the amplification factor \( G_a \), we may rewrite (7.37) as: $$ \begin{equation} u(x_j,t_n)=(G_a)^n\, e^{i\, \beta x_j}\equiv G_a^n\, e^{i\, \beta x_j} \tag{7.40} \end{equation} $$
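A tiny numerical illustration, with \( \beta \), \( \Delta t \) and \( \Delta x \) chosen arbitrarily for demonstration, confirms that the ratio of the exact solution at two successive time levels is the same at every \( x_j \) and equals \( e^{-\beta^2\Delta t} \):

```python
# Numerical illustration of the analytical amplification factor (7.39).
# beta, dt and dx are arbitrary demonstration values, not from the text.
import numpy as np

beta, dt, dx = 2.0, 0.01, 0.1
x = np.arange(11)*dx                            # grid points x_j = j*dx

def u_exact(x, t):
    """Particular solution from (7.34): exp(-beta^2 t)*cos(beta x)."""
    return np.exp(-beta**2*t)*np.cos(beta*x)

n = 5                                           # some time level t_n = n*dt
print(u_exact(x, (n + 1)*dt)/u_exact(x, n*dt))  # the same value at every x_j ...
print(np.exp(-beta**2*dt))                      # ... namely G_a = exp(-beta^2*dt)
```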

For the current problem \( G_a < 1 \), and consequently \( G^n_a\to 0 \) for \( n\to\infty \).

As shown previously in (7.16), the following difference equation for the solution of (7.24) may be obtained: $$ \begin{equation} u_j^{n+1}=D \, (u^n_{j+1}+u^n_{j-1})+(1-2 D)u_j^n,\qquad D=\frac{\Delta t}{(\Delta x)^2} \tag{7.41} \end{equation} $$
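The scheme (7.41) is straightforward to implement. The sketch below, with an illustrative sine initial condition and homogeneous Dirichlet boundaries that are not part of the original text, advances the scheme for a stable and an unstable value of \( D \):

```python
# Sketch: advance the explicit scheme (7.41) for a stable and an unstable D.
# The initial and boundary conditions are illustrative choices only; since the
# scheme depends on dt and dx only through D, dt is fixed implicitly by D*dx^2.
import numpy as np

def ftcs(D, nx=21, nt=200):
    """March u_j^{n+1} = D*(u_{j+1}^n + u_{j-1}^n) + (1 - 2D)*u_j^n."""
    x = np.linspace(0.0, 1.0, nx)
    u = np.sin(np.pi*x)                    # illustrative initial condition
    for n in range(nt):
        u[1:-1] = D*(u[2:] + u[:-2]) + (1.0 - 2.0*D)*u[1:-1]
        u[0] = u[-1] = 0.0                 # homogeneous Dirichlet boundaries
    return np.max(np.abs(u))

print(ftcs(D=0.4))   # D <= 1/2: the amplitude decays
print(ftcs(D=0.6))   # D >  1/2: the amplitude grows
```

With \( D=0.6 \) the most oscillatory resolvable mode is amplified at every step, so round-off errors quickly grow into the oscillatory blow-up with alternating signs described below.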

Clearly, the solution of the difference equation (7.41) will deviate from the analytical solution of the differential equation for finite values of \( \Delta x \) and \( \Delta t \). The deviation/error will increase with increasing values of \( \Delta x \) and \( \Delta t \), and we have seen that when \( D > \frac{1}{2} \) the difference equation becomes unstable, with constantly increasing amplitudes and alternating signs. As (7.40) is a solution of the differential equation, we introduce a similar expression for the difference equation (7.41): $$ \begin{equation} \tag{7.42} u^n_j\to E^n_j=G^n\, e^{i\, \beta x_j} \end{equation} $$

where we have introduced the numerical amplification factor \( G \): $$ \begin{equation} G=\frac{E^{n+1}_j}{E^n_j} \tag{7.43} \end{equation} $$

which may be complex and is a function of \( \Delta t \) and \( \beta \). From (7.43) we may relate the error \( E^n_j \) at the \( n \)-th timestep to the initial error \( E^0_j \): $$ \begin{equation} E^n_j=G^n\, E_j^0,\ E^0_j=e^{i\, \beta \, x_j} \tag{7.44} \end{equation} $$
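The numerical amplification factor may also be estimated directly from the scheme: apply one step of (7.41) to the mode \( E^0_j=e^{i\, \beta x_j} \) and form the ratio (7.43). The values of \( \beta \), \( \Delta x \) and \( D \) below are arbitrary demonstration choices, and the last line anticipates the comparison with \( G_a \) discussed next:

```python
# Sketch: estimate G in (7.43) by applying one step of the scheme (7.41) to the
# Fourier mode E^0_j = exp(i*beta*x_j). beta, dx and D are demonstration values.
import numpy as np

beta, dx, D = 2.0, 0.1, 0.4
E0 = np.exp(1j*beta*np.array([-1, 0, 1])*dx)  # E^0_{j-1}, E^0_j, E^0_{j+1} at x_j = 0

E1 = D*(E0[2] + E0[0]) + (1.0 - 2.0*D)*E0[1]  # one step of (7.41) at node j
print(E1/E0[1])                               # G, real up to round-off for this scheme
print(np.exp(-beta**2*D*dx**2))               # G_a from (7.39), with dt = D*dx^2
```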

Given that \( G_a \) is derived from the analytical solution of the differential equation, and that \( G \) is derived from the difference equation approximating the same differential equation, we cannot expect perfect agreement between the two factors. We will discuss this further in section 7.4.1 Example: Practical usage of the von Neumann condition, as illustrated in Figure 91.

From (7.44) we find that if \( E^n_j \) is to remain bounded and not grow exponentially, the following condition must be satisfied, which is denoted:

The strict von Neumann condition. $$ \begin{equation} |G|\leq 1 \tag{7.45} \end{equation} $$

The condition (7.45) is denoted strict as no increase in amplitude is allowed. This condition will be relaxed in section 7.5.4 Generalized von Neumann stability analysis.
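Using the same one-step procedure as in the sketch above, the strict condition (7.45) can be checked numerically for the scheme (7.41) by scanning the resolvable wavenumbers for a few values of \( D \) (the grid spacing and the \( D \)-values are illustrative):

```python
# Sketch: check the strict von Neumann condition (7.45) for the scheme (7.41)
# by scanning wavenumbers beta. The grid spacing and D-values are illustrative.
import numpy as np

dx = 0.1
betas = np.linspace(0.0, np.pi/dx, 201)    # wavenumbers resolvable on the grid

for D in (0.25, 0.5, 0.55):
    # one step of (7.41) applied to exp(i*beta*x_j), divided by the mode itself
    G = D*(np.exp(1j*betas*dx) + np.exp(-1j*betas*dx)) + (1.0 - 2.0*D)
    print(D, np.max(np.abs(G)))            # max|G| <= 1 only when D <= 1/2
```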

Even though we have demonstrated the von Neumann method for a relatively simple diffusion equation, it is applicable to more general equations. The method is simple to apply:

The strict von Neumann method. Some properties of the condition:
  1. The linear difference equation must have constant coefficients. In case of variable coefficients, the condition may be applied locally on linearized difference equations with frozen coefficients. Experience shows that this results in a necessary condition for stability.
  2. The criterion does not consider the effect of boundary conditions, as it is based on periodic initial data. If the impact of boundary conditions is to be investigated, one may use matrix methods for the coefficient matrix of the difference scheme [5]; see the sketch below.
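As a brief illustration of the matrix approach mentioned in point 2, the following sketch (an illustrative setup with homogeneous Dirichlet boundaries, not taken from [5]) assembles the update matrix of the scheme (7.41) and inspects its spectral radius:

```python
# Sketch of a matrix method: assemble the update matrix of the scheme (7.41)
# with homogeneous Dirichlet boundaries (an illustrative choice) and inspect
# its spectral radius.
import numpy as np

def spectral_radius(D, nx=21):
    m = nx - 2                                        # interior nodes only
    A = (np.diag((1.0 - 2.0*D)*np.ones(m))
         + np.diag(D*np.ones(m - 1), 1)
         + np.diag(D*np.ones(m - 1), -1))             # u^{n+1} = A u^n
    return np.max(np.abs(np.linalg.eigvals(A)))

for D in (0.4, 0.5, 0.6):
    print(D, spectral_radius(D))   # below 1 for D <= 1/2, above 1 for D = 0.6
```

For these illustrative parameters the spectral radius stays below one for \( D\leq\frac{1}{2} \) and exceeds one for \( D>\frac{1}{2} \), consistent with the von Neumann result for the interior points.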