
 

 

 

7.4.1 Example: Practical usage of the von Neumann condition

In this example we will demonstrate how the von Neumann method may be applied to the simple scheme (7.16), written in the following form: $$ \begin{equation} E^{n+1}_j=D \, (E^n_{j+1}+E^n_{j-1})+(1-2 D) \, E^n_j \tag{7.46} \end{equation} $$

We have previously (see (7.42)) established how the error \( E^n_j \) is related to the numerical amplification factor: \( E^n_j=G^n\, e^{i\, \beta \, y_j} \), which upon substitution in (7.46) yields: $$ \begin{equation} G^{n+1}e^{i\, \beta y_j}=D\, (G^n\, e^{i\, \beta y_{j+1}}+G^n\, e^{i\, \beta y_{j-1}}) + (1-2D)G^ne^{i\, \beta y_j} \tag{7.47} \end{equation} $$

Note that \( G^n \) means \( G \) raised to the \( n \)-th power, whereas \( E^n_j \) denotes the error at time level \( n \) and the spatial location \( y_j \).

The expression in (7.47) is a nonlinear polynomial of degree \( n+1 \) in \( G \), which may be simplified by dividing through by \( G^n\, e^{i\, \beta y_{j}} \): $$ \begin{equation} G=D \, (e^{i\, \beta h}+e^{-i\, \beta h})+(1-2D)=D \, (e^{i\delta}+e^{-i\delta})+(1-2D) \tag{7.48} \end{equation} $$

where we introduce \( \delta \) as: $$ \begin{equation} \delta = \beta h \tag{7.49} \end{equation} $$

By using terminology for periodic functions, \( \beta \) may be thought of as a wavenumber (angular frequency) and \( \delta \) as a phase angle. (See appendix A.3 in Numeriske Beregninger.)

For further simplification we introduce some standard trigonometric formulas: $$ \begin{equation} \begin{array}{l} 2\cos(x)=e^{ix}+e^{-ix}\\ i\, 2\sin(x)=e^{ix}-e^{-ix}\\ \cos(x)=1-2\sin^2(\frac{x}{2}) \end{array} \tag{7.50} \end{equation} $$

By substituting (7.50) in (7.48) we get the simpler expression: $$ \begin{equation} G=1-2D\, (1-\cos(\delta))=1-4D\, \sin^2\left(\frac{\delta}{2}\right) \tag{7.51} \end{equation} $$
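The simplification from (7.48) to (7.51) is easy to verify symbolically. Below is a minimal sketch, assuming sympy is available, with D and delta representing the quantities defined above:

```python
import sympy as sp

D, delta = sp.symbols('D delta', real=True)

# Amplification factor directly from (7.48)
G_exp = D*(sp.exp(sp.I*delta) + sp.exp(-sp.I*delta)) + (1 - 2*D)

# Simplified form (7.51)
G_trig = 1 - 4*D*sp.sin(delta/2)**2

# The difference should reduce to zero
print(sp.simplify((G_exp - G_trig).rewrite(sp.cos)))   # -> 0
```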

As \( G \) is real, the condition \( |G|\leq 1 \) has the following mathematical interpretation: $$ \begin{equation} -1\leq G\leq 1 \qquad \text{ or } \qquad -1\leq 1-4D\sin^2\left(\frac{\delta}{2}\right)\leq 1 \tag{7.52} \end{equation} $$

The right hand side of (7.52) is always true as \( D \geq 0 \). For the left hand side of (7.52) we have $$ \begin{equation*} D\leq \frac{1}{2\sin^2(\frac{\delta}{2})} \end{equation*} $$

which is true for all \( \delta \) ( \( -\pi \leq \delta \leq \pi \)) when \( D\leq \frac{1}{2} \). A von Neumann condition for stability of (7.16) may then be presented as: $$ \begin{equation} 0 < D \leq \frac{1}{2} \tag{7.53} \end{equation} $$

where \( D=\frac{\Delta t}{(\Delta y)^2} \) from (7.17).
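The condition is easy to check numerically by evaluating (7.51) over the whole range of phase angles. A small sketch, with D-values chosen for illustration only:

```python
import numpy as np

delta = np.linspace(-np.pi, np.pi, 1001)      # phase angles in [-pi, pi]
for D in (0.25, 0.5, 0.51):                   # illustrative D-values
    G = 1 - 4*D*np.sin(delta/2)**2            # amplification factor from (7.51)
    print(f"D = {D:4.2f}: max |G| = {np.abs(G).max():.3f}")
# D <= 1/2 gives max |G| = 1, whereas D = 0.51 gives max |G| > 1
```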

As (7.16) is a two-level scheme with constant coefficients, the condition in (7.53) is both sufficient and necessary for numerical stability. This condition agrees well with the previous condition in (7.21), which is only sufficient.
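The necessity of (7.53) can also be observed directly by advancing the scheme (7.46) in time for D-values on either side of the limit. The sketch below uses boundary and initial values chosen purely for illustration (not the problem in (7.13)):

```python
import numpy as np

def ftcs(D, N=50, nsteps=1000):
    """Advance the FTCS scheme nsteps times on a uniform grid with N+1 points.

    Illustrative setup: u = 1 at the left boundary, u = 0 at the right
    boundary and u = 0 initially.
    """
    u = np.zeros(N + 1)
    u[0] = 1.0
    for _ in range(nsteps):
        u[1:-1] = D*(u[2:] + u[:-2]) + (1 - 2*D)*u[1:-1]   # the scheme (7.46)
    return u

for D in (0.5, 0.51):                         # at and just above the stability limit
    print(f"D = {D:4.2f}: max |u| = {np.abs(ftcs(D)).max():.2e}")
# D = 0.50 stays bounded, while D = 0.51 develops growing oscillations
```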

The stability condition imposes a severe limitation on the size of the timestep \( \Delta t \), which of course influences the CPU-time.

Since (7.53) requires \( \Delta t \propto h^2 \), the number of timesteps grows as \( 1/h^2 \) while the number of spatial grid points grows as \( 1/h \). A rough estimate of the CPU-time for the FTCS-scheme with a constant \( D \)-value therefore yields: $$ \begin{equation*} \frac{T_2}{T_1}\approx \left(\frac{h_1}{h_2}\right)^3 \end{equation*} $$

where \( T_1 \) and \( T_2 \) are the CPU-times for \( \Delta y=h_1 \) and \( \Delta y=h_2 \), respectively. For example, reducing the spatial step from \( h_1=0.1 \) by a factor of 10 to \( h_2=0.01 \) gives an increase in CPU-time by a factor \( \frac{T_2}{T_1}=1000 \).
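In code the rough estimate reads as follows; cpu_ratio is a hypothetical helper introduced only to illustrate the scaling:

```python
def cpu_ratio(h1, h2):
    """Rough FTCS work ratio: timesteps ~ 1/h^2 (since dt ~ h^2) times ~ 1/h grid points."""
    return (h1/h2)**3

print(cpu_ratio(0.1, 0.01))   # -> approximately 1000
```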

7.4.1.1 Differentiation to find stability conditions

An alternative method to find the extremal values (i.e. max/min) of \( G \) as a function of \( \delta \) is to compute \( \frac{dG}{d\delta} \) and set \( \frac{dG}{d\delta}=0 \). A stability condition may then be found from the criterion \( |G| \leq 1 \). For the current example we have from (7.51): $$ \begin{equation*} G=1-2D\, (1-\cos(\delta)) \qquad \Rightarrow \qquad \frac{dG}{d\delta}=-2D\, \sin(\delta) \end{equation*} $$

which has extremal values for \( \delta=0 \) and \( \delta=\pm \pi \). Here \( \delta = 0 \) gives \( G=1 \), whereas \( \delta=\pm \pi \) yields \( G=1-4D \). Finally, the condition \( |G|\leq 1 \) yields: $$ \begin{equation*} -1\leq 1-4D\leq 1 \end{equation*} $$

The right hand side of the inequality will always be satisfied for positive \( D \), whereas the left hand side yields \( D\leq \frac{1}{2} \) as before.
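The differentiation argument may also be carried out symbolically, e.g. with sympy; a minimal sketch:

```python
import sympy as sp

D, delta = sp.symbols('D delta', positive=True)
G = 1 - 2*D*(1 - sp.cos(delta))

print(sp.diff(G, delta))          # -> -2*D*sin(delta), zero for delta = 0, +/- pi
print(G.subs(delta, 0))           # -> 1
print(G.subs(delta, sp.pi))       # -> 1 - 4*D
```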

In many situations we will find that \( \delta = \pm\pi \) are critical values, and it may therefore be wise to assess these values first, but remember that this might not be sufficient to prove stability. On the other hand, these values may be sufficient to prove instability, as the condition \( |G|\leq 1 \) must be satisfied for all \( \delta \)-values in the range \( [-\pi,\ \pi] \).

7.4.1.2 Comparison of the amplification factors

In this section we will compare the numerical amplification factor \( G \) in (7.51) with the analytical amplification factor \( G_a \) in (7.39).

By making use of (7.41) and (7.49) we may present the analytical amplification factor (7.39) as: $$ \begin{equation} \tag{7.54} G_a=\exp(-\delta^2\, D) \end{equation} $$

And from (7.51) we have an expression for the numerical amplification factor: $$ \begin{equation} G=1-2D\, (1-\cos(\delta))=1-4D\, \sin^2\left(\frac{\delta}{2}\right) \tag{7.55} \end{equation} $$
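A sketch of how the two amplification factors may be compared graphically, assuming numpy and matplotlib are available and with D-values chosen for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

delta = np.linspace(0, np.pi, 200)
for D in (0.25, 0.5):                          # illustrative D-values
    G  = 1 - 4*D*np.sin(delta/2)**2            # numerical amplification factor (7.55)
    Ga = np.exp(-delta**2*D)                   # analytical amplification factor (7.54)
    plt.plot(np.degrees(delta), G,  ':', label=f'$G$,   D = {D}')
    plt.plot(np.degrees(delta), Ga, '-', label=f'$G_a$, D = {D}')

plt.xlabel(r'$\delta$ [degrees]')
plt.ylabel('amplification factor')
plt.legend()
plt.show()
```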

In figure 91 we plot \( G \) and \( G_a \) as functions of \( \delta \in [0,\ \pi] \) for selected values of \( D \). For small values of \( \delta \) the differences between \( G \) and \( G_a \) are small, with \( G \) slightly larger than \( G_a \), and the differences grow progressively with increasing \( \delta \).

Figure 91: The amplification factors \( G \) (dotted) and \( G_a \) (solid) as a function of \( \delta \) for specific values of \( D \).

Further, we observe large errors for \( \delta \in [90^\circ,\ 180^\circ] \) and that the amplification factor \( G \) even has the wrong sign, which will lead to unphysical oscillations in the numerical solution.

The reason why the solution may still be usable is that the analytical solution has an amplitude \( G_a \) which diminishes strongly with increasing frequency (see the analytical solution in (7.13)). Such a smoothing effect is typical for parabolic PDEs. Yet the effect is noticeable, since the current example has a discontinuity at \( y=0 \), such that the solution contains many high-frequency components.

Errors in the amplitude are commonly quantified by \( \varepsilon_D =\left|\frac{G}{G_a}\right| \), denoted the diffusion error or dissipation error. No diffusion error corresponds to \( \varepsilon_D=1 \). The term diffusive scheme normally refers to a numerical scheme whose amplitude decreases with increasing \( t \). For our FTCS-scheme applied to the diffusion equation we have: $$ \begin{equation} \varepsilon_D=\left|1-4D\sin^2(\delta/2)\right|\, \exp(\delta^2\, D) \tag{7.56} \end{equation} $$
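A quick numerical illustration of (7.56), with an illustrative D-value:

```python
import numpy as np

D = 0.25                                       # illustrative value
for deg in (10, 45, 90, 135):
    delta = np.radians(deg)
    eps_D = abs(1 - 4*D*np.sin(delta/2)**2)*np.exp(delta**2*D)
    print(f"delta = {deg:3d} deg: eps_D = {eps_D:.3f}")
# eps_D stays close to 1 for small phase angles and drops off for large ones
```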

The expression in (7.56) may be simplified by a Taylor-expansion: $$ \begin{equation*} \varepsilon_D=1-D^2\delta^4/2+D\delta^4/12+O(\delta^6) \end{equation*} $$

which confirms that the dissipation error is small for low frequencies when \( D\leq1/2 \).
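The Taylor-expansion may be verified symbolically, e.g. with sympy; here the absolute value in (7.56) is dropped, which is valid for small \( \delta \):

```python
import sympy as sp

D, delta = sp.symbols('D delta', positive=True)
eps_D = (1 - 4*D*sp.sin(delta/2)**2)*sp.exp(delta**2*D)

# Series expansion around delta = 0, confirming
# eps_D = 1 + (D/12 - D**2/2)*delta**4 + O(delta**6)
print(sp.series(eps_D, delta, 0, 6))
```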