$$\newcommand{\D}{\displaystyle} \renewcommand{\eqref}[1]{Eq.~(\ref{#1})}$$

# 4.2 Errors and stability

In what follows we refer to the numerical method introduced in the previous section. This section is based on , to which the reader is referred for further details.

In order to discuss the approximation error introduced by a numerical scheme, the first step is to define what one means by error. Let us denote by $$\hat{Y}$$ the vector of exact solution values at the grid points, i.e. \begin{align} \hat{Y} = [y(x_1),y(x_2),\dots,y(x_N)]^T\, \tag{4.7} \end{align} and define the error vector $$E$$ as \begin{align} E = Y - \hat{Y}\,. \tag{4.8} \end{align}

Once the error has been defined, we concentrate on determining how large it is. We can do that by measuring the vector $$E$$ in some norm. For example, we can use the max-norm \begin{align} \|E\|_{\infty} = \max_{1 \leq i \leq N} |e_i| = \max_{1\leq i \leq N} |y_i - y(x_i)|\,, \tag{4.9} \end{align} the 1-norm \begin{align} \|E\|_{1} = h \sum^{N}_{i=1} |e_i| \tag{4.10} \end{align} or the 2-norm \begin{align} \|E\|_{2} = \left(h \sum^{N}_{i=1} |e_i|^2\right)^{1/2}\,. \tag{4.11} \end{align}
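As a quick illustration, the three grid-function norms above can be computed directly. This is a minimal sketch assuming NumPy; the error vector used here is a made-up profile for illustration, not the output of an actual solver.

```python
import numpy as np

# Grid with N interior points and spacing h = 1/(N+1).
N = 99
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)      # interior grid points x_1, ..., x_N

# Hypothetical error vector e_i = y_i - y(x_i), chosen only for illustration.
E = h**2 * np.sin(np.pi * x)

norm_inf = np.max(np.abs(E))                   # max-norm, Eq. (4.9)
norm_1   = h * np.sum(np.abs(E))               # 1-norm,   Eq. (4.10)
norm_2   = np.sqrt(h * np.sum(np.abs(E)**2))   # 2-norm,   Eq. (4.11)
```

Note the factor $$h$$ in the 1- and 2-norms: it makes the grid-function norms consistent with the corresponding integral norms as $$h \to 0$$.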

Now our goal is to obtain a bound on the magnitude of the error vector $$E$$, so that, for example, if we can show that $$\|E\|_\infty = \mathcal{O}(h^2)$$, then it follows that the error at each point of the grid must be $$\mathcal{O}(h^2)$$ as well. In order to achieve this goal, we will first define the local truncation error (LTE) of the numerical scheme and then, via stability, show that the global error can be bounded by the LTE.

## 4.2.1 Local truncation error

We have already studied the LTE for initial value problems in Section 2.12, Basic notions on numerical methods for IVPs. The LTE of the finite difference scheme (4.4) is obtained by replacing $$y_i$$ with the true solution $$y(x_i)$$. In general, the exact solution $$y(x_i)$$ will not satisfy the finite difference equation; the residual is called the LTE and denoted \begin{align} \tau_i = \frac{1}{h^2} (y(x_{i-1}) - 2 y(x_i) + y(x_{i+1})) - f(x_i) \tag{4.12} \end{align} for $$i=1,2,\dots,N$$. In practice we do not know the exact solution; however, if we assume that it is smooth, we can use Taylor series expansions, which yield \begin{align} \tau_i = (y''(x_i)+\frac{1}{12} h^2 y''''(x_i) + \mathcal{O}(h^4) ) - f(x_i)\,. \tag{4.13} \end{align} Furthermore, by using the original differential equation (4.1), the LTE becomes \begin{align} \tau_i = \frac{1}{12} h^2 y''''(x_i) + \mathcal{O}(h^4)\,. \tag{4.14} \end{align} Finally, we note that even though $$y''''$$ is in general unknown, it is some fixed function independent of $$h$$, so that $$\tau_i = \mathcal{O}(h^2)$$ as $$h \to 0$$.
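We can check the $$\mathcal{O}(h^2)$$ behaviour of the LTE numerically. The test function $$y(x)=\sin(x)$$, for which $$f = y'' = -\sin(x)$$, is an assumption made purely for illustration:

```python
import numpy as np

def lte_max(N):
    """Max-norm of the LTE (4.12) for y(x) = sin(x), f(x) = -sin(x)."""
    h = 1.0 / (N + 1)
    x = np.linspace(0.0, 1.0, N + 2)   # grid including both boundary points
    y = np.sin(x)
    f = -np.sin(x)
    # Centered second difference minus f at the interior points, Eq. (4.12).
    tau = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2 - f[1:-1]
    return np.max(np.abs(tau))

t1, t2 = lte_max(50), lte_max(100)
# Halving h should divide the LTE by roughly 4 if tau = O(h^2).
ratio = t1 / t2
```

The observed ratio is close to $$((N_2+1)/(N_1+1))^2 \approx 4$$, consistent with (4.14).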

We can also define a vector form of the LTE as \begin{align} \tau = A \hat{Y} - F\,, \tag{4.15} \end{align} and so \begin{align} A \hat{Y} = F + \tau\,. \tag{4.16} \end{align}

## 4.2.2 Global error

We obtain a relation between the LTE and the global error by subtracting (4.16) from (4.5), which gives \begin{align} A E = - \tau\,. \tag{4.17} \end{align} This is just the matrix form of the system of equations \begin{align} \frac{1}{h^2}(e_{i-1} - 2 e_i + e_{i+1}) = -\tau_i \textrm{ for } i=1,2,\dots,N\,, \tag{4.18} \end{align} with boundary conditions $$e_0=e_{N+1}=0$$, because we prescribe the solution at both ends. We can interpret (4.18) as the discretization of the ODE \begin{align} e''(x) = -\tau(x) \tag{4.19} \end{align} with boundary conditions $$e(0)=0$$ and $$e(1)=0$$. Since $$\tau(x) \approx \frac{1}{12}h^2 y''''(x)$$, integrating twice shows that the global error should be roughly \begin{align} e(x) \approx -\frac{1}{12} h^2 y''(x) + \frac{1}{12} h^2 (y''(0)+x(y''(1)-y''(0)))\,, \tag{4.20} \end{align} and hence the error should be $$\mathcal{O}(h^2)$$.

## 4.2.3 Stability

In the considerations above we have assumed that solving the difference equations yields a decent approximation to the solution of the underlying differential equation. But this is exactly what we are trying to prove, so the reasoning is circular. Let us pursue a different approach and look at the discrete system \begin{align} A^h E^h = - \tau^h\,, \tag{4.21} \end{align} where the superscript $$h$$ indicates that these quantities depend on $$h$$. Assuming that $$A^h$$ is invertible, with inverse $$(A^h)^{-1}$$, we can write \begin{align} E^h = - (A^h)^{-1} \tau^h\, \tag{4.22} \end{align} and taking norms gives \begin{align} \|E^h\| & = \|(A^h)^{-1} \tau^h\|\, \tag{4.23}\\ & \leq \|(A^h)^{-1}\| \|\tau^h\|. \tag{4.24} \end{align} We already know that $$\|\tau^h\|=\mathcal{O}(h^2)$$, and we want to verify that the same holds for $$\|E^h\|$$. Therefore, we need $$\|(A^h)^{-1}\|$$ to be bounded by some constant independent of $$h$$ as $$h \to 0$$; in other words, \begin{align} \|(A^h)^{-1}\| \leq C \tag{4.25} \end{align} for all $$h$$ sufficiently small. A method satisfying a bound of the form (4.25) is said to be stable. If that is the case, we then have \begin{align} \|E^h\| \leq C \|\tau^h\|\,, \tag{4.26} \end{align} with the consequence that $$\|E^h\|$$ goes to zero at least as fast as $$\|\tau^h\|$$.
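For the model matrix of (4.5) we can check the bound (4.25) numerically; the grid sizes below are arbitrary illustrative choices:

```python
import numpy as np

def inv_norm(N):
    """2-norm of (A^h)^{-1} for the tridiagonal matrix of (4.5)."""
    h = 1.0 / (N + 1)
    A = (np.diag(-2.0 * np.ones(N))
         + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    # ord=2 gives the spectral (2-)norm of the matrix.
    return np.linalg.norm(np.linalg.inv(A), 2)

# Norms for successively finer grids; they should stay bounded
# (for this matrix they approach 1/pi^2 ≈ 0.1013, see Section 4.2.6).
norms = [inv_norm(N) for N in (20, 40, 80)]
```

The computed norms stay near $$1/\pi^2$$ as $$h$$ decreases, which is exactly the uniform bound (4.25) that stability requires.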

## 4.2.4 Consistency

A method is said to be consistent with the differential equation and boundary conditions if \begin{align} \|\tau^h\| \to 0 \textrm{ as } h \to 0\,. \tag{4.27} \end{align}

## 4.2.5 Convergence

We are finally in a position to give a proper definition of convergence. We say that a method is convergent if $$\|E^h\| \to 0$$ as $$h \to 0$$. Combining the concepts introduced above, we can say that \begin{align} \text{consistency } + \text{ stability} \Longrightarrow \text{convergence.} \tag{4.28} \end{align}

**Lax equivalence theorem.** A consistent finite difference method for a well-posed linear BVP is convergent if and only if it is stable.

## 4.2.6 Stability - Example in 2-norm

We conclude this section with an example of how to verify the stability of a finite difference scheme for a BVP. We consider the scheme used as an example until now, i.e. (4.5). Since $$A$$ in (4.5) is symmetric, the 2-norm of $$A$$ is equal to its spectral radius \begin{align} \|A\|_2 = \rho(A) = \max_{1\leq p \leq N} |\lambda_p|\,, \tag{4.29} \end{align} where $$\lambda_p$$ refers to the $$p$$-th eigenvalue of $$A$$. We further note that $$A^{-1}$$ is also symmetric, and its eigenvalues are the reciprocals of the eigenvalues of $$A$$, so that \begin{align} \|A^{-1}\|_2 = \rho(A^{-1}) = \max_{1\leq p \leq N} |\lambda_p|^{-1} = \left(\min_{1\leq p \leq N} |\lambda_p|\right)^{-1}\,. \tag{4.30} \end{align} These considerations tell us that in order to verify the stability of the numerical scheme under investigation we just have to compute the eigenvalues of $$A$$ and show that they are bounded away from zero as $$h \to 0$$. In general this can be very tricky, since we have an infinite family of matrices $$A$$ (because $$A$$ changes with $$h$$). However, in this case the structure is so simple that we can obtain a general expression for the eigenvalues.

We consider a single value of $$h=1/(N+1)$$. Then the $$N$$ eigenvalues of $$A$$ are \begin{align} \lambda_p = \frac{2}{h^2} (\cos(p \pi h) - 1)\,, \tag{4.31} \end{align} for $$p=1,2,\dots,N$$. Moreover, the eigenvector $$y^p$$ corresponding to $$\lambda_p$$ has components $$y^p_i$$ for $$i=1,2,\dots,N$$ given by \begin{align} y^p_i = \sin(p \pi i h)\,. \tag{4.32} \end{align} You can verify this by checking that $$A y^p = \lambda_p y^p$$.
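This check is easy to carry out numerically; the values of $$N$$ and $$p$$ below are arbitrary illustrative choices:

```python
import numpy as np

# Verify that y^p with components sin(p*pi*i*h) is an eigenvector of A
# with eigenvalue (2/h^2)(cos(p*pi*h) - 1), as in (4.31)-(4.32).
N = 10
h = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

i = np.arange(1, N + 1)
p = 3                                              # any p in 1..N works
yp = np.sin(p * np.pi * i * h)                     # eigenvector, Eq. (4.32)
lam = 2.0 / h**2 * (np.cos(p * np.pi * h) - 1.0)   # eigenvalue,  Eq. (4.31)

# The residual of A y^p = lambda_p y^p should vanish up to rounding error.
residual = np.max(np.abs(A @ yp - lam * yp))
```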

It can easily be seen that the smallest eigenvalue of $$A$$, in magnitude, is \begin{align} \lambda_1 & = \frac{2}{h^2} (\cos(\pi h) - 1) \tag{4.33}\\ & = \frac{2}{h^2} \left(-\frac{1}{2} \pi^2 h^2 + \frac{1}{24} \pi^4 h^4 + \mathcal{O}(h^6)\right) \tag{4.34}\\ & = -\pi^2 + \mathcal{O}(h^2)\,. \tag{4.35} \end{align} This is clearly bounded away from zero as $$h \to 0$$, which allows us to conclude that the scheme is stable in the 2-norm.
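The expansion can be confirmed numerically: the smallest-magnitude eigenvalue approaches $$-\pi^2$$ as $$h \to 0$$, at rate $$\mathcal{O}(h^2)$$. The grid sizes below are chosen for illustration:

```python
import numpy as np

def lambda1(N):
    """Smallest-magnitude eigenvalue of A, Eq. (4.33), for h = 1/(N+1)."""
    h = 1.0 / (N + 1)
    return 2.0 / h**2 * (np.cos(np.pi * h) - 1.0)

vals = [lambda1(N) for N in (10, 100, 1000)]
# Distance from the limit -pi^2; should shrink like h^2, Eq. (4.35).
gaps = [abs(v + np.pi**2) for v in vals]
```

Since $$|\lambda_1| \to \pi^2$$, the bound $$\|(A^h)^{-1}\|_2 = 1/|\lambda_1| \to 1/\pi^2$$ follows from (4.30), which is the stability constant $$C$$ of (4.25).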