$$\newcommand{\D}{\displaystyle} \renewcommand{\eqref}[1]{Eq.~(\ref{#1})}$$

# 4.3 Tridiagonal systems of algebraic equations

When finite difference discretization methods are applied to ordinary or partial differential equations, the resulting equation systems will normally have a distinct band structure. In particular, a tridiagonal coefficient matrix will normally result when a second order ODE is solved. A tridiagonal matrix has non-zero elements on three diagonals only (hence the name): the main diagonal and one diagonal on each side of it. Such a tridiagonal system may be represented mathematically as: $$\begin{equation} \tag{4.36} \begin{array}{l} b_1 x_1 + c_1 x_2 = d_1\\ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \\ a_i x_{i-1} + b_i x_i +c_i x_{i+1} = d_i\\ \ \ \ \ \ \ \ \ \ \ \ \ \cdots \\ a_N x_{N-1} + b_N x_N = d_N\\ i = 1,2,\ldots,N \ , \quad a_1 = c_N = 0 \end{array} \end{equation}$$

or, more conveniently, in matrix form: $$\begin{equation} \tag{4.37} \begin{bmatrix} b_1 & c_1 & & & & & & & \\ a_2 & b_2 & c_2 & & & & & & \\ & \cdot & \cdot & \cdot & & & & & \\ & & \cdot & \cdot & \cdot & & & & \\ & & & \cdot & \cdot & \cdot & & & \\ & & & & & & a_{N-1} & b_{N-1} &c_{N-1} \\ & & & & & & & a_N & b_N \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \cdot \\ \cdot \\ \cdot \\ x_{N-1} \\ x_N \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \cdot \\ \cdot \\ \cdot \\ d_{N-1} \\ d_N \end{bmatrix} \end{equation}$$
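For concreteness, the full matrix in (4.37) may be assembled from the three diagonal arrays; a minimal sketch using numpy (the helper name `tridiag_to_dense` is our own, and the row-index convention of (4.37) with $a_1$ and $c_N$ unused is assumed):

```python
import numpy as np

def tridiag_to_dense(a, b, c):
    """Assemble the dense N x N matrix of Eq. (4.37): b on the main
    diagonal, a[1:] on the subdiagonal, c[:-1] on the superdiagonal."""
    return np.diag(b) + np.diag(a[1:], k=-1) + np.diag(c[:-1], k=1)

# a small 4 x 4 example; a[0] and c[-1] are placeholders and unused
A = tridiag_to_dense(np.array([0., 1, 1, 1]),
                     np.array([4., 4, 4, 4]),
                     np.array([1., 1, 1, 0]))
```

For large $N$ a dense matrix wastes memory, which is exactly why the three-array storage and the specialized solver below are preferred.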

The linear algebraic equation system (4.36) may be solved by Gauss elimination, which takes a particularly simple form for tridiagonal matrices and is often referred to as the Thomas algorithm. More details on the derivation may be found in appendix I of Numeriske beregninger (in Norwegian).

The Thomas algorithm has two steps: an elimination step and a substitution step. In the elimination step, the sub-diagonal elements are eliminated by adding appropriately scaled pairs of consecutive rows, starting at the first row at the top. On completion of the elimination step, the last unknown $$x_N$$ may then be computed as $$x_N :=\frac{d_N}{b_N}$$.

The equation for $$x_{N-1}$$, immediately above the last, contains only two unknowns, namely $$x_N$$ and $$x_{N-1}$$. Since $$x_N$$ has already been computed, we substitute its value into this equation and thereby find $$x_{N-1}$$. Moving on to the equation for $$x_{N-2}$$, there are again only two unknowns, $$x_{N-2}$$ and $$x_{N-1}$$, and with $$x_{N-1}$$ already known, $$x_{N-2}$$ may be found. This procedure may be repeated until all the unknowns of the tridiagonal equation system are known, and is called back substitution, or simply substitution.

The algorithms for the two steps are outlined below in mathematical terms:

Elimination:
\begin{align} q_j&:=\frac{a_j}{b_{j-1}} \qquad \text{for} \quad j = 2,3\cdots,N \nonumber \\ b_j& := b_j - q_j \cdot c_{j-1} \tag{4.38} \\ d_j&:= d_j - q_j \cdot d_{j-1} \nonumber \end{align}

Back substitution:
\begin{align} x_N &:=\frac{d_N}{b_N} \nonumber \\ x_j &:= \frac{d_j - c_j \cdot x_{j+1}}{b_j} \qquad \text{for} \quad j = N-1, N-2, \cdots,1 \tag{4.39} \end{align}

The Python code for (4.38) and (4.39) is implemented in the function `tdma` (see below).

```python
import numpy as np

def tdma(a, b, c, d):
    """Solution of a linear system of algebraic equations with a
    tri-diagonal matrix of coefficients using the Thomas algorithm.
    Note that b and d are modified in place.

    Args:
        a (array): lower diagonal (a[0] is not used)
        b (array): main diagonal
        c (array): upper diagonal (c[-1] is not used)
        d (array): right-hand side of the system
    Returns:
        x (array): solution array of the system
    """
    n = len(b)
    x = np.zeros(n)

    # elimination, Eq. (4.38):
    for k in range(1, n):
        q = a[k] / b[k - 1]
        b[k] = b[k] - c[k - 1] * q
        d[k] = d[k] - d[k - 1] * q

    # back substitution, Eq. (4.39):
    q = d[n - 1] / b[n - 1]
    x[n - 1] = q
    for k in range(n - 2, -1, -1):
        q = (d[k] - c[k] * q) / b[k]
        x[k] = q

    return x
```
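To see the two sweeps in action, here is a minimal self-contained check (the sweeps of `tdma` are repeated inside a local function `thomas` so the snippet runs on its own, with copies so the input arrays are not overwritten):

```python
import numpy as np

def thomas(a, b, c, d):
    """Compact restatement of Eqs. (4.38)-(4.39) on copies of b and d."""
    b, d = b.copy(), d.copy()
    n = len(b)
    for k in range(1, n):              # elimination, Eq. (4.38)
        q = a[k] / b[k - 1]
        b[k] -= q * c[k - 1]
        d[k] -= q * d[k - 1]
    x = np.zeros(n)
    x[-1] = d[-1] / b[-1]
    for k in range(n - 2, -1, -1):     # back substitution, Eq. (4.39)
        x[k] = (d[k] - c[k] * x[k + 1]) / b[k]
    return x

# the classic (2, -1) tridiagonal system with solution x = (1, 1, 1, 1)
a = np.array([0., -1, -1, -1])
b = np.array([2., 2, 2, 2])
c = np.array([-1., -1, -1, 0])
d = np.array([1., 0, 0, 1])

A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
x = thomas(a, b, c, d)
# x agrees with np.linalg.solve(A, d) to machine precision
```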


Stability of (4.38) and (4.39) is guaranteed provided the following conditions are satisfied: $$\begin{equation} \tag{4.41} \begin{array}{l} \mid b_1 \mid > \mid c_1 \mid > 0\\ \mid b_i \mid \ge \mid a_i \mid + \mid c_i \mid, \quad a_i \cdot c_i \neq 0 \ , \quad i = 2, 3, \cdots, N-1\\ \mid b_N \mid > \mid a_N\mid > 0 \end{array} \end{equation}$$

Matrices satisfying (4.41) are called diagonally dominant for obvious reasons, and strictly diagonally dominant in case $$\ge$$ may be substituted with $$>$$ in (4.41). Pivoting during Gauss elimination is not necessary for diagonally dominant matrices; the band structure is thus preserved and the algorithm becomes less CPU-intensive. See appendix I in Numeriske beregninger for a proof of (4.41).
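The conditions (4.41) are easy to check programmatically before choosing a solver; a direct transcription as a sketch (the function name `is_diagonally_dominant` is our own, assuming the row-indexed arrays of (4.37) with a[0] and c[-1] unused):

```python
import numpy as np

def is_diagonally_dominant(a, b, c):
    """Check the sufficient stability conditions of Eq. (4.41)."""
    if not (abs(b[0]) > abs(c[0]) > 0):          # |b_1| > |c_1| > 0
        return False
    if not (abs(b[-1]) > abs(a[-1]) > 0):        # |b_N| > |a_N| > 0
        return False
    interior = np.abs(b[1:-1]) >= np.abs(a[1:-1]) + np.abs(c[1:-1])
    nonzero = (a[1:-1] * c[1:-1]) != 0           # a_i * c_i != 0
    return bool(np.all(interior) and np.all(nonzero))

ok = is_diagonally_dominant(np.array([0., -1, -1, -1]),
                            np.array([2., 2, 2, 2]),
                            np.array([-1., -1, -1, 0]))   # True for this system
```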

Notice that all coefficients in each row of (4.37) have the same index. However, the linear algebraic equation system (4.36) may also be presented as: $$\begin{equation} \tag{4.42} \begin{array}{l} b_1 x_1 + c_2 x_2 = d_1\\ \ \ \ \ \ \ \ \ \ \cdots \\ a_{i-1} x_{i-1} + b_i x_i + c_{i+1} x_{i+1} = d_i\\ \ \ \ \ \ \ \ \ \ \cdots \\ a_{N-1} x_{N-1} + b_N x_N = d_N\\\\ i = 1,2,\ldots,N \end{array} \end{equation}$$

or in matrix form: $$\begin{equation} \tag{4.43} \begin{bmatrix} b_1 & c_2 & & & & & & & \\ a_1 & b_2 & c_3 & & & & & & \\ & \cdot & \cdot & \cdot & & & & & \\ & & \cdot & \cdot & \cdot & & & & \\ & & & \cdot& \cdot & \cdot & & & \\ & & & & & & a_{N-2} & b_{N-1} & c_{N} \\ & & & & & & & a_{N-1} & b_N \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \cdot \\ \cdot \\ \cdot \\ x_{N-1} \\ x_N \end{bmatrix} = \begin{bmatrix} d_1 \\ d_2 \\ \cdot \\ \cdot \\ \cdot \\ d_{N-1} \\ d_N \end{bmatrix} \end{equation}$$

We notice that in this notation the coefficients of each column in the matrix (4.43) have the same index.

The version in (4.43) may be deduced from (4.37) by subtracting $$1$$ from the $$a$$-indices and adding $$1$$ to the $$c$$-indices. Commercial codes like Matlab store tridiagonal matrices in the form given in (4.43). This version is implemented in `tripiv` below.
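The index shift between the two storage forms amounts to moving each array entry one slot; a sketch of the conversion (the helper name `row_to_column_form` is our own):

```python
import numpy as np

def row_to_column_form(a, b, c):
    """Convert the row-indexed diagonals of (4.37) into the
    column-indexed storage of (4.43): a-entries shift one slot
    toward the front, c-entries one slot toward the back."""
    a_col = np.zeros_like(a)
    c_col = np.zeros_like(c)
    a_col[:-1] = a[1:]    # a_i in (4.37) becomes a_{i-1} in (4.43)
    c_col[1:] = c[:-1]    # c_i in (4.37) becomes c_{i+1} in (4.43)
    return a_col, b.copy(), c_col

# both storages describe the same matrix:
a = np.array([0., 1, 2, 3])
b = np.array([5., 6, 7, 8])
c = np.array([9., 10, 11, 0])
a_col, b_col, c_col = row_to_column_form(a, b, c)

A_row = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
A_col = np.diag(b_col) + np.diag(a_col[:-1], -1) + np.diag(c_col[1:], 1)
```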

```python
import numpy as np

def tripiv(a, b, c, d):
    """Solution of a linear system of algebraic equations with a
    tri-diagonal matrix of coefficients using the Thomas algorithm
    with partial (row) pivoting. Note that a, b, c and d are
    modified in place.

    Args:
        a (array): lower diagonal (a[0] is not used)
        b (array): main diagonal
        c (array): upper diagonal (c[-1] is not used)
        d (array): right-hand side of the system
    Returns:
        x (array): solution array of the system
    """
    n = len(b)
    x = np.zeros(n)

    # reordering: shift the first row so that (a[k], b[k], c[k]) holds
    # the three leading coefficients of row k during the elimination;
    # the third array c stores the fill-in produced by row swaps
    a[0] = b[0]
    b[0] = c[0]
    c[0] = 0.0

    # elimination:
    l = 0
    for k in range(n):
        # search rows k..l for the largest pivot in column k
        q = a[k]
        i = k
        if l < n - 1:
            l = l + 1
        for j in range(k + 1, l + 1):
            if np.abs(a[j]) > np.abs(q):
                q = a[j]
                i = j
        if q == 0.0:
            raise ValueError("matrix is singular")
        if i != k:
            # swap rows k and i
            d[k], d[i] = d[i], d[k]
            a[k], a[i] = a[i], a[k]
            b[k], b[i] = b[i], b[k]
            c[k], c[i] = c[i], c[k]
        for i in range(k + 1, l + 1):
            # eliminate and shift row i one column to the left
            q = a[i] / a[k]
            d[i] = d[i] - q * d[k]
            a[i] = b[i] - q * b[k]
            b[i] = c[i] - q * c[k]
            c[i] = 0.0

    # back substitution
    x[n - 1] = d[n - 1] / a[n - 1]
    x[n - 2] = (d[n - 2] - b[n - 2] * x[n - 1]) / a[n - 2]
    for i in range(n - 3, -1, -1):
        x[i] = (d[i] - b[i] * x[i + 1] - c[i] * x[i + 2]) / a[i]

    return x
```
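In practice, a banded solver from a library may also be used; a sketch with SciPy's `solve_banded`, which calls LAPACK and performs partial pivoting internally (the same $(2,-1)$ test system as above is assumed):

```python
import numpy as np
from scipy.linalg import solve_banded

a = np.array([0., -1, -1, -1])   # subdiagonal (a[0] unused)
b = np.array([2., 2, 2, 2])      # main diagonal
c = np.array([-1., -1, -1, 0])   # superdiagonal (c[-1] unused)
d = np.array([1., 0, 0, 1])

# banded storage expected by solve_banded: row 0 holds the
# superdiagonal, row 1 the main diagonal, row 2 the subdiagonal
ab = np.zeros((3, len(b)))
ab[0, 1:] = c[:-1]
ab[1, :] = b
ab[2, :-1] = a[1:]

x = solve_banded((1, 1), ab, d)  # (1, 1) = one lower, one upper band
```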


The complete module is available as TRIdiagonalSolvers.py in the source repository (lrhgit/tkt4140, src/src-ch3).