# Preconditioner for linear solvers (markdown test)

When solving a system of equations

$$\boldsymbol{A} \boldsymbol{x} = \boldsymbol{b},$$

iteratively (using, e.g., the conjugate gradient method), the convergence speed is usually determined by the condition number $\kappa(\boldsymbol{A})$, where

$$\kappa(\boldsymbol{A}) := \frac{\lambda_{\max}(\boldsymbol{A})}{\lambda_{\min}(\boldsymbol{A})}.$$

Thus, if $\kappa(\boldsymbol{A}) \propto \tfrac{1}{h^2}$, which happens for example when solving a Poisson problem using a finite element approximation with mesh size $h$, iterative solvers have a very hard time solving the system efficiently as $h \to 0$.
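To make this scaling concrete, here is a minimal numerical check (assuming NumPy is available; the 1D model problem and the names `poisson_1d_stiffness`, `lam`, `kappa` are my illustration, not from the text above). It assembles the classical tridiagonal 1D Poisson stiffness matrix for a few mesh sizes and verifies that $\kappa(\boldsymbol{A})$ grows like $1/h^2$:

```python
import numpy as np

def poisson_1d_stiffness(n):
    """1D Poisson (Dirichlet) FEM stiffness matrix on n interior nodes, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    return A, h

for n in (15, 31, 63, 127):
    A, h = poisson_1d_stiffness(n)
    lam = np.linalg.eigvalsh(A)               # eigenvalues of the symmetric matrix A
    kappa = lam[-1] / lam[0]                   # condition number lambda_max / lambda_min
    print(f"h = {h:.4f}   kappa(A) = {kappa:10.1f}   kappa(A) * h^2 = {kappa * h**2:.3f}")
```

The last column should settle near a constant (about $4/\pi^2$ for this model problem), which is exactly the $\kappa(\boldsymbol{A}) \propto 1/h^2$ growth described above.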

## Preconditioners

In order to achieve robustness w.r.t. $h$, one strategy is to apply preconditioning to the linear system. This means that instead of solving $\boldsymbol{A} \boldsymbol{x} = \boldsymbol{b}$, we now solve

$$\boldsymbol{M}^{-1} \boldsymbol{A} \boldsymbol{x} = \boldsymbol{M}^{-1} \boldsymbol{b},$$

where $\boldsymbol{M}$ is a positive-definite matrix. This system has the same solution, but we choose $\boldsymbol{M}$ in such a way that the preconditioned system matrix $\boldsymbol{M}^{-1} \boldsymbol{A}$ has a much smaller condition number than $\boldsymbol{A}$. Also, applying $\boldsymbol{M}^{-1}$ should be easy to compute.
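As a concrete illustration, here is a minimal sketch of the preconditioned conjugate gradient iteration (assuming NumPy; `pcg` and `apply_Minv` are illustrative names, not from the text above). Note that only the action of $\boldsymbol{M}^{-1}$ on a vector is needed; the matrix $\boldsymbol{M}^{-1} \boldsymbol{A}$ is never formed explicitly:

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-8, maxiter=1000):
    """Preconditioned CG for a symmetric positive-definite A.
    apply_Minv(r) must return M^{-1} r for an SPD preconditioner M."""
    x = np.zeros_like(b)
    r = b - A @ x                        # initial residual
    z = apply_Minv(r)                    # preconditioned residual
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1              # converged after k+1 iterations
        z = apply_Minv(r)
        rz_new = r @ z
        beta = rz_new / rz               # update of the search direction
        p = z + beta * p
        rz = rz_new
    return x, maxiter
```

With `apply_Minv = lambda r: r` this reduces to plain CG; a Jacobi preconditioner, for example, would be `apply_Minv = lambda r: r / A.diagonal()`.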

Preconditioning is very important in practice. Wikipedia, e.g., states for the CG method:

> In most cases, preconditioning is necessary to ensure fast convergence of the conjugate gradient method.

In the best case, one can show that $\kappa(\boldsymbol{M}^{-1} \boldsymbol{A})$ is uniformly bounded in $h$, i.e.

κ(M1A)C, \kappa(\boldsymbol{M}^{-1} \boldsymbol{A}) \le \mathsf{C} ,

where the constant $\mathsf{C}$ is independent of $h$. In this case, we would have achieved full robustness w.r.t. $h$.
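One family of preconditioners for which such a uniform bound is known for Poisson-type problems is multigrid. The following sketch (assuming the `pyamg` package and SciPy are installed; the specific setup and the helper `cg_iterations` are my illustration, not part of the text above) compares CG iteration counts with and without an algebraic multigrid preconditioner as the mesh is refined:

```python
import numpy as np
import pyamg                                   # assumed available: pip install pyamg
from pyamg.gallery import poisson
from scipy.sparse.linalg import cg

def cg_iterations(A, b, M=None):
    """Run SciPy's CG and count its iterations via the callback."""
    count = 0
    def cb(xk):
        nonlocal count
        count += 1
    cg(A, b, M=M, callback=cb)
    return count

for n in (32, 64, 128):                        # grid refinement, h ~ 1/n
    A = poisson((n, n), format='csr')          # 2D Poisson, 5-point stencil
    b = np.ones(A.shape[0])

    ml = pyamg.smoothed_aggregation_solver(A)  # build the AMG hierarchy
    M = ml.aspreconditioner(cycle='V')         # one V-cycle acts as M^{-1}

    print(f"n = {n:4d}:  CG iters = {cg_iterations(A, b):4d},  "
          f"AMG-preconditioned CG iters = {cg_iterations(A, b, M):3d}")
```

Plain CG iteration counts grow roughly like $1/h$ here, while the AMG-preconditioned counts stay (nearly) constant under refinement, which is the practical signature of the uniform bound above.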

2023-01-06