History of mathematicians
In this document we give some information about the mathematicians whose
work or names are used in the course wi4010,
Numerical methods for large algebraic systems.
In order to measure distances between vectors
and matrices we have defined normed spaces. Most of the spaces used
in this course are Banach spaces
(Stefan Banach (1892-1945)).
For the vector norms different choices are made. An important class
of norms are the so-called Hölder norms
(Otto Ludwig Hölder (1859-1937)).
For the special choice p = 2 we get the Euclidean norm
(Euclid (c. 295 B.C.E.)).
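For a vector x in R^n (notation introduced here only as a sketch, since it is not used elsewhere on this page), the Hölder norms and the Euclidean special case read
\[
  \|x\|_p = \Bigl( \sum_{i=1}^{n} |x_i|^p \Bigr)^{1/p}, \quad 1 \le p < \infty,
  \qquad
  \|x\|_2 = \Bigl( \sum_{i=1}^{n} |x_i|^2 \Bigr)^{1/2} .
\]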
With respect to matrix norms we consider norms derived from
vector norms. Furthermore we mention the Frobenius norm
(Ferdinand Georg Frobenius (1849-1917)).
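For reference, a sketch of the two constructions just mentioned, with A an n x n matrix (the notation is again only illustrative): the norm induced by a vector norm and the Frobenius norm,
\[
  \|A\| = \max_{x \neq 0} \frac{\|A x\|}{\|x\|}, \qquad
  \|A\|_F = \Bigl( \sum_{i=1}^{n} \sum_{j=1}^{n} |a_{ij}|^2 \Bigr)^{1/2} .
\]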
Furthermore we use spaces where an inner product is
defined. Most of these spaces are Hilbert spaces
(David Hilbert (1862-1943)). Using the Hölder inequality we show
that the Euclidean inner product and norm satisfy the Cauchy-Schwarz inequality
(Augustin-Louis Cauchy (1789-1857) and
Karl Hermann Amandus Schwarz (1843-1921)).
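As a sketch of that argument (notation as above, introduced only for illustration): the Hölder inequality with p = q = 2 gives the Cauchy-Schwarz inequality,
\[
  \sum_{i=1}^{n} |x_i y_i| \le \|x\|_p \, \|y\|_q, \quad \frac{1}{p} + \frac{1}{q} = 1,
  \qquad\Longrightarrow\qquad
  |(x, y)| \le \|x\|_2 \, \|y\|_2 .
\]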
We start our description of direct methods for solving linear systems
with the Gauss elimination method
(Carl Friedrich Gauss (1777-1855)). Thereafter we discuss the Cholesky
method (André-Louis Cholesky (1875-1918))
for symmetric positive definite matrices. Furthermore we show that much
work and memory can be saved in the case of band matrices.
An important
example of a banded system is the discretization of the Poisson
equation (Siméon Denis Poisson (1781-1840)).
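As a brief illustration of the last two remarks (the notation is not used elsewhere on this page): the Cholesky method factors a symmetric positive definite matrix as A = L L^T with L lower triangular, and the standard five-point discretization of the Poisson equation -Δu = f on a uniform grid with mesh size h gives the banded, symmetric positive definite system
\[
  \frac{4 u_{i,j} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1}}{h^2} = f_{i,j} .
\]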
The standard iterative methods which are used are the Gauss-Jacobi
and the Gauss-Seidel method.
Carl Friedrich Gauss (1777-1855)
is a very famous mathematician who worked on both abstract and applied
mathematics.
Carl Gustav Jacobi (1804-1851)
is well known, for instance, for the Jacobian, the determinant of the
matrix of partial derivatives. He also did work on iterative
methods, leading to the Gauss-Jacobi method.
The Gauss-Seidel method carries the name of
Philipp Ludwig von Seidel (1821-1896).
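Written with the splitting A = D - E - F of the system matrix (D the diagonal, -E and -F the strictly lower and upper triangular parts; a sketch, with notation introduced here only for illustration), the two iterations read
\[
  \text{Gauss-Jacobi:}\quad x^{k+1} = D^{-1}\bigl( (E + F)\, x^{k} + b \bigr),
  \qquad
  \text{Gauss-Seidel:}\quad x^{k+1} = (D - E)^{-1}\bigl( F\, x^{k} + b \bigr).
\]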
Another iterative method is the Chebyshev method. This method is based
on orthogonal polynomials bearing the name of
Pafnuty Lvovich Chebyshev (1821-1894).
The Gauss-Jacobi and Gauss-Seidel methods use a very simple polynomial
to approximate the solution. In the Chebyshev method an optimal
polynomial is used.
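As a sketch of what "optimal" means here: the Chebyshev polynomials satisfy the three-term recurrence below and, after normalization of the leading coefficient, have the smallest maximum value on [-1, 1] among all polynomials of the same degree; this minimax property is what the Chebyshev method exploits (the variable t is illustrative).
\[
  T_0(t) = 1, \qquad T_1(t) = t, \qquad T_{k+1}(t) = 2\,t\,T_k(t) - T_{k-1}(t).
\]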
Nowadays the most popular iterative methods are Krylov subspace methods
(Aleksei Nikolaevich Krylov (1863-1945)).
For symmetric matrices the resulting method is known as the Conjugate
Gradient method, proposed by Hestenes and Stiefel.
A Krylov method is in general only useful if it is combined with a
preconditioner. The first method which appears to converge very fast
with a preconditioner is the ICCG method, proposed by Meijerink and
Van der Vorst in 1977.
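For reference, a minimal sketch of the subspace from which the iterates are taken and of the left-preconditioned system solved in a method such as ICCG; the symbols r_0 (initial residual) and M (preconditioner, for ICCG an incomplete Cholesky factorization) are introduced here only for illustration:
\[
  \mathcal{K}_k(A, r_0) = \operatorname{span}\{ r_0, A r_0, A^2 r_0, \ldots, A^{k-1} r_0 \},
  \qquad
  M^{-1} A x = M^{-1} b .
\]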
The most popular iterative methods nowadays are therefore the preconditioned
Krylov methods (Nikolai Mitrofanovich Krylov (1879-1955)).
For symmetric matrices the resulting method is known as the conjugate
gradient method. For nonsymmetric matrices different generalizations
exist, for instance Bi-CG, CGS (Sonneveld), Bi-CGSTAB (Van der Vorst),
GCR (Elman), GMRES (Saad), GMRESR (Van der Vorst and Vuik), etc.
In the GCR, GMRES and GMRESR methods an orthogonal basis for the
Krylov subspace is used. This basis is constructed from an
arbitrary basis using the Gram-Schmidt method
(Jørgen Pedersen Gram (1850-1916) and
Erhard Schmidt (1876-1959)).
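Written out, one classical Gram-Schmidt step looks as follows (a sketch; v_1, ..., v_k is the arbitrary basis and q_1, ..., q_k the orthonormal one, notation introduced here only for illustration):
\[
  \tilde{q}_k = v_k - \sum_{j=1}^{k-1} (q_j, v_k)\, q_j ,
  \qquad
  q_k = \frac{\tilde{q}_k}{\|\tilde{q}_k\|_2} .
\]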
For symmetric matrices, the Lanczos method
(Cornelius Lanczos (1893-1974)) is used to approximate the
matrix by a small tridiagonal matrix. The eigenvalues of this matrix
are called the Ritz values. For a non-symmetric matrix the Arnoldi
method is used to transform the matrix to a small Hessenberg matrix.
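In formula form (a sketch; V_k denotes the matrix whose columns are the orthonormal Krylov basis vectors and e_k the k-th unit vector, symbols not used elsewhere on this page), the Lanczos and Arnoldi relations read
\[
  A V_k = V_k T_k + \beta_{k+1} v_{k+1} e_k^T \qquad (\text{Lanczos, } T_k \text{ tridiagonal}),
\]
\[
  A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T \qquad (\text{Arnoldi, } H_k \text{ upper Hessenberg}),
\]
and the eigenvalues of the small matrix T_k are the Ritz values mentioned above.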
Domain decomposition methods are very suitable for parallel computing.
The Schwarz method
(Hermann Amandus Schwarz (1843-1921))
is a classical domain decomposition method.
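As a sketch, the classical alternating Schwarz iteration for two overlapping subdomains Ω_1 and Ω_2, written here for the Poisson equation purely as an illustration (on the remaining parts of the subdomain boundaries the original boundary conditions are kept):
\[
  -\Delta u_1^{k+1} = f \ \text{in } \Omega_1, \quad u_1^{k+1} = u_2^{k} \ \text{on } \partial\Omega_1 \cap \Omega_2;
  \qquad
  -\Delta u_2^{k+1} = f \ \text{in } \Omega_2, \quad u_2^{k+1} = u_1^{k+1} \ \text{on } \partial\Omega_2 \cap \Omega_1 .
\]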