Themes and Motivation
The innermost computational kernel of many
large-scale scientific applications and industrial numerical
simulations is often a large sparse matrix problem, which typically
consumes a significant portion of the overall computational time
required by the simulation. Many of the matrix problems are in
the form of systems of linear equations, although other matrix
problems, such as eigenvalue calculations, can occur too. A traditional
approach to solving large sparse matrix equations is to use direct
methods. This approach is often preferred in industry because
direct solvers are robust and effective for problems of moderate
size. However, the unprecedented pace of technological advance
has led to a dramatic growth in the size of the matrices to
be handled. For example, the storage and fill-in incurred by direct
factorizations of matrices from three-dimensional simulations make
direct methods prohibitively expensive. Iterative techniques are then
the only viable alternative.
Unfortunately, iterative methods lack the
robustness of direct methods. They often fail when the matrix is
very ill-conditioned. While a "bullet-proof" iterative method may
not exist, effective and robust sparse matrix iterative solvers are
becoming a vital part of large-scale scientific and industrial
applications. In the past decade or so, considerable effort has been
devoted to exploring more "powerful" iterative solvers. The
performance of these methods is ultimately governed by the condition
number of the coefficient matrix of the system. Many of the large
linear systems arising in industry still challenge most of the linear
equation solvers available.
Constructing a
preconditioner to improve the condition number of a matrix was proposed
a few decades ago. These techniques did not have much impact
initially due to the simplicity of their heuristics and the relatively
small size of the matrices to be solved. However, accumulating
computational experience indicates that a good preconditioner holds the
key to an effective iterative solver. The strong impact of these
simple techniques on the performance of an iterative method has
attracted increased attention in recent years. Parallel
computers have also generated many new research topics in the study of
preconditioning, and many promising new techniques have been
reported.
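The effect a simple preconditioner can have on conditioning can be seen in a small synthetic sketch. The matrix below is an illustrative assumption, not drawn from any application; it uses Jacobi (diagonal) preconditioning, one of the simplest heuristics alluded to above:

```python
import numpy as np

# Illustrative, badly scaled symmetric positive definite matrix:
# diagonal entries span six orders of magnitude, plus a small
# symmetric perturbation (chosen so the matrix stays SPD).
rng = np.random.default_rng(0)
n = 50
scales = np.logspace(0, 6, n)
S = rng.standard_normal((n, n))
A = np.diag(scales) + 1e-2 * (S + S.T)

# Jacobi preconditioner M = diag(A), applied symmetrically as
# M^{-1/2} A M^{-1/2} to preserve symmetry.
m = 1.0 / np.sqrt(np.diag(A))
A_prec = A * np.outer(m, m)   # entrywise: m_i * A_ij * m_j

print(f"cond(A)               = {np.linalg.cond(A):.2e}")
print(f"cond(M^-1/2 A M^-1/2) = {np.linalg.cond(A_prec):.2e}")
```

For a matrix whose ill-conditioning comes purely from bad scaling, as here, diagonal preconditioning alone reduces the condition number by several orders of magnitude; matrices arising in practice usually require the more sophisticated techniques discussed at the conference.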
However, the
theoretical basis for high performance preconditioners is still not
well understood; many existing techniques still suffer from lack of
robustness. Promising ideas still need to be tested in real
applications.
Of course, the
issue of preconditioning does not arise only in the solution of linear
equations. For example, preconditioning techniques are equally
important in the use of the Jacobi-Davidson method for solving
eigenvalue problems.
This is the
motivation for holding this conference specifically dedicated to the
issues in preconditioning for large-scale scientific and industrial
applications. The conference will bring together researchers and
application scientists in this field to discuss the latest
developments, report progress, exchange findings, and explore
possible new directions.
This year, the conference will highlight two important themes
related to the full exploitation of petaflop-scale
computers for solving large sparse matrix problems. The
first of these themes is "multi-core programming"; the second is "graph and mesh partitioning".