CHEBYSHEV

Usage:

-ksp_type chebyshev

Options:

-ksp_chebyshev_eigenvalues <emin,emax>

Set approximations to the smallest and largest eigenvalues of the preconditioned operator. If these are accurate, you will get much faster convergence.

Notes: The Chebyshev method requires both the matrix and the preconditioner to be symmetric positive (semi)definite.

Benefits: The Chebyshev iteration recursively determines polynomials with coefficients chosen to minimize the norm of the residual in a min-max sense. The coefficient matrix must be positive definite and knowledge of the extremal eigenvalues is required. This method has the advantage of requiring no inner products.

For solving nonsymmetric problems

Chebyshev iteration avoids the computation of inner products as is necessary for the other nonstationary methods. The price one pays for avoiding inner products is that the method requires enough knowledge about the spectrum of the coefficient matrix A that an ellipse enveloping the spectrum can be identified. Chebyshev iteration is suitable for any nonsymmetric linear system for which the enveloping ellipse does not include the origin.

Chebyshev iteration is similar to the conjugate gradient method except that no inner products are computed.
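The inner-product-free recurrence can be illustrated with a minimal NumPy sketch (this is an illustrative implementation of the classical algorithm, not PETSc's code; the function name and the use of exact eigenvalue bounds are choices made for the demo):

```python
import numpy as np

def chebyshev(A, b, emin, emax, iters=60):
    """Chebyshev iteration for SPD A, given eigenvalue bounds [emin, emax].

    The loop uses only matrix-vector products and vector updates;
    no inner products are computed.
    """
    theta = 0.5 * (emax + emin)   # center of the spectral interval
    delta = 0.5 * (emax - emin)   # half-width of the spectral interval
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    sigma = theta / delta
    rho = 1.0 / sigma
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
b = np.array([1.0, 2.0])
emin, emax = np.linalg.eigvalsh(A)       # exact bounds, for the demo only
x = chebyshev(A, b, emin, emax)
```

In practice the bounds emin and emax are estimates; if the true spectrum lies outside the supplied interval, the iteration can diverge, which is why accurate eigenvalue approximations matter.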

The Chebyshev method has the advantage over the generalized minimal residual method (GMRES) that only short recurrences are used. On the other hand, GMRES is guaranteed to generate the smallest residual over the current search space. The biconjugate gradient method (BCG), which also uses short recurrences, does not minimize the residual in a suitable norm. However, unlike Chebyshev iteration, GMRES and BCG do not require estimation of parameters (the spectrum of A). Finally, GMRES and BCG may be more effective in practice because of superlinear convergence behavior, which cannot be expected for Chebyshev iteration.

For symmetric positive definite systems the ellipse enveloping the spectrum degenerates to the interval [lambda_min, lambda_max] on the positive x-axis, where lambda_min and lambda_max are the smallest and largest eigenvalues of M^(-1)A and M is a preconditioner. In circumstances where the computation of inner products is a bottleneck, it may be advantageous to start with the conjugate gradient method, compute estimates of the extremal eigenvalues from the CG coefficients, and then, after sufficient convergence of these approximations, switch to Chebyshev iteration. A similar strategy may be adopted for a switch from GMRES or BCG to Chebyshev iteration.
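The first phase of this strategy can be sketched as follows. This is a hedged illustration (assuming no preconditioner, i.e. M = I; the function name and test matrix are invented for the demo): the CG coefficients alpha and beta define the Lanczos tridiagonal matrix T, whose eigenvalues (Ritz values) approximate the extremal eigenvalues of A and could then be passed to Chebyshev iteration.

```python
import numpy as np

def cg_ritz(A, b, m):
    """Run m CG steps and return Ritz estimates of the extremal
    eigenvalues of A, recovered from the CG coefficients."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    alphas, betas = [], []
    for _ in range(m):
        Ap = A @ p
        rr = r @ r
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr
        alphas.append(alpha)
        betas.append(beta)
        p = r + beta * p
    # Assemble the Lanczos tridiagonal matrix from the CG coefficients.
    T = np.zeros((m, m))
    T[0, 0] = 1.0 / alphas[0]
    for j in range(1, m):
        T[j, j] = 1.0 / alphas[j] + betas[j - 1] / alphas[j - 1]
        off = np.sqrt(betas[j - 1]) / alphas[j - 1]
        T[j, j - 1] = T[j - 1, j] = off
    ritz = np.linalg.eigvalsh(T)
    return ritz[0], ritz[-1]   # estimates of lambda_min, lambda_max

A = np.diag(np.arange(1.0, 21.0))   # SPD demo matrix, spectrum 1..20
b = np.ones(20)
lo, hi = cg_ritz(A, b, 10)          # estimates after 10 CG steps
```

In exact arithmetic the Ritz values lie inside the true spectral interval and converge fastest to the extremal eigenvalues, which is exactly what Chebyshev iteration needs.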
