Abstracts
Invited speakers
Solving Ill-Posed Cauchy Problems using Rational Krylov Methods
Lars Eldén (Linköping, Sweden)
Let Ω be a connected domain in R^2 with smooth boundary ∂Ω, and assume that L is a linear, self-adjoint, and positive definite elliptic operator defined in Ω. We will consider the ill-posed Cauchy problem

u_zz = L u in Ω x (0, z1),
u = 0 on ∂Ω x (0, z1),
u(x,y,0) = g(x,y), u_z(x,y,0) = 0, (x,y) in Ω.

The problem is to determine the values of u on the upper boundary, f(x,y) = u(x,y,z1), (x,y) in Ω.
This is an ill-posed problem in the sense that the solution (if it exists), does not depend continuously on the data.
Since the domain is cylindrical with respect to z, we can use a separation of variables approach and write the solution formally as u(x,y,z) = cosh(L^{1/2} z)g. Since L is unbounded, so is the operator L^{1/2}, and in the computation of cosh(L^{1/2} z)g any data errors or rounding errors would be blown up, leading to a meaningless approximation of the solution.
We will describe a method for approximating u(x,y,z) = cosh(L^{1/2} z)g based on projection onto a subspace generated by a rational Krylov sequence involving the operator L^{-1}, perhaps shifted. In each step of the Krylov method we need to solve a standard, well-posed two-dimensional elliptic problem involving L. This means that a standard (black box) 2D elliptic solver can be used. The exponential function of the restriction of the operator L onto a low-dimensional subspace will be computed.
The convergence of the approximation of the exponential function will be discussed, and a criterion for terminating the Krylov sequence will be given. Numerical experiments show that the Krylov space method requires far less work (2D elliptic solves) than the method based on computing the eigenvalue expansion of L.
This is joint work with Valeria Simoncini, University of Bologna.
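The structure of the computation can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: a small dense symmetric positive definite matrix stands in for the discretized operator L, each solve with the shifted matrix L + shift*I plays the role of one well-posed 2D elliptic solve, and cosh(L^{1/2} z)g is evaluated on the low-dimensional projected operator.

```python
import numpy as np

def cosh_sqrt_apply(L, g, z):
    # reference evaluation of cosh(L^{1/2} z) g via eigendecomposition;
    # feasible only for small symmetric positive definite L
    w, V = np.linalg.eigh(L)
    return V @ (np.cosh(np.sqrt(w) * z) * (V.T @ g))

def rational_krylov_cosh(L, g, z, k, shift=1.0):
    # approximate cosh(L^{1/2} z) g from the rational Krylov space
    # span{g, (L + shift*I)^{-1} g, (L + shift*I)^{-2} g, ...}
    n = g.size
    M = L + shift * np.eye(n)                 # shifted operator
    V = np.zeros((n, k))
    V[:, 0] = g / np.linalg.norm(g)
    for j in range(1, k):
        w = np.linalg.solve(M, V[:, j - 1])   # the well-posed "elliptic solve"
        for _ in range(2):                    # Gram-Schmidt, two passes
            w -= V[:, :j] @ (V[:, :j].T @ w)
        V[:, j] = w / np.linalg.norm(w)
    Lk = V.T @ L @ V                          # restriction of L to the subspace
    return V @ cosh_sqrt_apply(Lk, V.T @ g, z)
```

With k much smaller than n the cost is dominated by the k-1 shifted solves, mirroring the count of 2D elliptic solves discussed in the abstract.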
Blurred and corrupted images arise in science and engineering, as well as in everyday life. This talk focuses (pun intended) on image deblurring, which is concerned with the estimation of a sharp image from a blurred and noisy one, and which leads to large-scale regularization problems. The emphasis in the talk is on algorithmic and computational aspects, including some general aspects of computational regularization methods.
Trust Regions in Large-Scale Optimization and Regularization 
Marielba Rojas (Lyngby, Denmark)
Trust regions are used in optimization for transforming locally-convergent methods into globally-convergent ones [1,4]. We describe the trust-region globalization strategy with focus on its main computation, namely, the solution of the so-called Trust-Region Subproblem (TRS):
min { (1/2) x^T H x + g^T x : ||x|| <= Δ },     (1)

where H is an n x n real, symmetric matrix, g is an n-dimensional, real, non-zero vector, and Δ > 0.
We describe the TRS, its properties, and its connection with the regularization of linear and nonlinear ill-posed problems.
We discuss existing methods for the large-scale TRS and their features. We
use one of these methods (LSTRS [6,7]) to introduce the secular equation associated with problem (1) and to illustrate the issues involved in the development
of methods for the large-scale TRS, especially those intended for regularization.
We discuss the performance of several state-of-the-art techniques [2,3,5,6]
on TRS's from optimization and regularization. Finally, we present examples of
trust-region regularization applied to large problems from different applications.
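For orientation, the secular-equation viewpoint can be illustrated on a small dense instance of (1). The sketch below is a toy, not LSTRS: it uses a full eigendecomposition of H and bisection on the secular equation ||x(mu)|| = Δ, where (H + mu*I) x(mu) = -g, and it does not handle the so-called hard case.

```python
import numpy as np

def trust_region_subproblem(H, g, delta):
    # dense toy TRS solver: min 0.5*x'Hx + g'x  s.t.  ||x|| <= delta.
    # bisection on the secular equation ||x(mu)|| = delta with
    # (H + mu*I) x(mu) = -g; the "hard case" (g orthogonal to the
    # eigenspace of the smallest eigenvalue) is not handled
    w, V = np.linalg.eigh(H)
    gt = V.T @ g
    xnorm = lambda mu: np.linalg.norm(gt / (w + mu))
    if w[0] > 0 and xnorm(0.0) <= delta:      # unconstrained minimizer is interior
        return V @ (-gt / w)
    lo = max(0.0, -w[0]) + 1e-14              # keep H + mu*I positive definite
    hi = max(lo, 1.0)
    while xnorm(hi) > delta:                  # bracket the root
        hi *= 2.0
    for _ in range(200):                      # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xnorm(mid) > delta else (lo, mid)
    return V @ (-gt / (w + 0.5 * (lo + hi)))
```

Methods for the large-scale TRS avoid the eigendecomposition and work matrix-free; this sketch only shows what they compute.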
References
- A. R. CONN, N. I. M. GOULD, AND P. L. TOINT. Trust-Region Methods, SIAM, Philadelphia, 2000.
- N. I. M. GOULD, S. LUCIDI, M. ROMA, AND P. L. TOINT, Solving the trust-region subproblem using the Lanczos method, SIAM J. Optim., 9(2):504–525, 1999.
- W. W. HAGER, Minimizing a quadratic over a sphere, SIAM J. Optim., 12:188–208, 2001.
- J. NOCEDAL AND S. J. WRIGHT, Numerical Optimization, Springer-Verlag, New York, 2nd ed., 2006.
- F. RENDL AND H. WOLKOWICZ, A Semidefinite Framework for Trust Region Subproblems with Applications to Large Scale Minimization, Math. Prog., 77(2):273–299, 1997.
- M. ROJAS, S. A. SANTOS, AND D. C. SORENSEN, A New Matrix-Free Algorithm for the Large-Scale Trust-Region Subproblem, SIAM J. Optim., 11(3):611–646, 2000.
- M. ROJAS, S. A. SANTOS, AND D. C. SORENSEN, Algorithm 873: LSTRS: MATLAB Software for Large-Scale Trust-Region Subproblems and Regularization, ACM Trans. Math. Software, 34(2):11, 2008.
Cascadic Multilevel Methods for Large-Scale Ill-Posed problems 
Fiorella Sgallari(Bologna, Italy)
Multilevel methods are popular for the solution of well-posed problems, such as certain boundary value problems for partial differential equations and Fredholm integral equations of the second kind. However, little is known about the behavior of multilevel methods when applied to the solution of linear ill-posed problems, such as Fredholm integral equations of the first kind with a right-hand side that is contaminated by error. Cascadic multilevel methods are particularly efficient schemes based on truncated iteration that blend linear algebra and partial differential equation techniques. We discuss their properties and their application to image deblurring and denoising, which are large-scale problems because of the typically large number of pixels that make up an image.
Joint work with Serena Morigi, Lothar Reichel, Andriy Shyshkow.
Speakers
Using non-Galerkin coarse grid operators in multigrid methods:
General considerations and case studies for circulant matrices 
Matthias Bolten (Jülich, Germany)
The use of the Galerkin operator in multigrid methods is known at least since the work of Nicolaides [4]. It has been further analyzed by Hemker in [2] and is the usual choice in algebraic multigrid methods, as it induces some useful properties of the range and the nullspace of the coarse grid correction operator T, the A-orthogonal projector defined by

T = I - P (P^T A P)^{-1} P^T A,

where P denotes the prolongation, namely range(T) = range(P)^{⊥_A} and null(T) = range(P). This implies that range(T) is A-orthogonal to range(P), a fact that was used in the work of Mandel [3] and by Ruge and Stüben [5].
Using the Galerkin operator generally induces a growing operator complexity, i.e., the number of non-zero entries per unknown in the coefficient matrix grows. While this might not be a problem in all cases, it can slow down the overall method significantly. The experience with pure geometric multigrid methods, where the coarse grid operator is not formed algebraically but by rediscretization of the domain, suggests that this is not necessary. Therefore stencil collapsing techniques have been introduced, cf., e.g., the paper of Ashby and Falgout [1]. These techniques are not based on algebraic properties of the underlying system but rather on properties of the continuous problem.
Motivated by this observation we analyzed coarse grid operators that replace the Galerkin operator. As the modified coarse grid correction is no longer a projector, the convergence results for V-cycle multigrid methods do not carry over to that case. We derived algebraic sufficient conditions for these operators in terms of the Galerkin operator, that allow for a similar result and thus are a building block for optimally convergent V-cycle multigrid methods.
Based on these results we defined conditions which imply the sufficient conditions needed for these results to hold, as well as special collapsing techniques for certain banded circulant matrices. This results in a multigrid method that is an extension of the multigrid method for circulant matrices by Serra Capizzano and Tablino-Possio from [6].
In the talk the theoretical result as well as some numerical results will be presented.
References
- S. F. ASHBY AND R. D. FALGOUT, A parallel multigrid preconditioned conjugate gradient algorithm for groundwater flow simulations, Nucl. Sci. Eng., 124:145–159, 1996.
- P. W. HEMKER, A note on defect correction processes with an approximate inverse of deficient rank, J. Comp. Appl. Math., 8(2):137–139, 1982.
- J. MANDEL, Algebraic study of multigrid methods, Appl. Math. Comput., 25:39–56, 1988.
- R. A. NICOLAIDES, On the l2 convergence of an algorithm for solving finite element equations, Math. Comp., 31(140):892–906, October 1977.
- J. W. RUGE AND K. STÜBEN, Algebraic multigrid, in S. F. McCormick, editor, Multigrid Methods, volume 3 of Frontiers Appl. Math., pages 73–130. SIAM, Philadelphia, 1987.
- S. SERRA CAPIZZANO AND C. TABLINO-POSSIO, Multigrid methods for multilevel circulant matrices, SIAM J. Sci. Comput., 26(1):55–85, 2004.
Structured eigenvalue problems
and passivity of descriptor systems
Tobias Brüll (Berlin, Germany)
In this talk we will investigate connections between the dissipativity of linear descriptor systems and an even eigenvalue problem (in the continuous-time case) or a palindromic eigenvalue problem (in the discrete-time case). The matrices of the even or palindromic eigenvalue problem are well-known from the linear-quadratic optimal-control problem, which is closely related to the problem of dissipativity.
Subspace Regularization for Large Linear Discrete Ill-Posed Problems
Angelika Bunse-Gerstner (Bremen, Germany)
For linear discrete ill-posed problems, we consider a splitting of the solution space into two subspaces, one of which contains the high-frequency components of the solution. The splitting is used in a modified version of the two-level preconditioned LSQR method: on the high-frequency subspace we compute a solution with a regularization method, and this partial solution is then used to compute the complete solution in the two-level method.
If we consider Tikhonov regularization for the high-frequency part of the problem, this approach amounts to a specific preconditioned two-level method for solving the subspace regularization problem, proposed before by several authors for situations in which prior information on the solution is available.
Our approach can be advantageous if the solution of the original problem is nonsmooth. We could then, for instance, obtain the splitting from approximations of the eigenvectors computed by a restarted block Lanczos method or similar methods.
This is joint work with Valia Guerra, Instituto de Cibernetica, Matematica y Fisica, Havana, Cuba.
On stabilized discretization and GMRES convergence for a convection-diffusion
model problem 
Jurjen Duintjer Tebbens (Prague, Czech Republic)
We consider a convection-diffusion model problem, discretized with SUPG stabilization to suppress nonphysical oscillations, which has been studied before by many authors, among others Fischer, Ramage, Silvester, Wathen, Ernst, Eiermann, Liesen and Strakoš. In the 1999 paper by Fischer, Ramage, Silvester and Wathen it was observed that optimal SUPG stabilization yields linear systems that are solved with GMRES more rapidly than systems arising from weaker or stronger stabilization. In this talk we present a GMRES convergence analysis that explains this phenomenon.
This is joint work with Jörg Liesen, TU Berlin, and Zdeněk Strakoš, Academy of Sciences of the Czech Republic. This work is supported by project numbers KJB100300703 and 1ET400300415 of the Grant Agency of the Academy of Sciences of the Czech Republic.
Rational Krylov Methods for the Hamiltonian Eigenvalue Problem 
Cedric Effenberger (Chemnitz, Germany)
The Rational Krylov Method by Ruhe is an extension of shift-and-invert Krylov subspace methods for general matrix eigenproblems which allows changing the shift on the fly without the need for restarting the process. However, its applicability in the case of Hamiltonian matrices is limited by the fact that standard shifting techniques destroy their special structure. The talk deals with modifications of the Rational Krylov Algorithm that are specifically tailored to the Hamiltonian spectral symmetry, so that it is preserved throughout the numerical solution.
This is joint work with Peter Benner, TU Chemnitz, Germany.
Overlapping Partitioning and
Applications to Preconditioning
David Fritzsche (Wuppertal, Germany)
Additive and multiplicative Schwarz preconditioners are known to be an efficient class of preconditioners for problems involving a geometric domain. They are based on decomposing the domain into a set of overlapping subdomains, in such a way that the restrictions of the original problem to the parallel subdomains are easily solvable. Algebraic Schwarz preconditioners work without an underlying geometric domain by restricting the linear operator to overlapping subsets of the coordinates. A good choice of these overlapping subsets is essential for the usefulness of the preconditioner. The graph formulation of the problem is to find an overlapping partitioning of the nodes of the graph of the linear operator.
The algorithm presented for finding overlapping partitionings starts from a non-overlapping partitioning. For this task XPABLO (eXtended PArametrized Block Ordering) is used. Note that the non-overlapping partitionings found by XPABLO can be used to construct block Jacobi and block Gauss-Seidel preconditioners. Each subgraph in this partitioning is then grown to include nodes overlapping with other subgraphs. The algorithm uses structural and numerical properties of the underlying matrix to find the nodes to be added to each subgraph.
Numerical experiments show how these algebraic Schwarz preconditioners perform and how their performance compares to and improves on the performance of XPABLO-based block diagonal and block triangular preconditioners.
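In matrix terms, the algebraic additive Schwarz preconditioner built from overlapping index sets can be sketched as follows. This is a minimal dense illustration; the block selection by XPABLO and the multiplicative variant are not reproduced.

```python
import numpy as np

def additive_schwarz(A, blocks):
    # algebraic additive Schwarz preconditioner from overlapping index sets:
    #   M^{-1} v = sum_i R_i^T (R_i A R_i^T)^{-1} R_i v,
    # where R_i restricts a vector to the indices in blocks[i]
    factors = [(b, np.linalg.inv(A[np.ix_(b, b)])) for b in blocks]
    def apply_inverse(v):
        z = np.zeros_like(v, dtype=float)
        for b, Ainv in factors:
            z[b] += Ainv @ v[b]          # scatter local solve back
        return z
    return apply_inverse
```

If every index is covered by at least one block, the resulting operator is symmetric positive definite for symmetric positive definite A, so it can be used inside CG or GMRES.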
IDR explained 
Martin H. Gutknecht (Zurich, Switzerland)
Under the section heading "A preconditioned Lanczos type method", the Induced Dimension Reduction (IDR) method was introduced on three and a half pages of a 1980 proceedings paper [5] by Wesseling and Sonneveld. Few people may have noticed it. Soon after IDR, Sonneveld must have found his widely applied Conjugate Gradient Squared (CGS) algorithm, published in SISSC in 1989 [2], but submitted to that journal already on April 24, 1984. Then, at the Householder Symposium 1990 in Tylösand, Van der Vorst suggested with CGSTAB an improvement of both these methods. It was published as the Bi-CGSTAB method in SISSC in 1992 [4].
Bi-CGSTAB has become one of the methods of choice for nonsymmetric linear systems, and it has been generalized in various ways in the hope of further improving its reliability and speed. Among these generalizations there is the ML(k)BiCGSTAB method of Yeung and Chan, published in SISC in 1999 [6], which in the framework of block Lanczos methods can be understood as a variation of Bi-CGSTAB with right-hand side block size 1 and left-hand side block size k. Probably only few readers studied this paper completely, since the algorithm turned out to be rather complicated.
Last year Sonneveld and van Gijzen [3] reconsidered IDR and generalized it to IDR(s), claiming that IDR is equally fast as but preferable to Bi-CGSTAB, and that IDR(s) may be much faster than IDR = IDR(1). It turned out that IDR(s) is closely related to ML(s)BiCGSTAB, but at no step mathematically equivalent. In fact, the recurrences of IDR and IDR(s) are simpler than those of Bi-CGSTAB and ML(s)BiCGSTAB, respectively, and they offer some flexibility that can be capitalized upon for increasing the stability. A similar flexibility is also available in the block Lanczos process but has not been made use of in ML(k)BiCGSTAB.
In this talk we first try to explain the basic, seemingly quite general IDR approach, which differs completely from traditional approaches to Krylov space methods. Then we compare the basic properties of the above mentioned methods and discuss some of their connections. Most of what we say is taken from [3] and [1], but we also provide some further explanations.
References
- G. SLEIJPEN, P. SONNEVELD, AND M. B. VAN GIJZEN, Bi-CGSTAB as an induced dimension reduction method, Report 08-07, Department of Applied Mathematical Analysis, Delft University of Technology.
- P. SONNEVELD, CGS, a fast Lanczos-type solver for nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 10 (1989), pp. 36–52. Received Apr. 24, 1984.
- P. SONNEVELD AND M. B. VAN GIJZEN, IDR(s): a family of simple and fast algorithms for solving large nonsymmetric systems of linear equations, SIAM J. Sci. Comput. Submitted Mar. 20, 2007.
- H. A. VAN DER VORST, Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 631–644. Received May 21, 1990.
- P. WESSELING AND P. SONNEVELD, Numerical experiments with a multiple grid and a preconditioned Lanczos type method, in Approximation Methods for Navier-Stokes Problems, R. Rautmann, ed., vol. 771 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, Heidelberg, New York, 1980, pp. 543–562.
- M.-C. YEUNG AND T. F. CHAN, ML(k)BiCGSTAB: a BiCGSTAB variant based on multiple Lanczos starting vectors, SIAM J. Sci. Comput., 21 (1999), pp. 1263–1290. Received May 16, 1997, electronically published Dec. 15, 1999.
Verified Numerical Computations for Ill-Posed Optimization Problems 
Viktor Härter (Hamburg, Germany)
Recently, Ordóñez and Freund [3, 6] have shown
that 71% of the problems in the NETLIB Linear Programming Library [1]
and about 30% of the benchmark problems in the SDPLIB (Semidefinite Programming
Library, [2]) are ill-posed. Both libraries are collections of many real-world
optimization problems with up to thousands of constraints and millions
of variables. In this talk, after a short introduction to condition numbers for optimization problems, I will discuss a verification method for conic programming which allows one to compute rigorous error bounds for the optimal value under mild conditions, even for very ill-conditioned or ill-posed problems. Moreover, the method allows one to prove strong duality and to compute rigorous error bounds for optimal solutions as well as rigorous bounds for condition numbers. Numerical
results of the software packages Lurupa [5] and VSDP [4] are presented.
These packages implement the error bounds for linear programming problems
and for semidefinite programming problems, respectively.
This is joint work with Christian Jansson (TU Hamburg-Harburg).
References
- NETLIB LP Test Problems. http://www.netlib.org/lp/.
- B. BORCHERS, SDPLIB 1.2: A collection of semidefinite programming test problems. Technical report, Socorro, NM, USA, June 28, 1998.
- FERNANDO ORDÓÑEZ AND ROBERT M. FREUND, Computational experience and the explanatory value of condition measures for linear optimization. SIAM J. Optim., 14(2):307–333, 2003.
- C. JANSSON, VSDP: Verified semidefinite programming. Technical report, TU Hamburg-Harburg, December 18, 2006.
- C. KEIL, Lurupa - rigorous error bounds in linear programming. In B. Buchberger, S. Oishi, M. Plum, and S. M. Rump, editors, Algebraic and Numerical Algorithms and Computer-assisted Proofs, number 05391 in Dagstuhl Seminar Proceedings, 2006.
- ROBERT M. FREUND, FERNANDO ORDÓÑEZ, AND KIM-CHUAN TOH, Behavioral measures and their correlation with IPM iteration counts on semi-definite programming. Mathematical Programming, 109(2-3):445–475, 2007.
Golub-Kahan bidiagonalization and revealing the noise level in data 
Iveta Hnetynkova (Prague, Czech Republic)
Regularization techniques based on Golub-Kahan bidiagonalization are among the popular approaches for solving large ill-posed problems. First, the original problem is projected onto a lower-dimensional subspace using the bidiagonalization algorithm, which by itself represents a form of regularization by projection. The projected problem, however, inherits a part of the ill-posedness, and therefore some form of inner regularization must be applied. The outer bidiagonalization process is then typically stopped using stopping criteria based on the regularization of the projected problem.
In this contribution we show how the noise in the data enters the projected problem obtained by the bidiagonalization, and how this information can be used to reveal the unknown level of the noise present in the data. Such information can be useful, in combination with that from the regularization of the projected problem, for the construction of an efficient stopping criterion in solving large-scale ill-posed problems.
Based on joint work with M. Plešinger and Z. Strakoš, Institute of Computer Science, Academy of Sciences, Czech Republic.
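For reference, the projected problem in question is produced by the standard Golub-Kahan bidiagonalization below (a textbook sketch with full reorthogonalization; the noise-revealing analysis itself is the subject of the talk and is not reproduced here).

```python
import numpy as np

def golub_kahan(A, b, k):
    # k steps of Golub-Kahan bidiagonalization started from b; returns
    # U (m x (k+1)), lower bidiagonal B ((k+1) x k) and V (n x k) with
    # A @ V = U @ B.  The projected problem is min || B y - beta1*e1 ||,
    # with beta1 = ||b||.
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b)
    U[:, 0] = b / beta1
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= B[j, j - 1] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)        # reorthogonalize
        B[j, j] = np.linalg.norm(v)             # alpha
        V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        B[j + 1, j] = np.linalg.norm(u)         # beta
        U[:, j + 1] = u / B[j + 1, j]
    return U, B, V, beta1
```

The noise enters the projected problem through the right-hand side beta1*e1 and the left vectors U; how it propagates, and how this reveals the noise level, is what the contribution analyzes.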
Convergence of the exponential Euler
iteration for nonlinear ill-posed problems
Marlis Hochbruck (Düsseldorf, Germany)
In this talk we discuss the regularization properties of the exponential Euler method applied to the Showalter equation u'(t) = -F'(u(t))* (F(u(t)) - y^δ). In particular, we will show that the method converges to a solution of F(u) = y as the noise level tends to zero.
The analysis is done for variable step sizes determined by the discrepancy principle. Under suitable assumptions on the initial error, we will also prove that the convergence rate is optimal.
Our analysis is motivated by the work of Tautenhahn, who proved these results for the exact solution of the Showalter equation and on recent work on error bounds for exponential integrators for parabolic partial differential equations.
This is joint work with Michael Hönig and Alexander Ostermann.
Regularization of nonlinear ill-posed problems by
the exponential Euler method
Michael Hönig (Düsseldorf, Germany)
In this talk we discuss the numerical realization of asymptotic regularization (Showalter's method) for inverse problems. Given a nonlinear inverse problem

F(u) = y^δ,

the key idea of asymptotic regularization is that the solution u(t) of the evolution equation

u'(t) = -F'(u(t))* (F(u(t)) - y^δ)

yields a stable approximation to the solution of the inverse problem for large t. Application of standard integration schemes yields well-known regularization methods. For example, the explicit Euler method and the linearly implicit Euler method are equivalent to Landweber and Levenberg-Marquardt regularization, respectively. We propose to apply the exponential Euler method for solving the evolution equation. The main computational work for this method is spent on the approximation of the product of a matrix function with a vector using Krylov subspace methods. We will discuss some practical aspects such as suitable stopping criteria. Furthermore, we present numerical experiments which show the competitiveness of this regularization scheme.
This is joint work with Marlis Hochbruck and Alexander Ostermann.
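For a linear operator F(u) = Au the exponential Euler step can be written down in closed form, and it then integrates the Showalter equation exactly over one step. The sketch below is illustrative only and uses a dense eigendecomposition in place of the Krylov approximation of the matrix function times a vector.

```python
import numpy as np

def exp_euler_step(A, u, y, h):
    # one exponential Euler step for the linear Showalter equation
    #   u'(t) = -A^T (A u - y),
    # i.e. u_new = u + h * phi1(-h*K) f(u) with K = A^T A and
    # phi1(z) = (exp(z) - 1)/z, evaluated via an eigendecomposition
    K = A.T @ A
    w, V = np.linalg.eigh(K)
    z = -h * w
    phi1 = np.where(np.abs(z) > 1e-12,
                    np.expm1(z) / np.where(z == 0, 1.0, z),
                    1.0 + z / 2)                 # series fallback near z = 0
    f = -A.T @ (A @ u - y)
    return u + h * (V @ (phi1 * (V.T @ f)))
```

For nonlinear F the same step is applied to the local linearization, and the phi1-vector product is approximated by a Krylov subspace method instead of an eigendecomposition.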
Linear equations
in quaternionic variables
Drahoslava Janovská (Prague, Czech Republic)
We study quaternionic linear equations of the type

b_1 x c_1 + b_2 x c_2 + ... + b_m x c_m = e

with quaternionic constants b_j, c_j, e and an arbitrary positive integer m. For m = 2 the resulting equation is called Sylvester's equation. For this case a complete solution (solution formula, determination of the null space) will be given. For the general case we show that the solution can be found via a corresponding matrix equation of a particularly simple form. (Coauthor: Prof. Gerhard Opfer, University of Hamburg)
Computation of pseudospectra 
Vladimír Janovský (Prague, Czech Republic)
The aim is to contribute to pseudospectra computation via a path-following technique. Given a matrix A in C^{n x n}, we compute the branch consisting of a fixed singular value ε and the corresponding left and right singular vectors of the parameter-dependent matrix (x+iy)I - A. The fact that the branch corresponds to the smallest singular value, σ_min((x+iy)I - A) = ε, is sufficient to verify at just one point of the branch, due to a continuity argument. We can exploit standard ready-made software. (Coauthors: D. Janovská, K. Tanabe)
A projection method for the eigenvalues of a delay-differential equation
Elias Jarlebring (Braunschweig, Germany)
The eigenvalues of a delay-differential equation (DDE) are the solutions λ of the nonlinear eigenvalue problem

(-λI + A_0 + A_1 e^{-τλ}) v = 0, v ≠ 0,

where A_0, A_1 are real n x n matrices and τ > 0 is the delay. Recent methods to numerically find solutions of the characteristic equation are based on different types of discretizations of the delay-differential equation. Even though these methods work well for small DDEs, they are usually inefficient for large DDEs when high accuracy is desired.
Projection methods based on some form of subspace acceleration exploit the fact that a small projected problem can be solved efficiently. Several projection methods have been suggested for the nonlinear eigenvalue problem.
We find eigenvalues of a DDE of dimension n = 10^6, stemming from the discretization of a partial differential equation with delay, by applying a subspace acceleration of the residual inverse iteration. The numerical properties, such as efficiency and accuracy, are compared to those of other methods in the literature.
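As a small-scale baseline (not the subspace-accelerated residual inverse iteration of the talk), one can apply Newton's method directly to the characteristic determinant f(λ) = det(-λI + A_0 + A_1 e^{-τλ}), using Jacobi's formula f'/f = trace(T(λ)^{-1} T'(λ)):

```python
import numpy as np

def dde_eig_newton(A0, A1, tau, lam0, maxit=50, tol=1e-12):
    # Newton iteration on f(lam) = det(T(lam)) for
    #   T(lam) = -lam*I + A0 + A1*exp(-tau*lam);
    # the Newton step is 1/trace(T^{-1} T') by Jacobi's formula.
    # A toy for small dense problems only.
    n = A0.shape[0]
    lam = lam0
    for _ in range(maxit):
        T = -lam * np.eye(n) + A0 + A1 * np.exp(-tau * lam)
        dT = -np.eye(n) - tau * A1 * np.exp(-tau * lam)
        try:
            step = 1.0 / np.trace(np.linalg.solve(T, dT))
        except np.linalg.LinAlgError:    # T singular: lam is an eigenvalue
            break
        lam -= step
        if abs(step) < tol:
            break
    return lam
```

For the scalar DDE x'(t) = x(t - 1) this finds the real root of λ = e^{-λ}; for large discretized problems such determinant-based iterations are exactly what becomes infeasible, which motivates the projection approach.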
Efficient Determination of the Hyperparameter via L-curve in Large Scale
Least Squares
and Total Least Squares Problems 
Jörg Lampe (Hamburg University of Technology, Germany)
Setting up the L-curve in least squares is easy if the singular value decomposition of A is available. We are interested in large-scale problems where computing the SVD is too expensive. Values on the L-curve are then obtained by solving a Regularized Least Squares (RLS) or Regularized Total Least Squares (RTLS) problem each time.
Within the sequence of RLS or RTLS problems it is favorable to reuse information from previously solved problems. We recommend the Nonlinear Arnoldi method with the use of thick starts.
This is joint work with Heinrich Voss.
Algebraic Multilevel ILU-Based Preconditioning for Stationary
Flow Problems 
Jan Mayer (Karlsruhe, Germany)
We present a new numerical scheme for the solution of the stationary incompressible Navier-Stokes equations. A key ingredient of the proposed approach is an algebraic multilevel ILU-based preconditioner, which is used to solve the linear systems obtained by linearization of the Navier-Stokes equations. The overall solution process relies on a fully coupled implicit scheme. To validate the proposed approach we consider two stationary flow problems: a lid-driven cavity problem in 3D and a two-dimensional thermally driven cavity flow based on a low-Mach-number model. It turns out that a straightforward threshold-based incomplete LU factorization yields excellent preconditioning properties for the lid-driven cavity problem, whereas the thermally driven cavity problem requires dedicated preprocessing of the coefficient matrix prior to factorization. Besides the numerical results, we describe the software used: HiFlow for the discretization of the Navier-Stokes equations and ILU++ for the solution of the associated linear systems.
This is joint work with Hendryk Bockelmann and Vincent Heuveline.
Interior-point method for nonlinear
nonconvex optimization 
Ctirad Matonoha (Prague, Czech Republic)
We propose an algorithm for solving nonlinear nonconvex programming problems which is based on the interior point approach. We split the inequality constraints into active and inactive ones to overcome problems with stability. Inactive constraints are eliminated directly, while active constraints are used to define a symmetric indefinite linear system. Regularization is used to eliminate a near singularity caused by a near linear dependence of the gradients of the active constraints. An inexact solution of the system is obtained iteratively using an indefinitely preconditioned conjugate gradient method. Joint work with Ladislav Lukšan and Jan Vlček.
Hamiltonian Jacobi methods - sweeping away convergence
problems 
Christian Mehl (Birmingham, United Kingdom)
Jacobi's method for the diagonalization of a symmetric matrix is a famous, successful, and easy to implement algorithm for the computation of eigenvalues of symmetric matrices. No wonder that this algorithm has been generalized or adapted to many other classes of eigenvalue problems. In particular, Hamiltonian Jacobi methods for the Hamiltonian eigenvalue problem have been investigated by Byers (generalizing a nonsymmetric Jacobi method proposed by Stewart) and Bunse-Gerstner/Faßbender (generalizing a nonsymmetric Jacobi method proposed by Greenstadt and Eberlein).
Surprisingly, the Hamiltonian Jacobi method proposed by Bunse-Gerstner/Faßbender only shows linear asymptotic convergence despite the fact that asymptotic quadratic convergence can be observed by the Greenstadt/Eberlein Jacobi method. In this talk, we give an explanation for this unexpected behavior through a detailed investigation of sweep patterns. We show that for nonsymmetric Jacobi algorithms, sweep patterns that lead to convergence of the algorithm need not automatically imply quadratic asymptotic convergence - a phenomenon emphasizing a fundamental difference in the behavior of symmetric and nonsymmetric Jacobi methods. Then we identify sweep patterns that guarantee asymptotic quadratic convergence for nonsymmetric Jacobi methods and show how the sweep pattern of the algorithm by Bunse-Gerstner/Faßbender has to be modified in order to ensure the desired convergence behavior.
Finally, we propose a slightly different Hamiltonian Jacobi method, investigate its convergence behavior, and discuss its advantages and disadvantages over the method proposed by Bunse-Gerstner/Faßbender.
Multilevel Krylov Methods 
Reinhard Nabben (Berlin, Germany)
Here we develop a new type of multilevel method, called multilevel Krylov methods (MK methods), to solve linear systems of equations. The basic idea of this type of method is to shift small eigenvalues that are responsible for slow convergence to an a priori fixed constant. The shifting of the eigenvalues is similar to projection-type methods and uses the solution of subspace or coarse level systems. Numerical results show that these MK methods work very well for convection-diffusion equations and the Helmholtz equation. The convergence can be made almost independent of the grid size h and also only mildly dependent on the wavenumber k for the Helmholtz equation.

We also combine the multilevel Krylov idea with the algebraic way of choosing the coarse-grid system and the restriction and prolongation operators. The resulting method is called the algebraic multilevel Krylov method or AMK method. We present different AMK methods that differ in the choice of the algebraic components. First, we use the classical Ruge and Stüben approach. Then, we present an agglomeration-based technique. Both methods are tested for various matrices arising from the discretization of 2D diffusion and convection-diffusion equations as well as for several matrices taken from the Matrix Market collection. The numerical results show that the AMK methods work as well as the geometric MK methods. For the convection-diffusion equations the AMK methods lead to better convergence rates than the original AMG method by Ruge and Stüben.
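The eigenvalue-shifting idea at the core of MK methods can be illustrated algebraically. The construction below is a simplified sketch that assumes Z spans an exact invariant subspace (in the actual method Z comes from coarse-level components); it uses only the coarse matrix E = Z^T A Z, i.e. a coarse-level solve.

```python
import numpy as np

def shift_small_eigenvalues(A, Z, sigma):
    # shift the eigenvalues of A associated with span(Z) to the constant
    # sigma, using only the coarse matrix E = Z^T A Z:
    #   B = A - A Q A + sigma * A Q,   Q = Z E^{-1} Z^T.
    # if Z spans an exact invariant subspace, those eigenvalues move to
    # sigma while the rest of the spectrum is untouched
    E = Z.T @ A @ Z
    Q = Z @ np.linalg.solve(E, Z.T)
    return A - A @ Q @ A + sigma * (A @ Q)
```

A Krylov method applied to the shifted operator no longer sees the small eigenvalues, which is the mechanism behind the observed h- and k-robustness.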
The Chi-squared Distribution of the Regularized Least
Squares Functional for Regularization Parameter
Estimation 
Rosemary Renaut (Tempe, USA)
Recently, Mead showed that a statistical result on the chi-squared
distribution of the Tikhonov regularized least squares (RLS) cost functional can be
used for estimating an optimal regularizing parameter. A Newton algorithm
based on the Generalized Singular Value Decomposition (GSVD) gives a convergent
algorithm for the parameter and RLS solution, but is not suitable for
large scale problems. By converting the regularization to standard form,
we can use the bidiagonalization of the system matrix and obtain a cost
efficient method of updating the solution for regularization parameter
estimates. The Newton method can thus be implemented with very little
overhead as compared to one single parameter solve of the regularized
problem. Typically no more than seven updates are required because of
the monotonicity of the underlying functional. The use of the chi-squared
property of the functional therefore leads to a very effective means
for finding a suitable regularization parameter. The numerical results
using this parameter estimate compare very favorably with the Unbiased
Predictive Risk Estimator for the regularization parameter. The method
does rely on knowledge of covariance information on the noise distribution
of the measured data, but even with just some common covariance information
(white noise) the method is still robust.
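A dense toy version of the parameter selection conveys the idea: under white noise with variance σ², identity regularization, and the chi-squared argument, one seeks λ with J(λ) = mσ², where J(λ) is the minimal value of the Tikhonov functional for m data points; monotonicity of J makes scalar root-finding straightforward. The real method replaces the SVD and bisection below with bidiagonalization and a Newton iteration.

```python
import numpy as np

def chi2_regularization_parameter(A, b, sigma_noise, lam_max=1e6):
    # choose the Tikhonov parameter lam so that the minimal value of
    #   J(lam) = ||A x - b||^2 + lam^2 ||x||^2
    # equals its expected value m * sigma_noise^2 (toy: white noise,
    # identity regularization); J is monotonically increasing in lam
    m = b.size
    U, s, Vt = np.linalg.svd(A)
    beta = U.T @ b
    def J(lam):
        # closed form of the minimal functional value from the SVD
        return (np.sum(lam**2 * beta[:s.size]**2 / (s**2 + lam**2))
                + np.sum(beta[s.size:]**2))
    target = m * sigma_noise**2
    lo, hi = 0.0, lam_max
    if J(hi) < target:                  # target not reachable
        return hi
    for _ in range(200):                # bisection on the monotone function
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)
```

The knowledge of the noise variance enters through the target value, matching the remark above that covariance information on the noise is needed.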
Structure preserving treatment of PCP-palindromic eigenvalue problems
Christian Schröder (Berlin, Germany)
In this talk we consider polynomial eigenvalue problems of the form P(λ)x = 0, where the matrix polynomial P(λ) = λ^k A_k + ... + λ A_1 + A_0 with coefficients A_i in C^{n×n} is given, and eigenvalues λ together with eigenvectors x ≠ 0 are wanted. The matrix polynomial is said to be a PCP-palindromic polynomial if there exists an involutory matrix P (i.e., P^2 = I) such that P conj(A_i) P = A_{k-i} for i = 0, ..., k; "PCP" stands for "P-conjugate-P". PCP-palindromic eigenvalue problems have the property that if λ is an eigenvalue then 1/conj(λ) is also an eigenvalue. In the talk we discuss numerical methods to compute a linearization of the matrix polynomial and a Schur form of the linearization. Both the linearization and the Schur form inherit the PCP-property from the original matrix polynomial, so the eigenvalue pairing is preserved even in finite precision arithmetic. Numerical examples will demonstrate that the structured approach is also faster.
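The eigenvalue pairing (λ, 1/conj(λ)) can be checked numerically on a small quadratic example. This is a sketch under stated assumptions: the flip matrix serves as the involution, the coefficients are built to satisfy the PCP condition, and an ordinary companion linearization is used rather than the structured linearization and Schur form of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2
J = np.fliplr(np.eye(n))                 # an involutory matrix: J @ J = I

def pcp(M):
    """The PCP operation P * conj(M) * P, with P = J."""
    return J @ np.conj(M) @ J

# Build a PCP-palindromic quadratic: the coefficients satisfy A_i = pcp(A_{k-i}).
A0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A2 = pcp(A0)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A1 = B + pcp(B)                          # middle coefficient: pcp(A1) = A1

# Companion linearization of lambda^2 A2 + lambda A1 + A0 = 0:
# K z = lambda M z with z = [x; lambda x].
I = np.eye(n)
K = np.block([[np.zeros((n, n)), I], [-A0, -A1]])
M = np.block([[I, np.zeros((n, n))], [np.zeros((n, n)), A2]])
lam = np.linalg.eigvals(np.linalg.solve(M, K))

# Pairing: for every eigenvalue, 1/conj(lambda) is also (numerically) an eigenvalue.
paired = max(np.min(np.abs(lam - 1 / np.conj(l))) for l in lam)
print(paired)                            # small: the pairing holds up to rounding
```

The unstructured route shown here preserves the pairing only up to rounding; the structured linearization and Schur form of the talk preserve it exactly, which is the point of the structured approach.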
An Extension of Ostrowski's Two-Sided RQI to Nonlinear Eigenvalue Problems
Hubert Schwetlick (Dresden, Germany)
By using some new existence and approximation results about two-sided nonlinear Rayleigh functionals, Ostrowski's two-sided Rayleigh quotient iteration (RQI), introduced for linear eigenvalue problems, is extended to nonlinear ones. It is shown that the nonlinear version converges cubically, as the linear version does.
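For reference, the linear-case iteration of Ostrowski that is being extended can be sketched as follows. The test matrix, starting vectors, and iteration count are illustrative; the talk's actual contribution, the nonlinear version built on two-sided Rayleigh functionals, is not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
D = np.diag(np.arange(1.0, n + 1))                 # prescribed eigenvalues 1..6
S = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned eigenvector basis
Sinv = np.linalg.inv(S)
A = S @ D @ Sinv                                   # nonsymmetric, known spectrum

# Start from perturbed exact right/left eigenvectors for the eigenvalue 3.0.
j = 2
x = S[:, j] + 1e-2 * rng.standard_normal(n)
y = Sinv.T[:, j] + 1e-2 * rng.standard_normal(n)

for _ in range(4):
    rho = (y @ A @ x) / (y @ x)                    # two-sided Rayleigh quotient
    try:
        # One inverse-iteration step with shift rho, on both sides.
        x = np.linalg.solve(A - rho * np.eye(n), x)
        y = np.linalg.solve((A - rho * np.eye(n)).T, y)
    except np.linalg.LinAlgError:
        break                                      # shift hit an eigenvalue exactly
    x /= np.linalg.norm(x)
    y /= np.linalg.norm(y)

rho = (y @ A @ x) / (y @ x)
print(rho)                                         # converges to the eigenvalue 3.0
```

The cubic convergence is visible in practice: from a perturbation of size 1e-2, a couple of iterations already reach machine precision, whereas one-sided RQI on a nonsymmetric matrix is only quadratic.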
On an unsymmetric eigenvalue problem governing free vibrations of fluid-solid structures 
Markus Stammberger (Hamburg, Germany)
In this talk we discuss properties of, and adapted solvers for, the unsymmetric eigenproblem

K x = λ M x, with K = [Ks C; 0 Kf] and M = [Ms 0; -C^T Mf],

governing free vibrations of fluid-solid structures. Here Ks, Kf, Ms and Mf are symmetric positive definite stiffness and mass matrices, and C is the coupling matrix between the solid and fluid parts. Despite the unsymmetry arising from C, a Rayleigh functional and a min-max characterization of the eigenvalues can be given. Consequently, we study adapted versions of the Rayleigh quotient iteration and of the Jacobi-Davidson method.
This is joint work with Heinrich Voß.
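The structure can be explored numerically. A minimal sketch, assuming random SPD blocks and the block form K = [Ks C; 0 Kf], M = [Ms 0; -C^T Mf] for illustration, confirming that the spectrum stays real and positive despite the unsymmetric coupling:

```python
import numpy as np

rng = np.random.default_rng(3)

def spd(n):
    """A random symmetric positive definite matrix."""
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

ns, nf = 5, 4                        # solid / fluid dimensions (illustrative)
Ks, Ms = spd(ns), spd(ns)
Kf, Mf = spd(nf), spd(nf)
C = rng.standard_normal((ns, nf))    # coupling matrix, arbitrary

K = np.block([[Ks, C], [np.zeros((nf, ns)), Kf]])
M = np.block([[Ms, np.zeros((ns, nf))], [-C.T, Mf]])

# M is block lower triangular with SPD diagonal blocks, hence invertible.
lam = np.linalg.eigvals(np.linalg.solve(M, K))
print(np.max(np.abs(lam.imag)))      # tiny: the eigenvalues are real
print(np.min(lam.real))              # positive
```

This is exactly the surprise the abstract points at: the pencil is unsymmetric, yet its eigenvalues behave like those of a definite symmetric pencil, which is what the Rayleigh functional and min-max characterization explain.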
Moments, Krylov subspace methods and model reduction 
Zdeněk Strakoš (Prague, Czech Republic)
Reduced-order modeling plays a central role in the approximation of large-scale dynamical systems, and techniques based on Krylov subspaces have been used in that area for decades; see, e.g., the recent monograph [1] and the very nice survey paper [2], where one can find references to the substantial work of many authors. In [1, Chapter 10, p. 314], approximation of linear systems by moment matching is described as one of the three uses of Krylov subspace methods, in addition to the iterative solution of Ax = b and the approximation of the eigenvalues of A, and it is treated in detail in Chapter 11 of that book.
We find it useful to point out that Krylov subspace methods should not only be viewed as tools for model reduction: many of them by their very nature represent model reductions based on matching moments. This view naturally complements, in our opinion, the description using the framework of projection processes, and it reveals in another way the nonlinear character of Krylov subspace methods. With no exaggeration, progress in the development and, in particular, in the analysis and understanding of Krylov subspace methods in numerical linear algebra has only been possible through exploiting various links with problems in several other disciplines of mathematics. Among them, the problem of moments, its theory and the methods for its solution have contributed in a fundamental way.
In order to explain the deep link between matching moments and Krylov subspace methods, we start with a modification of the classical Stieltjes moment problem and show that its solution is given:
- in the language of orthogonal polynomials, by the Gauss-Christoffel quadrature;
- in matrix form, by the conjugate gradient method (see [4]).
In order to allow straightforward generalizations, we use the Vorobyev vector moment problem and present some basic Krylov subspace iterations as moment-matching model reductions. Our exposition differs from the existing model reduction literature known to us by deriving the matching-moments model reduction directly from the Vorobyev vector moment problem.
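The matching-moments property behind this view can be verified directly: after k steps of the Lanczos process (the Hermitian case underlying CG), the k-by-k Jacobi matrix T_k reproduces the first 2k moments of A with respect to b. A small numpy sketch, where the random SPD test matrix and the full reorthogonalization are implementation conveniences, not part of the talk:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 30, 5
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(rng.uniform(1.0, 10.0, n)) @ Q.T   # SPD test matrix
b = rng.standard_normal(n)

# Lanczos with full reorthogonalization, building the Jacobi matrix T.
V = np.zeros((n, k))
T = np.zeros((k, k))
V[:, 0] = b / np.linalg.norm(b)
for j in range(k):
    w = A @ V[:, j]
    w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # orthogonalize against basis
    w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # ... and once more, for safety
    T[j, j] = V[:, j] @ A @ V[:, j]
    if j + 1 < k:
        T[j, j + 1] = T[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / np.linalg.norm(w)

# Moment matching: b^T A^j b = ||b||^2 * (e_1^T T^j e_1) for j = 0, ..., 2k-1.
e1 = np.zeros(k); e1[0] = 1.0
errs = []
for j in range(2 * k):
    full = b @ np.linalg.matrix_power(A, j) @ b
    red = np.linalg.norm(b) ** 2 * (e1 @ np.linalg.matrix_power(T, j) @ e1)
    errs.append(abs(full - red) / abs(full))
print(max(errs))                                   # near machine precision
```

In the language of the abstract: T_k is the moment-matching reduced model of order k, and the identity above is equivalent to the exactness of the k-point Gauss-Christoffel quadrature for polynomials of degree up to 2k - 1.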
References
- A. C. ANTOULAS, Approximation of large-scale dynamical systems, vol. 6 of Advances in Design and Control, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2005.
- Z. BAI, Krylov subspace techniques for reduced-order modeling of large-scale dynamical systems, Applied Numerical Mathematics, 43 (2002), pp. 9-44.
- G. MEURANT AND Z. STRAKOŠ, The Lanczos and conjugate gradient algorithms in finite precision arithmetic, Acta Numerica, 15 (2006), pp. 471-542.
- Z. STRAKOŠ, Model reduction using the Vorobyev moment problem, submitted to Numerical Algorithms, June 2008.
On best approximations of matrix polynomials 
Petr Tichý (Prague, Czech Republic)
In this talk we will discuss matrix approximation problems of the form

min over p in π_m of ||B - p(A)||,

where A and B in C^{n×n} are given matrices such that B can be written as a polynomial in A, π_m is the set of polynomials of degree at most m, and ||·|| is the matrix 2-norm (spectral norm). For example, if f is a function that is analytic in a neighbourhood of the spectrum of a given matrix A, then B ≡ f(A) can be written as a polynomial in A. Examples of such problems are the ideal GMRES and ideal Arnoldi approximation problems introduced by Greenbaum and Trefethen [SISC 15, 359-368, 1994], for which B ≡ I and B ≡ A^{m+1}, respectively. We will present results on the uniqueness of the solution of such matrix approximation problems.
This is joint work with Jörg Liesen (TU-Berlin).
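To get a feel for such problems, one can solve the analogous, and much easier, Frobenius-norm version by linear least squares and then evaluate the spectral norm of the residual. A rough sketch with an illustrative matrix; note this is not the 2-norm problem of the talk, whose solution requires different techniques:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 12, 4
A = rng.standard_normal((n, n)) / np.sqrt(n)
B = np.linalg.matrix_power(A, m + 1)        # ideal-Arnoldi-type right-hand side

# Stack the vectorized powers I, A, ..., A^m as columns and solve the
# Frobenius-norm problem min_p ||B - p(A)||_F by least squares.
powers = [np.linalg.matrix_power(A, j) for j in range(m + 1)]
P = np.column_stack([M.ravel() for M in powers])
coef, *_ = np.linalg.lstsq(P, B.ravel(), rcond=None)

R = B - sum(c * M for c, M in zip(coef, powers))
print(np.linalg.norm(R, 2), np.linalg.norm(B, 2))   # residual vs. the trivial p = 0
```

The Frobenius problem is an ordinary least-squares problem and always has a unique solution when I, A, ..., A^m are linearly independent; the subtlety addressed in the talk is that uniqueness in the spectral norm is far less obvious.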
A glued matrix can be obtained from a direct sum of p copies of an unreduced symmetric tridiagonal matrix T by modifying the junctions by a glue γ > 0, in one of two ways, so that the new tridiagonal matrix has no zero off-diagonal entries. Despite being unreduced, a glued matrix can have some eigenvalues agreeing to hundreds of decimal places. This makes glued matrices practically useful as test matrices for tridiagonal eigensolvers such as inverse iteration and the MRRR algorithm.
However, the eigenvalue distribution of a glued matrix is also a fascinating subject of theoretical interest in its own right. By means of secular equations, we study how the width and placement of the eigenvalue clusters of a glued matrix depend on T, on p, and on the glue γ. Interlacing properties and the question of eigenvalue repetition between T and a glued matrix are also investigated.
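The clustering phenomenon is easy to reproduce; a minimal sketch, with a random tridiagonal T, p = 3 copies, and a glue value of 1e-6 chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, glue = 8, 3, 1e-6

# An unreduced symmetric tridiagonal T with nonzero off-diagonals.
d = rng.standard_normal(n)
e = rng.uniform(0.5, 1.5, n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Direct sum of p copies, then set the junction off-diagonals to the glue,
# so that the glued matrix is again an unreduced tridiagonal matrix.
G = np.kron(np.eye(p), T)
for j in range(1, p):
    G[j * n - 1, j * n] = G[j * n, j * n - 1] = glue

eig_T = np.sort(np.linalg.eigvalsh(T))
eig_G = np.sort(np.linalg.eigvalsh(G))

# Each eigenvalue of T turns into a tight cluster of p eigenvalues of G;
# by Weyl's theorem the clusters sit within ~glue of the eigenvalues of T.
spread = np.max(np.abs(eig_G - np.repeat(eig_T, p)))
print(spread)        # of the order of the glue
```

With the glue this small, a tridiagonal eigensolver sees an unreduced matrix whose eigenvalues nevertheless come in clusters of width about γ, which is precisely what makes glued matrices hard test cases; the talk's secular-equation analysis describes the width and placement of these clusters.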
