Program
Monday, March 26
9:00 – 12:00
Arrival
12:00 – 13:30
Lunch
13:30 – 14:00
Registration and Introduction YRM
14:00 – 14:30
Polynomial chaos: applications in electrical engineering and bounds
The study of electromagnetic fields in 2D circuits often leads to resonances. We use a polynomial chaos expansion (due to uncertain circuit parameters), which is analytically and numerically troublesome near the resonance frequencies. As a toy model for the convergence of the polynomial chaos expansion, we look at the parallel RLC circuit with uncertain capacitance and give $L^2$ error bounds depending on the degree of the expansion, the random distribution, the distance to resonance and the so-called quality factor of the circuit (which is a measure for the damping).
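As an illustrative aside (not part of the talk), the Legendre polynomial chaos projection for such a circuit can be sketched in a few lines. All component values, the excitation frequency, and the uniform distribution of the capacitance below are invented assumptions for the sketch:

```python
import numpy as np

# Minimal polynomial chaos sketch for a parallel RLC circuit; all component
# values are illustrative, and the capacitance is assumed uniformly
# distributed on [C0*(1-d), C0*(1+d)].
R, Lind, C0, d = 50.0, 1e-6, 1e-9, 0.1
omega = 0.9 / np.sqrt(Lind * C0)            # excitation near, but off, resonance

def qoi(xi):
    """Impedance magnitude as a function of the standardized variable xi in [-1, 1]."""
    C = C0 * (1.0 + d * xi)
    Y = 1.0 / R + 1.0 / (1j * omega * Lind) + 1j * omega * C   # parallel admittance
    return np.abs(1.0 / Y)

# Gauss-Legendre quadrature for the projection integrals.
nodes, weights = np.polynomial.legendre.leggauss(64)
f = qoi(nodes)

def l2_error(degree):
    # c_k = (2k+1)/2 * int f(xi) P_k(xi) dxi   (uniform density 1/2 on [-1, 1])
    ks = np.arange(degree + 1)
    P = np.polynomial.legendre.legvander(nodes, degree)
    c = (2 * ks + 1) / 2.0 * ((weights * f) @ P)
    err2 = 0.5 * np.sum(weights * (f - P @ c) ** 2)     # squared L2 error
    return np.sqrt(err2)

for deg in (1, 3, 5, 7):
    print(deg, l2_error(deg))     # the error decays with the expansion degree
```

Moving the excitation closer to resonance, or increasing the quality factor, slows this decay, which is exactly the dependence the error bounds in the talk quantify.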
14:30 – 15:00
Use of single precision in climate models
15:00 – 15:30
Simulation of marine ecosystem models with coarser time steps and different initial values
For the investigation of carbon uptake and storage of the earth's ocean, simulations of marine ecosystem models are important. The computation of steady annual cycles of a three-dimensional marine ecosystem model is part of the simulation of marine biogeochemistry as well as of the optimization of model parameters for biogeochemical models (parameter identification is usually done by an optimization algorithm). In this process the computation of one steady annual cycle takes up to 10000 model years with about 3000 time steps per model year. These simulations and optimizations are coupled simulations of the ocean circulation and the marine biogeochemistry. We used an offline simulation with pre-computed ocean transport based on the transport matrix approach. The initial values of the biogeochemical model are set to global mean concentrations for every simulation.
To accelerate this optimization or simulation process, we investigated the influence of coarser time steps on the steady annual cycles. The coarser time steps reduce the computational effort for every model year.
We present numerical results using different coarser time steps for six biogeochemical models. Furthermore, we show numerical results using various initial values generated from different distributions.
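The spin-up towards a steady annual cycle can be pictured as a fixed-point iteration over model years. The following sketch uses an invented one-tracer toy model with seasonal forcing (not one of the six models from the talk) to illustrate how a coarser time step reduces the cost per model year while slightly shifting the computed cycle:

```python
import numpy as np

# Toy stand-in for a biogeochemical tracer with seasonal forcing (illustrative
# assumption): dy/dt = -a*y + b*(1 + sin(2*pi*t)), t in model years.
a, b = 2.0, 1.0

def model_year(y0, steps_per_year):
    """Integrate one model year with explicit Euler; fewer steps = coarser dt."""
    dt = 1.0 / steps_per_year
    y, t = y0, 0.0
    for _ in range(steps_per_year):
        y += dt * (-a * y + b * (1.0 + np.sin(2.0 * np.pi * t)))
        t += dt
    return y

def spin_up(y0, steps_per_year, tol=1e-10, max_years=10000):
    """Fixed-point iteration towards a steady annual cycle."""
    y = y0
    for year in range(max_years):
        y_next = model_year(y, steps_per_year)
        if abs(y_next - y) < tol:       # cycle is (numerically) steady
            return y_next, year + 1
        y = y_next
    return y, max_years

fine, _ = spin_up(0.0, 3000)     # ~3000 steps per model year, as in the talk
coarse, _ = spin_up(0.0, 300)    # 10x coarser time step, ~10x cheaper per year
print(fine, coarse, abs(fine - coarse))
```

For this contractive toy problem the steady cycle is independent of the initial value; the questions studied in the talk are how far this carries over to real biogeochemical models and how much the coarser step distorts the cycle.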
15:30 – 16:00
Coffee break
16:00 – 16:30
A Parallel In Time algorithm for Shallow Water Equations
In the framework of weather forecasting and climate prediction, the rotational shallow water equations (RSWE) are a reasonable model for the dominant problems associated with horizontal fluid motion, not only on spherical geometries. Since Cauchy problems for the RSWE on spheres tend to require high spatial resolutions, and thus lead to long computation times, the Parareal algorithm suggests itself. Furthermore, the physical phenomena of interest often only emerge after a long period of time. The parallelization in time of the RSWE is a first step towards world climate prediction models, e.g. as applied in the PALMOD initiative, where a complete glacial cycle covering a timespan of 120,000 years is simulated. Hence, the first objective is to ensure a well performing Parareal algorithm for a straightforward test case of the RSWE. How this is accomplished will be elucidated in this talk.
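The Parareal idea itself fits in a few lines. The sketch below applies it to a scalar test ODE rather than the RSWE (an illustrative simplification), with explicit Euler as both coarse and fine propagator:

```python
import numpy as np

# Parareal sketch on the scalar test equation y' = lam*y (a stand-in for the
# RSWE test case; lam and the propagator choices are illustrative).
lam, T, N = -1.0, 2.0, 10          # N time slices
t = np.linspace(0.0, T, N + 1)

def coarse(y, t0, t1):
    """Cheap propagator: one explicit Euler step over the whole slice."""
    return y + (t1 - t0) * lam * y

def fine(y, t0, t1, m=100):
    """Expensive propagator: m Euler substeps (run in parallel across slices)."""
    dt = (t1 - t0) / m
    for _ in range(m):
        y += dt * lam * y
    return y

# Initial guess from the coarse propagator alone (sequential, cheap).
U = np.empty(N + 1); U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], t[n], t[n + 1])

for k in range(5):                  # parareal iterations
    F = np.array([fine(U[n], t[n], t[n + 1]) for n in range(N)])    # parallelizable
    G_old = np.array([coarse(U[n], t[n], t[n + 1]) for n in range(N)])
    for n in range(N):              # sequential correction sweep
        G_new = coarse(U[n], t[n], t[n + 1])
        U[n + 1] = G_new + F[n] - G_old[n]

print(U[-1], np.exp(lam * T))       # iterates approach the serial fine solution
```

The fixed point of the iteration is exactly the serial fine solution; the speed-up question is whether far fewer iterations than time slices suffice, which is what must be verified on the RSWE test case.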
16:30 – 17:00
Adjoint Complement to the Volume-of-Fluid Method
Optimal Shape Design (OSD) in the context of fluid-flow-exposed geometries aims at a shape that minimizes (or maximizes) a given objective functional. Using a steepest descent approach, the required sensitivities are preferably obtained by adjoint methods, since their computational cost is independent of the number of design variables. In the past, usually single-phase flows were addressed. In this case typical challenges of the adjoint approach, such as transient processes or discontinuities, do not necessarily occur. For immiscible two-phase flows, however, these issues can no longer be avoided since the flow is inherently unsteady and afflicted with discontinuities along the (sharp) free surface. The talk will focus on an adjoint formulation of the classical Volume-of-Fluid (VoF) method for force objectives. A model problem will be defined and an analytical solution to this problem is derived which shows that the adjoint problem is ill-posed. An additional (heuristic) diffusive concentration term is introduced as a remedy to these issues. This term violates dual consistency but strongly regularizes the solution of the adjoint equation system. Results obtained by a numerical implementation of the heuristic approach will be benchmarked against the analytical solution for the model problem. In addition, three-dimensional simulations for the flow around a ship hull at large Reynolds numbers are discussed. The talk closes with the presentation of further, less heuristic, options for an adjoint treatment of multi-phase flows.
17:00 – 17:30
Approximation of Hermitian Matrices by Positive (Semi-)Definite Matrices using Modified LDL* Decompositions
17:30 – 18:00
Convergence of Ginelli’s algorithm for covariant Lyapunov vectors
Covariant Lyapunov vectors (CLVs) identify the directions and asymptotic growth rates of small linear perturbations of solutions of a dynamical system. They are used to analyze and describe chaotic behavior in theory and in applications such as the climate sciences.
During the last few years several algorithms to compute CLVs emerged. One of the most popular algorithms was developed by Ginelli. Although there is a partial convergence result for the first half of the algorithm, it is restricted to a special case and exhibits some conceptual difficulties. Our recent advances provide a complete convergence proof in a more general setting allowing even for degenerate Lyapunov spectra.
18:00 – 19:00
Dinner
19:00
Network activities
Tuesday, March 27
9:00 – 9:30
Time-sparse discretization for parabolic optimal control with measures
We consider a parabolic optimal control problem governed by space-time measure controls. Two approaches to discretize this problem will be compared. The first, previously studied approach employs a discontinuous Galerkin method for the state discretization, where controls are discretized piecewise constant in time and by Dirac measures concentrated in the finite element nodes in space. In the second approach we use variational discretization of the control problem, utilizing a Petrov-Galerkin approximation of the state which induces controls that are composed of Dirac measures in space and time, i.e. variationally discrete controls that are Dirac measures concentrated in finite element nodes with respect to space and on the grid points of the time integration scheme with respect to time. The latter approach yields maximal sparsity in space-time on the discrete level. Numerical experiments show the differences between the two approaches.
9:30 – 10:00
Development of a new OpenFOAM solver for free-surface flows
10:00 – 10:30
Coffee break
10:30 – 11:00
Evaluation
11:00 – 12:00
How to give a good talk
12:00 – 13:30
Lunch
13:30 – 14:00
Registration and Introduction CSE Workshop
14:00 – 15:00
Plenary Session:
Learning of variational models for inverse imaging problems
15:00 – 15:30
Hyperelastic Image Registration
Image registration is one of the challenging problems in image processing. Given are two images taken, for example, at different times, from different devices, or from different perspectives. The goal is to determine a reasonable transformation such that a transformed version of one of the images is similar to the second one.
In this talk, we give a brief introduction to this fascinating problem and present typical areas of application. We outline a state-of-the-art mathematical model that is based on a flexible variational setting. We discuss important features such as appropriate data fitting, regularization, and the integration of additional constraints.
A focus of the talk is on hyperelastic image registration, which is motivated by an application from positron emission tomography (PET) cardiac imaging. More specifically, we present a hyperelastic regularizer and we show that this regularizer enables the recovery of large and highly non-linear transformations. We also show that this regularization results in diffeomorphic mappings. The price to be paid is a non-convex but polyconvex objective function.
We also present a stable and efficient numerical implementation of hyperelastic registration. This implementation is based on the discretize then optimize paradigm and uses a sophisticated computation of the discrete analogues of the three invariants of the transformation tensor: lengths, areas and volumes. We show several numerical examples that illustrate the potential of the hyperelastic regularizer. We also show the mass-preserving registration of cardiac PET images, where hyperelastic regularization is mandatory.
15:30 – 16:00
Coffee break
16:00 – 16:30
Methods of Uncertainty Quantification in Marine Ecosystem Models
We present different numerical methods to investigate uncertainties in numerical models.
The background is the growing desire in climate science not only to predict future climate, but also to quantify sensitivities and uncertainties of the predicted results. Characteristic features of climate models and predictions are: dependence on so-called forcing data and model parameters, nonlinearity of the models, a coupled, multiphysics model structure, and highly tuned model configurations. As an example, we use a coupled ocean-ecosystem model of low complexity in a spatially reduced form. We compare methods using ensemble and sensitivity computations and compare uncertainty w.r.t. parameters and forcing data.
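The contrast between ensemble and sensitivity computations can be illustrated on a toy scalar model (an invented stand-in, not the ocean-ecosystem model from the talk):

```python
import numpy as np

# Toy model output depending on one uncertain parameter p (illustrative).
def model(p):
    return np.tanh(2.0 * p) + 0.1 * p**2

p_mean, p_std = 0.3, 0.05
rng = np.random.default_rng(0)

# Ensemble approach: propagate a parameter ensemble through the model.
ensemble = model(rng.normal(p_mean, p_std, size=10_000))
sigma_ensemble = ensemble.std()

# Sensitivity approach: linearize around the mean (one finite difference).
h = 1e-6
dydp = (model(p_mean + h) - model(p_mean - h)) / (2 * h)
sigma_linear = abs(dydp) * p_std

print(sigma_ensemble, sigma_linear)
```

For small parameter uncertainty and mild nonlinearity the two estimates agree; for strongly nonlinear, highly tuned climate models they can diverge, which is one motivation for comparing the methods.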
16:30 – 17:00
Saturation Rates for Filtered Back Projection Reconstructions
The term filtered back projection (FBP) refers to a well-known and commonly used reconstruction technique in computerized tomography, which allows us to recover bivariate functions from given Radon samples. The FBP formula, however, is numerically unstable and suitable low-pass filters of finite bandwidth and with a compactly supported window function are employed to make the reconstruction by FBP less sensitive to noise.
The aim of this talk is to analyse the intrinsic FBP reconstruction error which is incurred by the application of a low-pass filter. To this end, we present error estimates in Sobolev spaces of fractional order, where the obtained error bounds depend on the bandwidth of the utilized filter, on the flatness of the filter's window function at the origin, on the smoothness of the target function, and on the order of the considered Sobolev norm. Further, we prove convergence for the approximate FBP reconstruction in the treated Sobolev norms along with asymptotic convergence rates as the filter's bandwidth goes to infinity, where we observe saturation at fractional order depending on smoothness properties of the filter's window function. The theoretical results are supported by numerical experiments.
This talk is based on joint work with Armin Iske.
17:00 – 17:30
Clifford algebras and simplicial complexes
17:30 – 18:00
Lanczos’ Algorithm in Finite Precision and Quantum Mechanics
18:00 – 19:00
Dinner
19:00
Get together
Wednesday, March 28
9:00 – 10:00
Plenary Lecture:
Multiscale Multiphysics on Multicore Machines
We consider multiphysics problems modelled by time-dependent coupled partial differential equations on neighboring domains. In many applications, the different physics exhibit different time scales. Our programming paradigm is a partitioned approach, where existing codes are used for the subproblems, which then exchange information via boundary conditions. As a leading example, we look at the thermal interaction between a fluid and a structure, also called conjugate heat transfer. Examples of applications are gas quenching in steel cooling or the simulation of rocket engines.
An optimal method is fast, time adaptive, allows different time steps for the different problems and solves these problems in parallel. We describe the state of the art and where it falls short of these goals. Then, an approach that fulfils all of these properties is discussed.
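The partitioned idea with subproblem-specific time steps can be sketched on two coupled scalar ODEs, which stand in (purely illustratively) for the fluid and structure codes exchanging interface data once per coupling window:

```python
# Partitioned coupling sketch: u' = -u + v, v' = u - 2v, with subsystem u
# stepping 5x finer than subsystem v; all parameters are illustrative.
T, dt_u, dt_v = 1.0, 0.01, 0.05
u, v = 1.0, 0.0

t = 0.0
while t < T - 1e-12:
    v_frozen = v                    # "boundary data" received at the window start
    for _ in range(round(dt_v / dt_u)):      # subproblem 1: fine explicit Euler steps
        u += dt_u * (-u + v_frozen)
    v += dt_v * (u - 2.0 * v)       # subproblem 2: one coarse step with updated u
    t += dt_v

print(u, v)                          # close to the monolithic solution at t = 1
```

Freezing the exchanged data over a window introduces a first-order coupling error; the methods discussed in the talk control this error adaptively rather than with fixed windows.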
10:00 – 10:30
Fluid Dynamic Optimization of HVAC-Components with Adjoint Methods
The theory and application of continuous adjoint methods for the optimization of heating, ventilation and air-conditioning components is presented in this talk. The cost functions to be optimized are related to comfort and efficiency criteria. Shape and porosity modifications are the means of control, in a CFD-based framework. The underlying physics are the incompressible, steady state Reynolds-averaged Navier-Stokes-Fourier equations. Porosity is modeled by a Darcy term. Using the adjoint method, the sensitivity is computed from the numerical solution of the primal and adjoint equation systems. The cost of the computation is independent of the number of degrees of freedom which makes the method attractive for the application to complex industrial settings.
Of particular note are some specific problems related to the adjoint optimization. An essential aspect is the computation of the gradient from the sensitivity. The sensitivity is a directional derivative and therefore not directly usable in a gradient-based optimization strategy. Instead, the gradient is computed from the Laplace-Beltrami equation, demanding the numerical solution of an additional second-order partial differential equation on the surface of the geometry. Subsequently, it is used for the design update using gradient descent in conjunction with a proper step size. The computation of the step size for the gradient method is a crucial point: it should unify efficiency and feasibility. However, established methods such as a line search based on the Armijo rule are rather computationally expensive. Hence, a reduced Armijo approach is presented where part of the algorithm is outsourced to a coarse mesh, making the computation less expensive. Using this approach, the intensity of the design modification per update step is increased and thus the whole design cycle is accelerated. However, design modifications are often limited by constraints essential for the industrial realization. Accordingly, constraints related to the porosity modification are investigated. The insertion of an approximated regularization term into the cost functional promotes a sparsely distributed control, which is in some cases preferred.
The approaches are validated by fluid dynamic test cases. The arising framework is applied to heating, ventilation and air-conditioning components. It is shown that the adjoint method is a powerful and efficient technique to optimize heating, ventilation and air-conditioning components.
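For reference, the plain (non-reduced) Armijo backtracking that the talk's coarse-mesh variant accelerates looks as follows; the quadratic test function is an invented illustration, not a CFD objective:

```python
import numpy as np

# Armijo backtracking line search for steepest descent on a convex quadratic
# test function f(x) = 0.5 x'Ax - b'x (illustrative stand-in for the costly
# CFD objective; every failed test below costs one objective evaluation).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def f(x):    return 0.5 * x @ A @ x - b @ x
def grad(x): return A @ x - b

def armijo_step(x, c=1e-4, beta=0.5, s0=1.0):
    g = grad(x)
    d = -g                          # steepest-descent direction
    s = s0
    while f(x + s * d) > f(x) + c * s * (g @ d):   # sufficient-decrease test
        s *= beta
    return x + s * d

x = np.zeros(2)
for _ in range(200):
    x = armijo_step(x)
print(x)                            # converges to the minimizer A^{-1} b
```

The expensive part is the repeated evaluation of f inside the while loop; the reduced Armijo approach moves exactly these evaluations to a coarse mesh.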
10:30 – 11:00
Coffee break
11:00 – 11:30
From unit cube to real world problem
Scientific software development faces a general problem. Either software is developed for an academic use case only (the unit cube, for instance), or it is hacked together due to lack of time during a PhD. In neither case do students follow principles of software engineering. After all, the software is either used as is for a single case or, even worse, constantly extended with new hacks, at some point becoming unreadable and unmaintainable.
Additional aspects, such as the usage of external libraries, legacy code (mostly Fortran), the choice of I/O formats, debugging, diagnostics, or configuration (provision of run-time options), render good scientific software development troublesome. Last but not least, the choice of programming language is always crucial.
During the talk we will address two real-world problems as examples: one from the field of climate research, regarding simulation and optimization of marine ecosystem models, and one from machine learning, concerning training and application of neural networks.
11:30 – 12:00
Parallel in Time Computation with the ECHAM6 Climate Model
In recent years the increase in speed of single cores has slowed down, while the number of available cores has grown beyond the number of cores that can be used efficiently for spatial parallelization. Parallel in Time Computing may therefore be useful for further speeding up computations over very large time horizons, if additional cores are available.
I will present first results of Parallel in Time Computations using the ECHAM6 Climate Model and discuss the convergence of the Parallel in Time iteration and the speed-up compared to the conventional serial in time computation with the ECHAM6 Climate Model.
12:00 – 12:30
Anti-diffusive flux corrections for high order finite volume transport schemes
12:30 – 13:30
Lunch
13:30 – 14:00
One-Bit Compressed Sensing on Manifolds
14:00 – 14:30
From Circular Road to Infinite Lane: Stability Results for Microscopic Optimal Velocity Models
Microscopic traffic models are often considered on circular roads, that is, under periodic boundary conditions. While this approach is often justified by the apparent resemblance of the resulting phenomena to reality, the fact that it implies drivers are influenced indirectly by their own actions at an earlier point in time might lead to unnatural behaviour. From this point of view, an open-road setting seems conceptually more desirable. Here, on the other hand, in order to prevent information from leaving the system when travelling upstream, an infinite number of cars is necessary, making the analysis much more demanding. This is also true if one tries to find a microscopic system corresponding to a certain macroscopic model, typically defined without periodic boundary conditions. By considering suitable infinite-dimensional microscopic systems as the limiting case for a rising number of cars on circles of constant density, results can be transferred from one setting to the other. In particular, adequate notions of stability and the connection between Hopf periodic solutions on the circular road and jam waves on the infinite lane will be discussed.
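A circular-road setup of a standard optimal velocity model can be simulated in a few lines; the optimal velocity function and all parameters below are illustrative choices in the linearly stable regime, where a displaced car relaxes back to uniform flow:

```python
import numpy as np

# Bando-type optimal velocity model on a circular road; parameters chosen
# (illustratively) with sensitivity a > 2 V'(L/N), i.e. linearly stable.
N, L, a = 20, 40.0, 3.0

def V(h):
    return np.tanh(h - 2.0) + np.tanh(2.0)   # optimal velocity for headway h

def rhs(state):
    x, v = state[:N], state[N:]
    headway = np.roll(x, -1) - x
    headway[-1] += L                          # periodic closure: last car follows the first
    return np.concatenate([v, a * (V(headway) - v)])

# uniform flow (equal headways, speed V(L/N)) with one car displaced
x = np.arange(N) * L / N
x[0] += 0.2
v = np.full(N, V(L / N))
state = np.concatenate([x, v])

dt = 0.01
for _ in range(20000):                        # explicit Euler up to t = 200
    state = state + dt * rhs(state)

headway = np.roll(state[:N], -1) - state[:N]
headway[-1] += L
print(headway.std())                          # perturbation decays back to uniform flow
```

Lowering the sensitivity a below the stability threshold makes the same perturbation grow into a travelling jam wave, the circular-road counterpart of the phenomena the talk connects to the infinite lane.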
14:30 – 15:00
Well-posedness of Prony’s Problem
15:00 – 15:30
Coffee break
15:30 – 16:00
Efficient numerical treatment of multivariate population balance equations
16:00 – 16:30
H-matrix preconditioners for scattered data approximation
16:30 – 17:00
Toward stable computations in RBF interpolation problems
19:00
Dinner at Restaurant „Alte Schwimmhalle“
Thursday, March 29
9:00 – 10:00
Plenary Lecture:
Optimal control of a regularized fracture propagation problem
In this talk, we will discuss an optimal control problem governed by a regularized phase field fracture propagation problem. We will address in detail the steps leading to the regularized problem formulation, starting with a discussion of models for the process of fracture propagation, via an initial control problem in the form of a bilevel optimization problem with inequality constraints in the lower-level problem, to the final regularized formulation. Then, results regarding existence of solutions as well as optimality conditions are presented.
10:00 – 10:30
Optimal control of the fractional Laplace equation
In the first part of this talk we briefly summarize results for the fractional Laplace equation, using the spectral definition. In particular, we discuss its numerical approximation using the finite element method together with the recently proposed Balakrishnan representation [Bonito, Pasciak 2016] of the inverse operator. We propose an efficient method to solve the resulting linear systems and discuss certain technical aspects of the implementation.
In the second part of the talk we consider the optimal control problem of the fractional Laplace equation, using a standard tracking-type objective. The control is acting as distributed volume force and fulfills typical box constraints.
We discuss three commonly used discretization techniques for the control, while the state and adjoint state are discretized using piecewise linear elements. Namely, we consider a discretization of the control using piecewise constant functions, adding a post-processing step, and variational discretization. We derive rates of convergence for the numerical approximations of the optimal control, state and adjoint state. A numerical validation of the results is given.
This is joint work with Stefan Dohr (TU Graz), Piotr Swierczynski (TU München) and Sergejs Rogovs (UniBW München).
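The spectral definition underlying the talk can be illustrated in one space dimension with a finite-difference Laplacian, whose full eigendecomposition is cheap (an illustrative sketch; the talk's Balakrishnan quadrature avoids exactly this eigendecomposition in realistic settings):

```python
import numpy as np

# Spectral fractional Laplacian in 1D with homogeneous Dirichlet conditions:
# A^s is defined through the eigenpairs of the discrete Laplacian A.
n, s = 200, 0.5
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Standard 3-point finite-difference Laplacian (symmetric positive definite).
A = (np.diag(2.0 * np.ones(n)) +
     np.diag(-np.ones(n - 1), 1) +
     np.diag(-np.ones(n - 1), -1)) / h**2

lam, Vec = np.linalg.eigh(A)                # A = Vec diag(lam) Vec^T

def frac_apply(u, s):
    return Vec @ (lam**s * (Vec.T @ u))     # A^s u via the spectral definition

def frac_solve(f, s):
    return Vec @ ((Vec.T @ f) / lam**s)     # A^{-s} f

# sanity check: u = sin(pi x) is an eigenfunction, so A^s u is close to
# (pi^2)^s u up to the spatial discretization error
u = np.sin(np.pi * x)
print(np.max(np.abs(frac_apply(u, s) - np.pi**(2 * s) * u)))
```

Since the dense eigendecomposition costs O(n^3), sinc or Balakrishnan-type quadratures, which only require solves with shifted copies of A, are the practical route in two and three dimensions.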
10:30 – 11:00
Coffee break
11:00 – 11:30
A fully certified reduced basis method for optimal control of PDEs with control constraints (joint work with Ahmad Ahmad Ali)
11:30 – 12:00
Combining POD Model Order Reduction with Adaptivity
A crucial challenge within snapshot-based POD model order reduction for time-dependent systems lies in the input dependency. In the 'offline phase', the POD basis is computed from snapshot data obtained by solving the high-fidelity model at several time instances. If a dynamical structure is not captured by the snapshots, this feature will be missing in the ROM solution. Thus, the quality of the POD approximation can only ever be as good as the input material. In this sense, the accuracy of the POD surrogate solution is restricted by how well the snapshots represent the underlying dynamical system.
If one restricts the snapshot sampling process to uniform and static discretizations, this may lead to very fine resolutions and thus to large-scale systems which are expensive to solve or cannot even be realized numerically. Therefore, offline adaptation strategies are introduced which aim to filter out the key dynamics. On the one hand, snapshot location strategies detect suitable time instances at which the snapshots shall be generated. On the other hand, adaptivity with respect to space enables us to resolve important structures within the spatial domain. Motivated from an infinite-dimensional perspective, we explain how POD in Hilbert spaces can be implemented. The advantage of this approach is that it only requires the snapshots to lie in a common Hilbert space. This results in great flexibility concerning the actual discretization technique, such that we can even consider r-adaptive snapshots or a blend of snapshots stemming from different discretization methods. Moreover, in the context of optimal control problems, adaptive strategies are crucial in order to adjust the POD model to the current optimization iterate.
In this talk, recent results in this direction are discussed and illustrated by numerical experiments.
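The snapshot-dependency discussed above is easy to see in the basic POD computation itself, sketched here via the SVD with a Euclidean inner product on an invented toy field (the Hilbert-space setting of the talk generalizes this inner product):

```python
import numpy as np

# POD sketch: extract a basis from snapshots of a toy time-dependent field.
nx, nt = 200, 60
x = np.linspace(0.0, 1.0, nx)
t = np.linspace(0.0, 1.0, nt)
# snapshots: a travelling pulse plus a standing wave (columns = snapshots)
Y = np.array([np.exp(-100.0 * (x - 0.2 - 0.5 * tk)**2)
              + 0.3 * np.sin(2.0 * np.pi * x) * np.cos(4.0 * tk)
              for tk in t]).T

U, S, _ = np.linalg.svd(Y, full_matrices=False)     # POD modes = left singular vectors
energy = np.cumsum(S**2) / np.sum(S**2)
r = int(np.searchsorted(energy, 0.999)) + 1         # smallest rank capturing 99.9%

Yr = U[:, :r] @ (U[:, :r].T @ Y)                    # rank-r POD reconstruction
rel_err = np.linalg.norm(Y - Yr) / np.linalg.norm(Y)
print(r, rel_err)
```

Any dynamical feature absent from the columns of Y is invisible to every such basis, no matter the rank r, which is precisely why the snapshot location and spatial adaptivity strategies of the talk matter.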
12:00 – 12:30
Kernel matrices with off-diagonal decay
12:30 – 13:30
Lunch
13:30
Departure