|09.01.20||14:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
A tractable approach for 1-bit compressed sensing on manifolds
Sara Krause-Solberg, Institut für Mathematik (E-10), Lehrstuhl Angewandte Analysis
Compressed sensing deals with reconstructing an unknown high-dimensional vector from few linear measurements by additionally assuming sparsity, i.e. that many entries are zero. Recent results guarantee recovery even when only the signs of the measurements are available (one-bit CS). A natural generalization of classical CS replaces sparse vectors by vectors lying on manifolds of low intrinsic dimension. In this talk I introduce the one-bit problem and propose a tractable strategy for solving one-bit CS problems for data lying on manifolds. This is based on joint work with Johannes Maly and Mark Iwen.
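To make the one-bit setting concrete, here is a small numerical sketch. The recovery step is a generic thresholded back-projection heuristic chosen for illustration; it is not the strategy proposed in the talk, and all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 500, 5          # ambient dimension, measurements, sparsity

# Ground-truth s-sparse unit vector (illustrative example data).
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))
y = np.sign(A @ x)             # one-bit measurements: only the signs are kept

# Heuristic recovery: back-project the signs, keep the s largest entries
# (hard thresholding), renormalize. One-bit data fixes x only up to scale,
# so the result is compared as a direction.
z = A.T @ y
idx = np.argsort(np.abs(z))[-s:]
x_hat = np.zeros(n)
x_hat[idx] = z[idx]
x_hat /= np.linalg.norm(x_hat)

print(x_hat @ x)               # correlation with the true direction
```

With Gaussian measurements, the expected back-projection is proportional to the true direction, which is why even this crude heuristic aligns well for moderate m.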
|19.12.19||14:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
Parallel-in-Time PDE-constrained Optimization*
Dr. Sebastian Götschel, Zuse Institut Berlin (ZIB)
Large-scale optimization problems governed by partial differential equations (PDEs) occur in a multitude of applications, for example in inverse problems for non-destructive testing of materials and structures, or in individualized medicine. Algorithms for the numerical solution of such PDE-constrained optimization problems are computationally extremely demanding, as they require multiple PDE solves during the iterative optimization process. This is especially challenging for transient problems, where methods working on the reduced objective functional are often employed to avoid a full spatio-temporal discretization of the associated optimality system. The evaluation of the reduced gradient then requires one solve of the state equation forward in time, and one backward-in-time solve of the adjoint equation. In order to tackle real-life applications, it is not only essential to devise efficient discretization schemes, but also to use advanced techniques to exploit computer architectures and decrease the time-to-solution, which otherwise is prohibitively long.
One approach is to utilize the increasing number of CPU cores available in current computers. In addition to the more common spatial parallelization, time-parallel methods have received increasing interest in recent years. There, iterative multilevel schemes such as PFASST (Parallel Full Approximation Scheme in Space and Time) are currently state of the art and achieve significant parallel efficiency. In this talk, we investigate approaches to using PFASST for the solution of parabolic optimal control problems. Besides enabling time parallelism, the iterative nature of the temporal integrators within PFASST provides additional flexibility for reducing the cost of solving nonlinear equations, re-using previous solutions in the optimization loop, and adapting the accuracy of state and adjoint solves to the optimization progress. We discuss benefits and difficulties, and present numerical examples.
This is joint work with Michael Minion (Lawrence Berkeley National Lab).
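As a toy illustration of the forward-state/backward-adjoint pattern used for the reduced gradient, the following sketch evaluates the gradient of a scalar ODE-constrained control problem with explicit Euler and its discrete adjoint. All names, equations, and parameters are illustrative, not taken from the talk:

```python
import numpy as np

# Toy problem: minimize J(u) = 0.5*(y(T) - y_target)^2 + 0.5*alpha*dt*sum(u^2)
# subject to y' = -y + u, discretized with explicit Euler.
N, dt, alpha, y_target = 50, 0.02, 1e-3, 1.0

def objective_and_gradient(u):
    # One forward-in-time state solve.
    y = np.empty(N + 1)
    y[0] = 0.0
    for k in range(N):
        y[k + 1] = y[k] + dt * (-y[k] + u[k])
    J = 0.5 * (y[N] - y_target) ** 2 + 0.5 * alpha * dt * np.sum(u ** 2)

    # One backward-in-time adjoint solve (discrete adjoint of explicit Euler).
    p = np.empty(N + 1)
    p[N] = y[N] - y_target
    for k in range(N - 1, -1, -1):
        p[k] = (1.0 - dt) * p[k + 1]

    grad = alpha * dt * u + dt * p[1:]   # reduced gradient from the adjoint
    return J, grad

u = np.full(N, 0.5)
J, g = objective_and_gradient(u)

# Finite-difference check of one gradient entry.
eps = 1e-6
du = np.zeros(N); du[10] = eps
J2, _ = objective_and_gradient(u + du)
print(g[10], (J2 - J) / eps)   # the two values should agree closely
```

The key point mirrored from the abstract: one forward solve plus one backward solve yields the full gradient, independently of the number of control variables.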
|16.12.19||13:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
Preconditioners for linear systems arising from RBF-FD discretizations of partial differential equations (Bachelor thesis)
|12.12.19||14:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
Molecular-Continuum Flow Simulation with MaMiCo: Where HPC and Data Analytics Meet
Prof. Dr. Philipp Neumann, Helmut-Schmidt-Universität
Molecular-continuum methods, as referred to in my talk, employ a domain decomposition and compute fluid flow either by means of molecular dynamics (MD) or computational fluid dynamics (CFD) in the sub-domains. This enables multiscale investigations of nano- and microflows beyond the limits of validity of classical CFD.
In my talk, I will focus on latest developments in the macro-micro-coupling tool (MaMiCo). MaMiCo enables the coupling of arbitrary CFD and MD solvers, hiding the entire coupling algorithmics from the actual single-scale solvers. After a brief discussion of the limits of the MD method, I will focus on various aspects of the molecular-continuum coupling and its realization in MaMiCo, including parallelization, multi-instance sampling for MD (that is ensemble averaging) and filtering methods that extract smooth responses from the fluctuating MD description to enhance consistency on the side of the continuum solver. I will further present preliminary results from a study which aims to generate open boundary force models for MD using machine learning.
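The multi-instance sampling and filtering idea can be illustrated generically. This sketch does not use MaMiCo's API; all names, the noise model, and the filter choice are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
steps, instances = 400, 16
true_velocity = np.sin(np.linspace(0, 2 * np.pi, steps))   # smooth "continuum" signal

# Each MD instance samples the same flow quantity with large thermal noise.
samples = true_velocity + rng.standard_normal((instances, steps))

# Ensemble averaging: noise std drops roughly by 1/sqrt(instances).
ensemble_avg = samples.mean(axis=0)

def moving_average(x, w):
    # Simple post-filter to further smooth the ensemble-averaged signal.
    return np.convolve(x, np.ones(w) / w, mode="same")

filtered = moving_average(ensemble_avg, 11)

print(np.std(samples[0] - true_velocity),   # error of a single instance
      np.std(filtered - true_velocity))     # after averaging + filtering
```

This captures the motivation in the abstract: fluctuations of the raw MD description must be reduced before the quantity is handed to the continuum solver.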
|05.12.19||14:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
A new approach to the QR decomposition of hierarchical matrices
All existing QR decompositions for hierarchical matrices suffer from numerical drawbacks that limit their use in many applications. In this talk, I will present a new method based on the recursive WY-based QR decomposition by Elmroth and Gustavson. It is an extension of an already existing method for a subclass of hierarchical matrices developed by Kressner and Susnjara.
I will try to keep things as simple as possible and give a short introduction to hierarchical matrices as well. Previous knowledge of hierarchical matrices is not necessary to understand the basic ideas and main obstacles of the new algorithm.
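For readers who want to see the dense-matrix version of the recursive WY idea, here is a sketch of an Elmroth/Gustavson-style recursive QR; the hierarchical-matrix extension discussed in the talk is considerably more involved, and this code is only an illustration of the splitting and the compact WY representation Q = I - Y T Yᵀ:

```python
import numpy as np

def householder(x):
    """Householder vector v (v[0]=1) and tau with (I - tau*v*v^T) x = beta*e1."""
    v = x.astype(float)
    normx = np.linalg.norm(v)
    if normx == 0.0:
        v = np.zeros_like(v); v[0] = 1.0
        return v, 0.0, 0.0
    beta = -np.copysign(normx, v[0])
    v[0] -= beta                      # v = x - beta*e1, so v[0] != 0
    tau = 2.0 * v[0] ** 2 / (v @ v)
    v /= v[0]
    return v, tau, beta

def rec_qr(A, offset=0):
    """Recursively overwrite A's upper triangle with R; return Y, T
    such that Q = I - Y @ T @ Y.T is orthogonal (compact WY form)."""
    m, n = A.shape
    if n == 1:
        v, tau, beta = householder(A[offset:, 0])
        A[offset, 0] = beta
        A[offset + 1:, 0] = 0.0
        Y = np.zeros((m, 1)); Y[offset:, 0] = v
        return Y, np.array([[tau]])
    n1 = n // 2
    Y1, T1 = rec_qr(A[:, :n1], offset)
    # Apply Q1^T = I - Y1 T1^T Y1^T to the trailing columns.
    A[:, n1:] -= Y1 @ (T1.T @ (Y1.T @ A[:, n1:]))
    Y2, T2 = rec_qr(A[:, n1:], offset + n1)
    T3 = -T1 @ (Y1.T @ Y2) @ T2       # coupling block between the two halves
    Y = np.hstack([Y1, Y2])
    T = np.block([[T1, T3], [np.zeros((n - n1, n1)), T2]])
    return Y, T

rng = np.random.default_rng(1)
A0 = rng.standard_normal((7, 4))
A = A0.copy()
Y, T = rec_qr(A)
Q = np.eye(7) - Y @ T @ Y.T
R = np.triu(A)
print(np.linalg.norm(Q @ R - A0))     # close to machine precision
```

The recursion splits the columns in half, factors the left half, applies its compact WY transform to the right half, and merges the two T factors.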
|26.11.19||17:00||Am Schwarzenberg-Campus 5 (H), Raum H0.10||
Two-scale convergence for evolutionary equations
Marcus Moppi Waurick, Department of Mathematics and Statistics, University of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XH, Scotland, Room number: LT1007
In the talk, we shall develop a general framework for the treatment of both deterministic and stochastic homogenisation problems for evolutionary equations. The versatility of the methods allows the unified treatment of static, dynamic, as well as mixed-type problems.
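For orientation, the classical deterministic notion underlying the talk is the standard definition of two-scale convergence due to Nguetseng and Allaire: a bounded sequence \((u_\varepsilon)\) in \(L^2(\Omega)\) two-scale converges to \(u \in L^2(\Omega \times Y)\), with \(Y\) the periodicity cell, if for all smooth test functions \(\varphi(x,y)\) that are \(Y\)-periodic in \(y\),

```latex
\[
  \lim_{\varepsilon \to 0} \int_\Omega u_\varepsilon(x)\,
    \varphi\!\Big(x, \tfrac{x}{\varepsilon}\Big)\, dx
  \;=\; \int_\Omega \int_Y u(x,y)\, \varphi(x,y)\, dy\, dx .
\]
```

The limit function retains the fast variable \(y\), which is what allows homogenised coefficients to be read off in the limit problem.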
|21.11.19||14:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
Parallel-in-time integration with PFASST: from prototyping to applications
Robert Speck, Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, 52428 Jülich
The efficient use of modern supercomputers has become one of the key challenges in computational science. New mathematical concepts are needed to fully exploit massively parallel architectures. For the numerical solution of time-dependent processes, time-parallel methods have opened new ways to overcome scaling limits. With the "parallel full approximation scheme in space and time" (PFASST), multiple time steps can be integrated simultaneously. Based on spectral deferred correction (SDC) methods and nonlinear multigrid ideas, PFASST uses a space-time hierarchy with various coarsening strategies to maximize parallel efficiency. In numerous studies, this approach has been used on up to 448K cores and coupled to space-parallel solvers using finite differences, spectral methods or even particles for discretization in space. Yet, since the integration of SDC or PFASST into an existing application code is not straightforward and the potential gain is typically uncertain, we will present in this talk our Python prototyping framework pySDC. It allows new ideas to be tested rapidly and first toy problems to be implemented easily. We will also discuss the transition from pySDC to application-specific implementations and show recent use cases.
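To illustrate the SDC building block underlying PFASST, here is a minimal single-time-step explicit SDC sweep for the scalar test equation y' = λy. The node choice and sweep count are illustrative, and the code is independent of pySDC:

```python
import numpy as np

lam, dt, y0 = -1.0, 0.5, 1.0
f = lambda y: lam * y

M = 4
t = dt * 0.5 * (1.0 - np.cos(np.pi * np.arange(M + 1) / M))  # Chebyshev-Lobatto nodes

# S[m, j] = integral of the j-th Lagrange basis polynomial from t[m] to t[m+1].
S = np.zeros((M, M + 1))
for j in range(M + 1):
    e = np.zeros(M + 1); e[j] = 1.0
    P = np.poly1d(np.polyfit(t, e, M)).integ()
    for m in range(M):
        S[m, j] = P(t[m + 1]) - P(t[m])

y = np.full(M + 1, y0)            # initial iterate: spread y0 to all nodes
for sweep in range(8):
    fk = f(y)                     # right-hand side of the previous iterate
    ynew = np.empty_like(y); ynew[0] = y0
    for m in range(M):
        ynew[m + 1] = (ynew[m]
                       + (t[m + 1] - t[m]) * (f(ynew[m]) - fk[m])  # low-order correction
                       + S[m] @ fk)                                # spectral quadrature
    y = ynew

print(y[-1], np.exp(lam * dt))    # final node vs. exact solution
```

Each sweep applies a cheap low-order corrector against a high-order quadrature of the previous iterate; the iterates converge to the collocation solution, and PFASST runs such sweeps concurrently across a space-time hierarchy.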
|18.11.19||14:15||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
The improved product of hierarchical matrices using extended sum expressions (Master thesis)
|14.11.19||14:00||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
Where are my ions? A new algorithm to track fast ions in the magnetic field of a fusion reactor
Daniel Ruprecht, TUHH, Institut für Mathematik, Lehrstuhl für Computational Mathematics, Am Schwarzenberg-Campus 3, Gebäude E, 21073 Hamburg
The plasma in a fusion reactor is heated by neutral beam injection: injecting high-energy neutral particles, which quickly ionize and swirl around in the reactor's magnetic field. Modelling this process requires solving the Lorentz equations numerically over long times (up to a second) with very small time steps (on the order of nanoseconds), which means very many time steps and thus long simulation times (from days up to a week). The talk will introduce GMRES-Boris-SDC (GBSDC), a new time stepping algorithm that can reduce computational cost compared to the currently used Boris method. The method is a potpourri of various numerical techniques, including the GMRES linear solver, spectral deferred corrections, the velocity Verlet scheme and the Boris trick. I will describe the algorithm and show examples of its performance for benchmarks with varying degrees of realism.
This is joint work with Dr Krasymyr Tretiak, School of Mathematics, University of Leeds.
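As background, here is a minimal sketch of the standard Boris push, the baseline method mentioned in the abstract, with the charge-to-mass ratio set to 1 and an illustrative uniform-field setup:

```python
import numpy as np

def boris_step(x, v, E, B, dt):
    # Half electric kick, exact-norm magnetic rotation, half electric kick.
    vm = v + 0.5 * dt * E
    tvec = 0.5 * dt * B
    s = 2.0 * tvec / (1.0 + tvec @ tvec)
    vprime = vm + np.cross(vm, tvec)
    vplus = vm + np.cross(vprime, s)
    vnew = vplus + 0.5 * dt * E
    return x + dt * vnew, vnew

# Uniform magnetic field, no electric field: the particle gyrates in a circle.
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
dt, steps = 0.1, 1000

for _ in range(steps):
    x, v = boris_step(x, v, E, B, dt)

print(np.linalg.norm(v))   # |v| is preserved by the rotation step
```

The "Boris trick" is the rotation via the auxiliary vectors t and s, which keeps the kinetic energy exactly constant when E = 0; this robustness for long gyration times is why the method is the workhorse the talk seeks to accelerate.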
|12.11.19||15:15||Am Schwarzenberg-Campus 3 (E), Raum 3.074||
Project presentations of Canadian interns
Josiah Vandewetering and Braeden Syrnyk
During their work-term at TUHH the two Canadian students worked on projects relating to current research in the institute.
As their term comes to an end they will present their ongoing work in short talks.
* Talk held as part of the Kolloquium für Angewandte Mathematik (Colloquium on Applied Mathematics)