**2023**

**Solving Nonlinear Finite Element Problems in Elasticity**
Lina Fesefeldt, 12/06/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: Finite element methods (FEM) for displacement problems in elasticity lead to systems of nonlinear equations, which are usually solved with Newton's method or a related method. Based on a benchmark problem in high-order FEM, we explore traditional solution techniques for the nonlinear equation system, such as step-width selection and quasi-Newton methods. We also consider algorithms designed specifically for displacement problems in nonlinear structural analysis, such as load-step and arc-length methods. We extend traditional load-step methods to a new approach that exploits the hierarchical structure of the problem and saves about 50% of computation time compared to the benchmark. In an outlook, we discuss new developments in nonlinear preconditioning and their applicability to displacement problems in nonlinear FEM.
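As a minimal, hypothetical sketch of the load-stepping idea mentioned in the abstract (not the speaker's actual method or benchmark), one can apply the external load in increments and run Newton's method at each load level, warm-started from the previously converged state; the scalar "stiffening spring" residual below is an illustrative stand-in for the FEM equations:

```python
def newton(residual, jacobian, u0, tol=1e-10, max_iter=50):
    """Plain Newton iteration for a scalar residual R(u) = 0."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            break
        u -= r / jacobian(u)
    return u

def solve_with_load_steps(n_steps=10):
    """Apply the external load in increments; each converged state
    serves as the initial guess for the next Newton solve."""
    f_ext = 1.0                      # total external load
    u = 0.0
    for k in range(1, n_steps + 1):
        f = f_ext * k / n_steps      # current load level
        # toy internal force: cubic "stiffening spring" u + u**3
        u = newton(lambda v: v + v**3 - f,
                   lambda v: 1.0 + 3.0 * v**2,
                   u)
    return u

u = solve_with_load_steps()          # satisfies u + u**3 = 1 (u is about 0.68)
```

Warm-starting from the previous load level is what keeps each Newton solve inside its basin of attraction, which is the practical point of load stepping.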

**Parallel-In-Time Integration with Applications to Real World Problems from Electrical Engineering**
Prof. Sebastian Schöps, TU Darmstadt, 11/08/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: Time-domain simulation of large-scale problems becomes computationally prohibitive if space-parallelization saturates. This is particularly challenging if long time periods are considered, e.g., if the start-up of an electrical machine until steady state is simulated. In this contribution, several parallel-in-time methods are discussed for initial-boundary-value problems and for time-periodic boundary value problems. All these methods are based on a subdivision of the time interval into as many subintervals as computing cores are available. For example, the well-known parareal method works similarly to multiple shooting methods; it solves two types of problems iteratively until convergence is reached: a cheap problem defined on coarse grids is solved sequentially on the whole time interval to propagate initial conditions (and approximate derivatives), and high-fidelity problems are solved on the subintervals in parallel. We also discuss Paraexp and Waveform Relaxation methods in the context of real-world engineering problems from electrical engineering.

Additional information about the author: https://www.cem.tu-darmstadt.de/cem/group/ref_group_details_27328.de.jsp
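The parareal iteration described above (serial coarse sweep plus parallelizable fine corrections) can be sketched in a few lines; the Dahlquist test problem and the explicit-Euler coarse/fine propagators below are illustrative choices, not taken from the talk:

```python
def parareal(u0, t0, t1, n_sub, coarse, fine, n_iter):
    """Textbook parareal: a cheap coarse propagator is applied serially,
    then corrected with fine solves that run in parallel in practice."""
    dt = (t1 - t0) / n_sub
    ts = [t0 + i * dt for i in range(n_sub + 1)]
    # initial serial coarse sweep
    U = [u0]
    for i in range(n_sub):
        U.append(coarse(U[i], ts[i], ts[i + 1]))
    for _ in range(n_iter):
        # fine and coarse solves from the current iterate (parallelizable)
        F = [fine(U[i], ts[i], ts[i + 1]) for i in range(n_sub)]
        G_old = [coarse(U[i], ts[i], ts[i + 1]) for i in range(n_sub)]
        # sequential correction sweep with the updated initial values
        for i in range(n_sub):
            U[i + 1] = coarse(U[i], ts[i], ts[i + 1]) + F[i] - G_old[i]
    return U

# Dahlquist test problem u' = lam * u, u(0) = 1
lam = -1.0

def euler(u, ta, tb, n):
    h = (tb - ta) / n
    for _ in range(n):
        u += h * lam * u
    return u

coarse = lambda u, ta, tb: euler(u, ta, tb, 1)     # one Euler step
fine   = lambda u, ta, tb: euler(u, ta, tb, 100)   # many Euler steps
U = parareal(1.0, 0.0, 1.0, 8, coarse, fine, 4)    # U[-1] approximates exp(-1)
```

After k iterations the first k+1 subinterval boundary values coincide exactly with the serial fine solution, which is the defining property of the method.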

**Physics Informed Neural Networks for the Lorentz Equations**
Finn Sommer, 11/01/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: Physics Informed Neural Networks (PINNs) are becoming increasingly important in solving initial and boundary value problems. In contrast to conventional neural networks, they do not require labelled data for training and can thus be assigned to the field of unsupervised learning [3]. In this work, a PINN is trained to learn the equation of motion of a charged particle in an electromagnetic field. It turns out that networks trained with the L-BFGS optimisation algorithm show better convergence behaviour than those trained with the Adam optimisation algorithm commonly used in deep learning. In addition, pre-training neural networks on the solution of a numerical method such as the Crank-Nicolson method can significantly speed up the training of PINNs.
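To illustrate the physics-informed training idea (a residual loss instead of labelled data), here is a deliberately tiny sketch that fits a polynomial ansatz to the ODE u' = -u, u(0) = 1, by plain gradient descent; the actual talk concerns neural networks, the Lorentz force equations, and the Adam/L-BFGS optimisers, none of which appear in this toy:

```python
# Polynomial ansatz u(t) = sum_k c[k] * t**k on collocation points in [0, 1]
ts = [i / 20 for i in range(21)]
deg = 4
c = [0.0] * (deg + 1)

def u(t):
    return sum(c[k] * t**k for k in range(deg + 1))

def du(t):
    return sum(k * c[k] * t**(k - 1) for k in range(1, deg + 1))

def loss_and_grad():
    """Physics-informed loss: mean squared ODE residual u' + u = 0 at the
    collocation points, plus the initial condition u(0) = 1.
    No labelled solution data is used anywhere."""
    n = len(ts)
    grad = [0.0] * (deg + 1)
    loss = 0.0
    for t in ts:
        r = du(t) + u(t)                        # ODE residual at t
        loss += r * r / n
        for k in range(deg + 1):
            # d(residual)/dc[k] = k * t**(k-1) + t**k
            dr = (k * t**(k - 1) if k > 0 else 0.0) + t**k
            grad[k] += 2.0 * r * dr / n
    r0 = u(0.0) - 1.0                           # initial-condition residual
    loss += r0 * r0
    grad[0] += 2.0 * r0
    return loss, grad

# plain gradient-descent "training"; the loss drives u toward exp(-t)
lr = 0.02
for _ in range(5000):
    loss, grad = loss_and_grad()
    for k in range(deg + 1):
        c[k] -= lr * grad[k]
```

The same residual-plus-initial-condition loss structure carries over when the ansatz is a neural network and the optimiser is Adam or L-BFGS.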

**Parareal with a physics informed neural network as coarse propagator**
Abdul Qadir Ibrahim, 10/25/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: Parallel-in-time algorithms provide an additional layer of concurrency for the numerical integration of models based on time-dependent differential equations. Methods like Parareal, which parallelize across multiple time steps, rely on a computationally cheap coarse integrator to propagate information forward in time, while a parallelizable, expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order, or a simplified model. Our research proposes to use a physics-informed neural network (PINN) instead. We demonstrate for the Black-Scholes equation, a partial differential equation from computational finance, that Parareal with a PINN coarse propagator provides better speedup than a numerical coarse propagator. Training and evaluating a neural network are both tasks whose computing patterns are well suited for GPUs. By contrast, mesh-based algorithms with their low computational intensity struggle to perform well. We show that moving the PINN coarse propagator to a GPU while running the numerical fine propagator on the CPU further improves Parareal's single-node performance. This suggests that integrating machine learning techniques into parallel-in-time integration methods and exploiting their differences in computing patterns might offer a way to better utilize heterogeneous architectures.

**Physics-Constrained Deep Learning for Downscaling and Emulation**
Paula Harder, Fraunhofer ITWM, 10/10/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: The availability of reliable, high-resolution climate and weather data is important to inform long-term decisions on climate adaptation and mitigation and to guide rapid responses to extreme events. Forecasting models are limited by computational costs and, therefore, often generate coarse-resolution predictions. Two common ways to decrease computational efforts with deep learning are downscaling, the increase of the resolution directly on the predicted climate variables, and emulation, the replacement of model parts to achieve faster runs. Here, we look at several downscaling tasks and an aerosol emulation problem. While deep learning shows promising results, it may not obey simple physical constraints, such as mass conservation or mass positivity. We tackle this by investigating both soft and hard constraining methodologies in different setups, showing that incorporating hard constraints can be beneficial for both downscaling and emulation problems.
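A hard constraint of the kind discussed above can be as simple as a final rescaling layer; this hypothetical sketch enforces exact mass conservation between a predicted high-resolution patch and its coarse-grid cell (the constraining methodologies in the talk may differ):

```python
def enforce_mass_conservation(fine_patch, coarse_value):
    """Hard-constraint layer: rescale a predicted high-resolution patch so
    its mean exactly matches the corresponding coarse-grid value.
    Positivity of the inputs is preserved when coarse_value > 0."""
    n = len(fine_patch)
    mean = sum(fine_patch) / n
    if mean == 0.0:
        return [coarse_value] * n      # degenerate patch: distribute evenly
    scale = coarse_value / mean
    return [v * scale for v in fine_patch]

# the 2x2 patch now carries exactly the coarse cell's mass (up to round-off)
patch = enforce_mass_conservation([0.2, 0.4, 0.6, 0.8], coarse_value=1.0)
```

Because the constraint is built into the forward pass rather than the loss, it holds exactly at inference time, which is the advantage of hard over soft constraining.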

**Harnessing the Power of GPUs: A Path to Efficiency and Excellence**
Prof. Sohan Lal, Massively Parallel Systems Group, 09/27/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: Graphics Processing Units (GPUs), initially designed as accelerators for graphics applications, have revolutionized the computing landscape with their unparalleled computational prowess. Today, GPU-accelerated systems are present everywhere, for example in our smartphones, cars, and supercomputers. They are transforming the world in many ways, and several exciting possibilities, such as digital twins and precision medicine, are on the horizon. While GPU-accelerated systems are desirable, their optimal utilization is crucial; otherwise, they can be very expensive in terms of power and energy consumption, which runs counter to our efforts to reduce our carbon footprint. A single GPU can draw up to 700 watts, while GPU-powered supercomputers scale to the energy-hungry range of 1 to 10 megawatts.

In this presentation, I will talk about the performance, power, and energy efficiency of GPUs. I will present a GPU power simulator that we developed to estimate the power and energy efficiency of GPUs, and show how it can be used to investigate bottlenecks that cause low performance and low energy efficiency, highlighting the wide gap between the achieved energy efficiency of GPUs and the energy-efficiency aims of exascale computing.

Finally, I will briefly highlight two ongoing projects aimed at harnessing GPUs effectively within High-Performance Computing (HPC) clusters. In the first project, we are developing techniques to predict the scalability of applications on HPC clusters; the goal is to automatically choose the best number of nodes for an application depending on its scalability. In the second project, we are developing a tool to enable automatic optimization of HPC applications on NVIDIA Hopper (and next-generation) GPUs. As we navigate the intricate interplay of performance, power, and energy efficiency, we embark on a quest to maximize the transformative potential of GPUs while minimizing their environmental footprint.

Additional information about the author: https://www.mps.tuhh.de/

**Upper bound on Parareal with spatial coarsening**
Ausra Pogozelskyte, University of Geneva, 07/25/2023, 02:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom: https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

Abstract: Parareal is the most studied parallel-in-time method; by introducing parallelism in the time dimension, it helps relieve the communication bottlenecks that appear when parallelism is exploited only in the spatial dimension. An expensive part of Parareal is the sequential solve with the coarse operator, so for performance reasons it can be attractive to apply the coarse operator on a grid that is coarser not only in time but also in space. In this talk, we will discuss an alternative approach to the Generating Function Method (GFM) for computing Parareal bounds and how it can be used to compute linear and superlinear bounds. We will then extend the analysis to Parareal with spatial coarsening (coarsening factor 2 in space and time) and discuss the associated challenges. Finally, numerical results for the heat equation will be provided.

Additional information about the author: https://unige.ch/~pogozels/

**Efficient and robust numerical methods based on adaptivity and structure preservation**
Prof. Hendrik Ranocha, AM – Angewandte Mathematik, Universität Hamburg, 07/05/2023, 12:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074

Abstract: We present some recent developments for the numerical simulation of transport-dominated problems such as compressible fluid flows and nonlinear dispersive wave equations. We begin with a brief review of modern entropy-stable semidiscretizations of hyperbolic conservation laws and use the method of lines to obtain efficient, fully discrete numerical methods. Next, we introduce means to preserve the entropy structures under time discretization as well. To this end, we present the relaxation approach, a recent technique that applies small modifications to standard time integration schemes such as Runge-Kutta or linear multistep methods and is designed to preserve the conservation or dissipation of important functionals of the solution. Such a functional can be the entropy in the case of compressible fluid flows, the energy of Hamiltonian problems, or another nonlinear invariant.

Additional information about the author: https://www.math.uni-hamburg.de/forschung/bereiche/am/struct-num/personen/ranocha-hendrik.html
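The relaxation idea described above can be illustrated on a quadratic invariant, where the relaxation parameter gamma has a closed form; the harmonic oscillator and the RK4 scheme below are illustrative choices (a full relaxation method would also reinterpret the relaxed step as reaching time t + gamma * dt):

```python
def rk4_increment(f, u, dt):
    """Standard RK4 update direction d, so that u_new = u + d."""
    n = len(u)
    k1 = f(u)
    k2 = f([u[i] + dt / 2 * k1[i] for i in range(n)])
    k3 = f([u[i] + dt / 2 * k2[i] for i in range(n)])
    k4 = f([u[i] + dt * k3[i] for i in range(n)])
    return [dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(n)]

def relaxed_step(f, u, dt):
    """Relaxation: scale the update by gamma so the quadratic functional
    ||u||^2 is conserved exactly: ||u + gamma*d||^2 = ||u||^2 requires
    gamma = -2 <u, d> / ||d||^2 (the nonzero root)."""
    d = rk4_increment(f, u, dt)
    dd = sum(x * x for x in d)
    ud = sum(u[i] * d[i] for i in range(len(u)))
    gamma = 1.0 if dd == 0.0 else -2.0 * ud / dd   # close to 1 for small dt
    return [u[i] + gamma * d[i] for i in range(len(u))]

# harmonic oscillator u' = (u2, -u1); the energy u1^2 + u2^2 is invariant
f = lambda u: [u[1], -u[0]]
u = [1.0, 0.0]
for _ in range(1000):
    u = relaxed_step(f, u, 0.1)
energy = u[0] ** 2 + u[1] ** 2   # stays at 1 up to round-off
```

With plain RK4 the energy would drift over long times; the one-line gamma correction removes that drift while leaving the order of accuracy essentially intact, which is the appeal of the approach.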

**Machine learning for weather and climate modelling**
Peter Düben, European Centre for Medium-Range Weather Forecasts, 01/23/2023, 03:00 pm, Am Schwarzenberg-Campus 3 (E), Room 3.074

Abstract: This talk will start with a high-level overview of how machine learning can be used to improve weather and climate predictions. Afterwards, it will provide more detail on recent developments in machine-learned weather forecast models and how they compare to conventional models and numerical methods.