Talks
Talks 111 to 120 of 746
| Date | Time | Location | Talk |
|---|---|---|---|
| 14.02.24 | 12:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom | Training Large Language Models on High-Performance Computing Systems. Chelsea John, Forschungszentrum Jülich. This presentation explores the intricacies of training large language models (LLMs) on High-Performance Computing (HPC) systems, unveiling the key components, challenges, and optimizations involved in handling the computational demands of state-of-the-art language models. The talk delves into the nuances of model architecture, data preprocessing, and hyperparameter tuning, and gives a comprehensive overview of parallelization strategies, scalability challenges, and resource allocation. Additionally, it touches on the implications for research, highlighting potential progress and future applications of LLMs. Zoomlink: |
| 02.02.24 | 14:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 | Dimension estimation [student research project]. Michel Krispin |
| 24.01.24 | 13:00 | TUHH, Am Schwarzenberg-Campus 3 (E), Room 3.074 | Sampling Theorems in Positive Definite Reproducing Kernel Hilbert Spaces [Bachelor's thesis]. Lennart Ohlsen, degree program TM; supervisor and first examiner: Armin Iske |
| 24.01.24 | 12:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom | Low-synchronization techniques for communication reduction in Krylov subspace methods*. Kathryn Lund, Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg. With exascale-capable supercomputers already on the horizon, reducing communication operations in orthogonalization kernels like QR factorization has become even more imperative. Low-synchronization Gram-Schmidt methods, first introduced in Swirydowicz et al. (Numer. Lin. Alg. Appl. 28(2):e2343, 2020), have been shown to improve the scalability of the Arnoldi method in high-performance, distributed computing. Block versions of low-synchronization Gram-Schmidt show further potential for speeding up algorithms, as column-batching allows for maximizing cache usage with matrix-matrix operations. We will examine how low-synchronization block Gram-Schmidt variants can be transformed into block Arnoldi variants for use in standard Krylov subspace methods like block generalized minimal residual methods (BGMRES). We also demonstrate how an adaptive restarting heuristic can handle instabilities that arise with the increasing condition number of the Krylov basis. The performance, accuracy, and stability of these methods are assessed via a flexible comparison tool written in MATLAB. Zoomlink: |
| 15.01.24 | 11:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom | Development of a Conversational Interface Based on Institution-Specific Documentation through LLM Finetuning [project thesis]. Philip Suskin. Zoomlink: |
| 10.01.24 | 12:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom | A scalar inverse problem with the Neural Galerkin scheme*. Djahou Norbert Tognon, Sorbonne Université. Neural networks trained with machine learning techniques are currently attracting great attention as nonlinear approximation methods for solving forward and inverse problems involving high-dimensional partial differential equations (PDEs). In a recent paper, a Neural Galerkin scheme was proposed to solve PDEs by means of deep learning. In this approach, the training data samples are generated by an active learning process for the numerical approximation. In this talk, we apply this approach to a parameter estimation problem and propose an algorithm based on the Neural Galerkin scheme to estimate a scalar coefficient in a nonlinear PDE. We provide numerical results for the Korteweg-de Vries (KdV) equation in one dimension. Zoomlink: |
| 09.01.24 | 15:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 | Data-Driven Approaches for the Maxey-Riley Equation [Master's thesis]. Niklas Dieckow |
| 08.01.24 | 16:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 | Approximation methods in sequence spaces. Riko Ukena, E-10. We discuss approximation methods for linear equations in sequence spaces. When cutting a finite matrix out of an infinite-dimensional operator, a choice of boundary conditions has to be made. Choosing zero boundary conditions leads to the classical finite section method, for which conditions for applicability are known. We derive similar applicability conditions for the choice of periodic boundary conditions. |
| 21.12.23 | 17:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 | Shortest Path Lengths in k-Nearest-Neighbor Graphs [Bachelor's thesis]. Ali Maznouk |
| 20.12.23 | 17:00 | Am Schwarzenberg-Campus 3 (E), Room 3.074 | Gaussian upper heat kernel bounds on graphs. Christian Rose, Universität Potsdam. Abstract: tba |
* Talk given as part of the Colloquium on Applied Mathematics





