
Talks


Talks 111 to 120 of 746

Date Time Venue Talk
02/14/24 12:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom Training Large Language Models on High-Performance Computing Systems
Chelsea John, Forschungszentrum Jülich

This presentation explores the intricacies of training large language models (LLMs) on High-Performance Computing (HPC) systems, unveiling the key components, challenges, and optimizations involved in handling the computational demands of state-of-the-art language models. Delving into the nuances of model architecture, data preprocessing, and hyperparameter tuning, the talk gives a comprehensive overview of parallelization strategies, scalability challenges, and resource allocation. Additionally, it touches on the implications for research, highlighting potential progress and future applications of LLMs.
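
As a minimal illustration of the data-parallel end of the parallelization strategies touched on above (the speaker's actual training setup is not described here), a distributed training loop in PyTorch might be sketched as follows; the tiny model, the random token data, and all hyperparameters are placeholders rather than a real LLM configuration.

    # Minimal data-parallel training sketch with PyTorch DistributedDataParallel.
    # Launch on one node with, e.g.:  torchrun --nproc_per_node=4 train_ddp.py
    # The tiny model and random token data stand in for a real LLM and corpus.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group("nccl" if torch.cuda.is_available() else "gloo")
        local_rank = int(os.environ.get("LOCAL_RANK", 0))
        device = torch.device(f"cuda:{local_rank}" if torch.cuda.is_available() else "cpu")

        vocab_size, dim = 1000, 64
        model = torch.nn.Sequential(
            torch.nn.Embedding(vocab_size, dim),   # placeholder "language model"
            torch.nn.Linear(dim, vocab_size),
        ).to(device)
        model = DDP(model)                         # gradients are all-reduced across ranks

        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
        loss_fn = torch.nn.CrossEntropyLoss()

        for step in range(10):
            # Each rank processes its own shard of (here: random) token sequences.
            tokens = torch.randint(0, vocab_size, (8, 32), device=device)
            logits = model(tokens)                 # (batch, sequence, vocab)
            # Next-token prediction: targets are the inputs shifted by one position.
            loss = loss_fn(logits[:, :-1].reshape(-1, vocab_size),
                           tokens[:, 1:].reshape(-1))
            optimizer.zero_grad()
            loss.backward()                        # DDP synchronizes gradients here
            optimizer.step()
            if dist.get_rank() == 0:
                print(f"step {step}: loss {loss.item():.3f}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()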

Zoom link:
https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

02/02/24 02:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 Dimension estimation [student research project]
Michel Krispin

01/24/24 01:00 pm TUHH, Am Schwarzenberg-Campus 3 (E), Room 3.074 Sampling Theorems in Positive Definite Reproducing Kernel Hilbert Spaces [Bachelor's thesis]
Lennart Ohlsen, TM degree program, supervisor and first examiner: Armin Iske

01/24/24 12:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom Low-synchronization techniques for communication reduction in Krylov subspace methods*
Kathryn Lund, Max Planck Institute for Dynamics of Complex Technical Systems, Magdeburg

With exascale-capable supercomputers already on the horizon, reducing communication operations in orthogonalization kernels like QR factorization has become even more imperative. Low-synchronization Gram-Schmidt methods, first introduced in Swirydowicz et al. (Numer. Lin. Alg. Appl. 28(2):e2343, 2020), have been shown to improve the scalability of the Arnoldi method in high-performance, distributed computing. Block versions of low-synchronization Gram-Schmidt show further potential for speeding up algorithms, as column-batching allows for maximizing cache usage with matrix-matrix operations. We will examine how low-synchronization block Gram-Schmidt variants can be transformed into block Arnoldi variants for use in standard Krylov subspace methods like block generalized minimal residual methods (BGMRES). We also demonstrate how an adaptive restarting heuristic can handle instabilities that arise with the increasing condition number of the Krylov basis. The performance, accuracy, and stability of these methods are assessed via a flexible comparison tool written in MATLAB.
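
To give a rough impression of why column batching helps (the following NumPy sketch is our own illustration, not the MATLAB comparison tool mentioned in the abstract): in classical block Gram-Schmidt, the projection against all previously orthogonalized columns is a single matrix-matrix product, i.e. one global reduction per block in a distributed setting, instead of one reduction per column.

    # Block classical Gram-Schmidt (BCGS): hypothetical NumPy illustration.
    # Each block needs one projection Q.T @ Xj against all previous columns
    # (one all-reduce in MPI terms) plus a local intra-block QR factorization.
    import numpy as np

    def bcgs(X, block_size):
        """Orthonormalize the columns of X block by block (no re-orthogonalization)."""
        n = X.shape[1]
        Q_blocks = []
        for j in range(0, n, block_size):
            Xj = X[:, j:j + block_size].copy()
            if Q_blocks:
                Q = np.hstack(Q_blocks)
                Xj -= Q @ (Q.T @ Xj)        # single block projection
            Qj, _ = np.linalg.qr(Xj)        # local QR of the current block
            Q_blocks.append(Qj)
        return np.hstack(Q_blocks)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 24))
    Q = bcgs(A, block_size=4)
    print("loss of orthogonality:", np.linalg.norm(Q.T @ Q - np.eye(Q.shape[1])))

Without re-orthogonalization, the loss of orthogonality of such a variant grows with the condition number of the basis, which is precisely the kind of instability the adaptive restarting heuristic mentioned in the abstract is meant to handle.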

Zoom link:
https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

01/15/24 11:00 am Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom Development of a Conversational Interface Based on Institution-Specific Documentation through LLM Finetuning [project work]
Philip Suskin

Zoom link:
https://tuhh.zoom.us/j/81325639377?pwd=emRwaU9KOXhseStxUEU2M2NFS0Qwdz09

01/10/24 12:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 and Zoom A scalar inverse problem with Neural Galerkin Scheme*
Djahou Norbert Tognon, Sorbonne Universite

Neural networks trained with machine learning techniques are currently attracting great attention as nonlinear approximation methods for solving forward and inverse problems involving high-dimensional partial differential equations (PDEs). In a recent paper, a Neural Galerkin scheme has been proposed to solve PDEs by means of deep learning; in this approach, the training data samples for the numerical approximation are generated adaptively by an active learning process. In this talk, we apply this approach to a parameter estimation problem and propose an algorithm based on the Neural Galerkin scheme to estimate a scalar coefficient appearing in a nonlinear PDE. We provide numerical results for the Korteweg-de Vries (KdV) equation in one dimension.
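
For orientation only (our notation, not necessarily the formulation used in the talk), the basic idea of a Neural Galerkin scheme can be summarized as follows: the solution of a PDE \(\partial_t u = f(x, u)\) is approximated by a parametric ansatz \(u(x,t) \approx U(\theta(t), x)\), and the parameters evolve in time via a least-squares projection of the residual onto the tangent space of the ansatz,

    \[
      \dot{\theta}(t) \in \operatorname*{arg\,min}_{\eta}
      \int \bigl| \nabla_{\theta} U(\theta(t), x) \cdot \eta
                - f\bigl(x, U(\theta(t), x)\bigr) \bigr|^{2} \, \mathrm{d}\nu(x),
    \]

which is equivalent to the linear system

    \[
      M(\theta)\,\dot{\theta} = F(\theta), \qquad
      M(\theta) = \int \nabla_{\theta} U \, \nabla_{\theta} U^{\top} \, \mathrm{d}\nu, \qquad
      F(\theta) = \int \nabla_{\theta} U \, f(x, U) \, \mathrm{d}\nu,
    \]

where the sampling measure \(\nu\) provides the training data and can be adapted actively along the trajectory. In the parameter estimation setting of the talk, an unknown scalar coefficient appearing in \(f\) would additionally be fitted to data; that specific algorithm is the subject of the talk and is not reproduced here.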

Zoom link:
https://tuhh.zoom.us/j/81920578609?pwd=TjBmYldRdXVDT1VkamZmc1BOajREZz09

01/09/24 03:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 Data-Driven Approaches for the Maxey-Riley Equation [Master's thesis]
Niklas Dieckow

01/08/24 04:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 Approximation methods in sequence spaces
Riko Ukena, E-10, Am Schwarzenberg-Campus 3 (E), Room 3.074

We discuss approximation methods for linear equations in sequence spaces. When cutting out a finite matrix from an infinite-dimensional operator, a choice of boundary conditions has to be made. Choosing zero boundary conditions leads to the classical finite section method, for which applicability conditions are known. We derive similar conditions for the choice of periodic boundary conditions.
As an important tool, we demonstrate a way to approximate spectral quantities of an infinite-dimensional operator with the help of finitely supported vectors.
Moreover, we investigate discrete Schrödinger operators and find conditions for the applicability of the finite section method.
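
As a toy illustration of the two truncation choices (the one-dimensional discrete Schrödinger operator with a bounded random potential used below is an assumed example, not taken from the talk), the finite sections with zero and with periodic boundary conditions differ only in the two corner entries of the truncated matrix:

    # Toy example: n x n finite sections of a discrete Schrödinger operator
    # H = S + S* + diag(v) on l^2(Z), with zero vs. periodic boundary conditions.
    import numpy as np

    def finite_section(v, periodic=False):
        """Truncation of the discrete Schrödinger operator with potential samples v."""
        n = len(v)
        H = np.diag(np.asarray(v, dtype=float))
        H += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # hopping terms
        if periodic:
            H[0, -1] = H[-1, 0] = 1.0  # periodic boundary conditions couple the ends
        return H

    rng = np.random.default_rng(1)
    n = 200
    v = rng.uniform(-1.0, 1.0, size=n)     # bounded random potential
    b = np.zeros(n); b[n // 2] = 1.0       # finitely supported right-hand side

    # Shift by 4*I so that both truncations are safely invertible (spectrum in [1, 7]).
    x_zero = np.linalg.solve(finite_section(v) + 4 * np.eye(n), b)
    x_per = np.linalg.solve(finite_section(v, periodic=True) + 4 * np.eye(n), b)
    print("difference between the two truncations:", np.linalg.norm(x_zero - x_per))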

This talk gives an overview of the results obtained during my PhD under the supervision of Prof. Dr. Marko Lindner.

Zoom link: https://tuhh.zoom.us/j/8757671580?pwd=ZjgyYURxYWxrQmJjaUVtTE5uTnBHUT09

12/21/23 05:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 Shortest Path Length in k-Nearest-Neighbor Graphs [Bachelor's thesis]
Ali Maznouk

12/20/23 05:00 pm Am Schwarzenberg-Campus 3 (E), Room 3.074 Gaussian upper heat kernel bounds on graphs
Christian Rose, Universität Potsdam

tba


* Talk within the Colloquium on Applied Mathematics