Aalborg Universitet
Mini-Workshop on Machine Learning and Algebraic Geometry

Thomas Manns Vej 25 (Innovation Building), room A.121
03.06.2026 13:00 - 18:00
English
Hybrid
We will have the following five talks on Wednesday, June 3.
13:00-13:45 Ruriko Yoshida
Title: Tropical Algebraic Approaches to Deep Learning
Abstract: Tropical algebra, also known as max-plus algebra, replaces classical addition with maximization and classical multiplication with addition. The associated geometry, called tropical geometry, is fundamentally polyhedral, making it a natural framework for applications in combinatorial optimization, machine learning, and deep learning.
In this talk, we first review recent developments in applying tropical algebraic geometry to the study of the geometric structure of deep neural networks. We then discuss tropical geometry approaches to deep learning models, including tropical embedding layers and tropical attention mechanisms. Attention mechanisms are central components of modern AI models, such as transformers and reasoning models. We explore how the “tropicalization” of attention mechanisms can outperform traditional approaches, and we highlight the key challenges that arise in developing tropical attention methods.
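As a concrete illustration of the max-plus operations the abstract describes (a generic sketch, not material from the talk): tropical "addition" is max and tropical "multiplication" is ordinary addition, so tropical matrix multiplication replaces sum-of-products with max-of-sums. A ReLU layer max(Wx + b, 0) mixes exactly these operations, which is what links the semiring to deep networks.

```python
# Tropical (max-plus) matrix product: illustrative sketch, not from the talk.
# "Addition" is max, "multiplication" is ordinary addition, so
# (A (*) B)[i, j] = max_k (A[i, k] + B[k, j]).
import numpy as np

def tropical_matmul(A, B):
    """Max-plus matrix product via broadcasting over the shared index k."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[1.0, 0.0], [0.0, 3.0]])
print(tropical_matmul(A, B))  # -> [[1. 4.] [3. 2.]]
```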
14:00-14:45 (Online Talk) Samir Bhatt
Title: Aspects of Neural Optimal Transport
Abstract: In this talk, I will introduce recent work on neural optimal transport: using neural networks to model transport potentials and facilitate Monge–Kantorovich optimal transport. I will begin with a brief overview of the optimal transport problem, then discuss new results showing how neural approaches to the Monge formulation can help overcome the curse of dimensionality. Finally, I will present a regularized version of this approach and illustrate its utility in a simple application: refining molecular poses in docking. I will state the main theoretical results and discuss their implications without dwelling on technical details, aiming to make the talk as accessible and enjoyable as possible.
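For readers new to the Monge–Kantorovich problem the talk builds on, here is a minimal discrete toy instance (an illustration only, not the speaker's neural method): for uniform empirical measures with equal support sizes, Monge optimal transport reduces to a linear assignment over the pairwise cost matrix, solvable exactly with SciPy. The point sets below are arbitrary synthetic data.

```python
# Discrete Monge optimal transport as a linear assignment problem
# (toy illustration; neural OT replaces this with learned potentials).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))          # source points (uniform empirical measure)
y = rng.normal(size=(5, 2)) + 3.0    # target points, shifted
cost = np.linalg.norm(x[:, None] - y[None, :], axis=-1) ** 2  # squared-Euclidean cost
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one transport map
print("transport map:", dict(zip(rows.tolist(), cols.tolist())))
print("total cost:", cost[rows, cols].sum())
```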
14:50-15:10 Coffee break
15:15-16:00 Baran Hashemi
Title: Closed-Loop Generative Models for Extremal Geometry
Abstract: Beyond theorem proving and formal verification, AI can also act as a scientific instrument for exploring the geometry of mathematical solution spaces. In this talk I will present FlowBoost, a closed-loop generative framework for discovering extremal structures in continuous and constrained optimization problems. The method combines conditional flow matching, geometry-aware sampling, reward-guided policy updates, and stochastic local search to learn a sampler concentrated near rare high-quality configurations while keeping generated objects feasible throughout the search. This is especially useful in extremal geometry and combinatorics, where the landscape is highly nonconvex, gradients are unavailable or unreliable, and brute-force enumeration is impossible. The broader message is that, for many problems in experimental mathematics, closed-loop generative search with geometric inductive bias can be a practical alternative to both hand-crafted heuristics and LLM-based evolutionary systems, suggesting a concrete route toward de novo mathematical structure design.
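The closed-loop pattern the abstract describes (sample, score, refit the sampler toward high-reward regions) can be sketched with a much simpler stand-in than FlowBoost itself. The loop below is a basic cross-entropy-method search on a toy nonconvex reward, NOT the talk's flow-matching framework; the reward function and all names are illustrative.

```python
# Minimal closed-loop generative search sketch (cross-entropy-method stand-in,
# not FlowBoost): sample from a parametric proposal, score with a reward,
# refit the proposal to the elite fraction, repeat.
import numpy as np

def reward(x):
    # Toy nonconvex landscape whose high-reward region sits near x ~ (2, 2).
    return -np.sum((x - 2.0) ** 2, axis=-1) + np.cos(5 * x).sum(axis=-1)

rng = np.random.default_rng(1)
mu, sigma = np.zeros(2), np.ones(2) * 3.0        # initial Gaussian proposal
for step in range(30):
    samples = rng.normal(mu, sigma, size=(256, 2))        # sample from proposal
    elite = samples[np.argsort(reward(samples))[-25:]]    # keep top ~10%
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3  # refit proposal
print("best region found:", mu)
```

The real method replaces the Gaussian proposal with a conditional flow-matching sampler and adds feasibility-preserving local search, but the feedback loop has the same shape.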
16:00-16:45 Kurt Pasque
Title: Tropical Attention: Neural Algorithmic Reasoning for Combinatorial Algorithms
Abstract: Can algebraic geometry enhance the sharpness, robustness, and interpretability of modern neural reasoning models by equipping them with a mathematically grounded inductive bias? To answer this, we introduce Tropical Attention, an attention mechanism grounded in tropical geometry that lifts the attention kernel into tropical projective space, where reasoning is piecewise-linear and 1-Lipschitz, thus preserving the polyhedral decision structure inherent to combinatorial reasoning. We prove that Multi-Head Tropical Attention (MHTA) stacks universally approximate tropical circuits and realize tropical transitive closure through composition, achieving polynomial resource bounds without invoking recurrent mechanisms. These guarantees explain why the induced polyhedral decision boundaries remain sharp and scale-invariant, rather than smoothed by Softmax. Empirically, we show that Tropical Attention delivers stronger out-of-distribution generalization in both length and value, with high robustness against perturbative noise, and substantially faster inference with fewer parameters compared to Softmax-based and recurrent attention baselines. For the first time, we push the domain of neural algorithmic reasoning beyond PTIME problems to NP-hard/complete problems, paving the way toward sharper and more expressive Large Reasoning Models (LRMs) capable of tackling complex combinatorial challenges in Phylogenetics, Cryptography, Particle Physics, and Mathematical Discovery. The code is available at https://github.com/Baran-phys/Tropical-Attention/.
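A naive way to see what "lifting the attention kernel" into max-plus arithmetic means (an illustration inspired by the abstract, NOT the paper's exact Tropical Attention definition): replace the softmax-weighted average with max-plus aggregation, so every operation is a max of sums and the whole map stays piecewise-linear.

```python
# Naive max-plus attention sketch (illustrative only; see the linked repository
# for the actual Tropical Attention mechanism). Softmax averaging is replaced
# by tropical (max-plus) aggregation, which is piecewise-linear throughout.
import numpy as np

def maxplus_attention(Q, K, V):
    """out[i, d] = max_j (score[i, j] + V[j, d]),
    with tropical scores score[i, j] = max_k (Q[i, k] + K[j, k])."""
    scores = np.max(Q[:, None, :] + K[None, :, :], axis=-1)   # tropical inner products
    return np.max(scores[:, :, None] + V[None, :, :], axis=1) # tropical aggregation

Q = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.array([[2.0, 0.0], [0.0, 2.0]])
V = np.array([[1.0, -1.0], [0.0, 5.0]])
print(maxplus_attention(Q, K, V))  # -> [[3. 8.] [4. 7.]]
```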
17:00-17:45 Maksym Zubkov
Title: Mathematics for AI: What Does Algebraic Geometry Bring to the Table?
Abstract: In this talk, we will explore (neuro)algebraic geometry, an emerging field analogous to algebraic statistics that uses algebraic geometry to study the theory of deep learning. The universal approximation theorem tells us that, with sufficiently large hidden widths, a shallow neural network can approximate any continuous function on a compact set arbitrarily well. Rather than letting the width tend to infinity, we fix a feedforward neural network architecture with polynomial or rational activation functions and ask about its expressivity. To study this question, we associate to a fixed architecture a geometric object called a neuromanifold: the image of the parameter space of a neural network in an ambient space of functions. Because the activation function is algebraic, we can choose this ambient space to be finite-dimensional. This allows us to take the Zariski closure of the neuromanifold over the real or complex numbers, and hence to associate an algebraic variety to the given architecture. I will present recent results showing how the study of neurovarieties arising from shallow polynomial and rational neural networks is connected to several classical questions in algebraic geometry, including simultaneous Waring rank, Chow varieties, and Weyl’s conjectures. On the other hand, moving beyond shallow architectures brings into focus a rich collection of classical algebro-geometric objects whose study may help us better understand the geometry underlying deep learning theory. More broadly, the talk will illustrate how algebro-geometric tools complement the more familiar statistical and probabilistic approaches to the mathematical foundations of deep learning.
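A tiny worked example of the parameter-to-function map behind a neuromanifold (a generic sketch, not taken from the talk): a shallow network on a scalar input with activation t -> t^2 computes f(x) = sum_i c_i (a_i x + b_i)^2, which is always a degree-2 polynomial. The map below sends network parameters to the coefficient vector; the closure of its image inside the 3-dimensional space of quadratics is the associated neurovariety.

```python
# Parameter-to-coefficient map for a width-2 shallow network with square
# activation on a scalar input (illustrative sketch, not from the talk):
#   f(x) = sum_i c_i * (a_i * x + b_i)**2,  a degree-2 polynomial in x.
import numpy as np

def to_coefficients(a, b, c):
    """Network parameters -> coefficients (constant, linear, quadratic) of f."""
    const = np.sum(c * b**2)         # sum_i c_i b_i^2
    lin = np.sum(2 * c * a * b)      # sum_i 2 c_i a_i b_i
    quad = np.sum(c * a**2)          # sum_i c_i a_i^2
    return np.array([const, lin, quad])

# One point on the neuromanifold, for an arbitrary parameter choice:
coeffs = to_coefficients(np.array([1.0, 2.0]), np.array([0.5, -1.0]), np.array([1.0, 0.3]))
print(coeffs)  # -> [ 0.55 -0.2   2.2 ]
```

Because the activation is polynomial, the ambient function space is finite-dimensional, so one can take the Zariski closure of the image of this map and study it with algebro-geometric tools, as the abstract describes.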