Seminars

Planned seminars

Europe/Lisbon
Room P3.10, Mathematics Building, Instituto Superior Técnico (https://tecnico.ulisboa.pt)

Francisco Vasconcelos, ISR & Instituto Superior Técnico

Traditional neural networks prioritize generalization, but this flexibility often leads to geometrically inconsistent transformations of input data. To account for variations in object pose—such as rotations or translations—models are typically trained on large, augmented datasets. This increases computational cost and complicates learning.

We propose an alternative: neural networks that are inherently invariant or equivariant to geometric transformations by design. Such models would produce consistent outputs regardless of an object’s pose, eliminating the need for data augmentation. This approach can potentially extend to a broad range of transformations beyond just rotation and translation.

To realize this, we use geometric algebra, where operations like the geometric product are naturally equivariant under pseudo-orthogonal transformations, represented by the group SO(4,1). By building neural networks on top of this algebra, we can ensure transformation-aware computation.
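As a concrete illustration of this equivariance property, the sketch below checks numerically that transforming two vectors by a rotor and then taking their geometric product gives the same result as taking the product first and transforming afterwards. It is only an illustration, written in the Euclidean algebra Cl(3,0) rather than the Cl(4,1) setting mentioned above, and it assumes the open-source clifford Python package rather than any code from the talk.

# Illustrative sketch (not the speaker's implementation): the geometric product
# commutes with a rotor transformation. Assumes the `clifford` package (pip install clifford).
import numpy as np
import clifford as cf

layout, blades = cf.Cl(3)                            # 3D Euclidean geometric algebra Cl(3,0)
e1, e2, e3, e12 = blades['e1'], blades['e2'], blades['e3'], blades['e12']

theta = 0.7
R = np.cos(theta / 2) - np.sin(theta / 2) * e12      # rotor: rotation by theta in the e1^e2 plane

a = 1.0 * e1 + 2.0 * e2 - 0.5 * e3                   # two arbitrary input vectors
b = -0.3 * e1 + 1.5 * e2 + 2.0 * e3

transformed_then_multiplied = (R * a * ~R) * (R * b * ~R)   # ~R is the reverse of R
multiplied_then_transformed = R * (a * b) * ~R

print(transformed_then_multiplied)
print(multiplied_then_transformed)                   # identical up to floating-point error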

Additionally, we address permutation invariance in point clouds. Instead of treating them as unordered sets of vectors, we represent them functionally—as sums of Dirac delta functions—analogous to sampled signals. This avoids point ordering issues entirely and offers a more structured geometric representation.
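As a small illustration of this functional view (my own sketch, not code from the talk), the snippet below approximates the sum of Dirac deltas by narrow Gaussian bumps sampled on a fixed grid; shuffling the points leaves the resulting function unchanged, so permutation invariance holds by construction.

# Hedged sketch: a point cloud represented as a (smoothed) sum of Dirac deltas.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(5, 2))         # a small 2D point cloud

grid = np.stack(np.meshgrid(np.linspace(-1, 1, 32),
                            np.linspace(-1, 1, 32)), axis=-1).reshape(-1, 2)

def as_function(pts, sigma=0.05):
    # f(x) = sum_i delta(x - p_i), approximated by Gaussian bumps of width sigma
    sq_dists = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma ** 2)).sum(axis=1)

f_original = as_function(points)
f_shuffled = as_function(points[rng.permutation(len(points))])
print(np.allclose(f_original, f_shuffled))           # True: the ordering of the points is irrelevant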

This leads us to functional neural networks, where the input is a function rather than a vector list, and layers are continuous operators rather than discrete ones like ReLU or linear layers. Constructed within geometric algebra, these networks naturally maintain the desired invariant and equivariant properties.
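To make the idea of a layer acting on functions concrete, here is a hedged sketch of one common choice of continuous-operator layer, a kernel integral operator discretized on a grid. It illustrates only the general notion; the networks described in the talk are built within geometric algebra and need not take this form.

# Hedged sketch: a layer mapping a function to a function, f -> tanh( ∫ k(x, y) f(y) dy ),
# discretized on a 1D grid. Only an illustration of "layers as continuous operators".
import numpy as np

xs = np.linspace(0.0, 1.0, 64)                       # sample points of the input function
dx = xs[1] - xs[0]
f = np.sin(2 * np.pi * xs)                           # an input function sampled on the grid

def integral_layer(f_values, bandwidth=0.1):
    # k(x, y): a smooth (here Gaussian) kernel; learnable in an actual network
    k = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2 * bandwidth ** 2))
    g = k @ f_values * dx                            # quadrature approximation of the integral
    return np.tanh(g)                                # pointwise nonlinearity applied to the output function

g = integral_layer(f)
print(g.shape)                                       # (64,): still a sampled function, not a list of vectors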

Europe/Lisbon
Room P3.10, Mathematics Building, Instituto Superior Técnico (https://tecnico.ulisboa.pt) — Online

António Leitão, Scuola Normale Superiore di Pisa

How many different problems can a neural network solve? What makes two machine learning problems different? In this talk, we'll show how Topological Data Analysis (TDA) can be used to partition classification problems into equivalence classes, and how the complexity of decision boundaries can be quantified using persistent homology. We will then look at a network's learning process from a manifold disentanglement perspective and demonstrate why analyzing decision boundaries from a topological standpoint provides clearer insights than previous approaches. We use the topology of the decision boundaries realized by a neural network as a measure of its expressive power, and we show how this measure depends on properties of the network's architecture, such as depth, width, and other related quantities.
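As a toy illustration of quantifying a decision boundary with persistent homology (my sketch, not the speaker's pipeline), consider a classifier that separates the unit disk from its exterior: the decision boundary is a circle, and a persistence computation on points sampled near it shows a single dominant H1 (loop) feature. The snippet assumes the ripser.py package.

# Hedged illustration: persistent homology of points sampled on a toy decision boundary.
# Assumes the ripser.py package (pip install ripser).
import numpy as np
from ripser import ripser

# Points sampled (noisily) on the boundary of the toy problem ||x|| < 1 vs ||x|| >= 1
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
boundary = np.c_[np.cos(angles), np.sin(angles)] + 0.02 * rng.normal(size=(200, 2))

diagrams = ripser(boundary, maxdim=1)['dgms']        # persistence diagrams in H0 and H1
h1 = diagrams[1]
lifetimes = h1[:, 1] - h1[:, 0]
print(np.sort(lifetimes)[-3:])                       # one lifetime dominates: the boundary is a single loop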

References:
Ballester et al., Topological Data Analysis for Neural Network Analysis: A Comprehensive Survey
Papamarkou et al., Position: Topological Deep Learning is the New Frontier for Relational Learning
Petri and Leitão, On the Topological Expressive Power of Neural Networks

Europe/Lisbon
Room P3.10, Mathematics Building, Instituto Superior Técnico (https://tecnico.ulisboa.pt)

Paulo Mourão, Sapienza University of Rome

Ever since its introduction by John Hopfield in 1982, the Hopfield Neural Network has played a fundamental role in the interdisciplinary study of the storage and retrieval capabilities of neural networks, a role further highlighted by the 2024 Physics Nobel Prize.

From its strong link with biological pattern retrieval mechanisms to its high-capacity Dense Associative Memory variants and connections to generative models, the Hopfield Neural Network has found relevance both in neuroscience and in the most modern AI systems.

Much of our theoretical knowledge of these systems, however, comes from a surprising and powerful link with statistical mechanics, first established and explored in the seminal works of Amit, Gutfreund and Sompolinsky in the second half of the 1980s: the interpretation of associative memories as spin-glass systems.

In this talk, we will present this duality, as well as the mathematical techniques from the theory of spin glasses that allow us to accurately and rigorously predict the behavior of different types of associative memories capable of performing a variety of tasks.
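For readers unfamiliar with the model, the sketch below is a minimal textbook illustration (not material from the talk) of Hebbian storage and retrieval in a Hopfield network, the basic setting whose capacity and phase behavior the spin-glass analysis of Amit, Gutfreund and Sompolinsky characterizes.

# Minimal textbook sketch: Hebbian storage and retrieval in a Hopfield network.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                                       # N spins, P random binary patterns
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N                      # Hebbian coupling matrix
np.fill_diagonal(W, 0.0)                             # no self-coupling

state = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)     # corrupt 20% of the first pattern
state[flip] *= -1

for _ in range(10):                                  # synchronous updates until (typically) a fixed point
    state = np.sign(W @ state)
    state[state == 0] = 1

print((state == patterns[0]).mean())                 # overlap with the stored pattern, ~1.0 well below capacity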
