Seminars

Planned seminars

Europe/Lisbon
Room P3.10, Mathematics Building, Instituto Superior Técnico (https://tecnico.ulisboa.pt) — Online

António Leitão, Scuola Normale Superiore di Pisa

How many different problems can a neural network solve? What makes two machine learning problems different? In this talk, we'll show how Topological Data Analysis (TDA) can be used to partition classification problems into equivalence classes, and how the complexity of decision boundaries can be quantified using persistent homology. We will then look at a network's learning process from a manifold disentanglement perspective, and demonstrate why analyzing decision boundaries from a topological standpoint provides clearer insights than previous approaches. We use the topology of the decision boundaries realized by a neural network as a measure of its expressive power, and show how this measure depends on properties of the network's architecture, such as depth, width, and related quantities.
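As a toy illustration of the kind of computation persistent homology involves (a generic sketch, not the speaker's method), degree-0 persistence of a point cloud can be computed with a union-find over pairwise distances sorted by length: every point is born at scale 0, and a connected component "dies" when the growing distance threshold first merges it into another component. The function name `h0_persistence` and the two-cluster example below are illustrative assumptions.

```python
import numpy as np

def h0_persistence(points):
    """Degree-0 persistent homology of a Vietoris-Rips filtration.

    Every point is born at scale 0; a component dies at the distance
    threshold that first merges it into another component (this is
    single-linkage clustering, i.e. Kruskal's MST edge lengths).
    Returns the sorted death times of the n-1 finite bars.
    """
    n = len(points)
    # All pairwise edges, sorted by Euclidean length.
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:          # this edge merges two components: one bar dies
            parent[ri] = rj
            deaths.append(d)
    return sorted(deaths)

# Two well-separated clusters: all bars are short except the one that
# survives until the inter-cluster gap is bridged.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                   rng.normal(5.0, 0.1, (10, 2))])
bars = h0_persistence(cloud)
```

The single long bar in `bars` reflects the two-cluster topology of the data; higher-degree persistence (loops, voids), as used for decision boundaries, requires building the full simplicial filtration and is usually delegated to a TDA library.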

Europe/Lisbon
Room P3.10, Mathematics Building, Instituto Superior Técnico (https://tecnico.ulisboa.pt)

Paulo Mourão, Sapienza University of Rome

Since its introduction by John Hopfield in 1982, the Hopfield Neural Network has played a fundamental role in the interdisciplinary study of the storage and retrieval capabilities of neural networks, a role further highlighted by the 2024 Nobel Prize in Physics.

From its strong link with biological pattern-retrieval mechanisms to its high-capacity Dense Associative Memory variants and connections to generative models, the Hopfield Neural Network has found relevance both in neuroscience and in the most modern AI systems.

Much of our theoretical knowledge of these systems, however, comes from a surprising and powerful link with Statistical Mechanics, first established and explored in the seminal works of Amit, Gutfreund and Sompolinsky in the second half of the 1980s: the interpretation of associative memories as spin-glass systems.

In this talk, we will present this duality, as well as the mathematical techniques from spin-glass theory that allow us to accurately and rigorously predict the behavior of different types of associative memories capable of undertaking various tasks.
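The classical storage and retrieval mechanism the abstract refers to can be sketched in a few lines: patterns of ±1 spins are stored via the Hebbian rule $W = \frac{1}{N}\sum_\mu \xi^\mu (\xi^\mu)^T$ (zero diagonal), and retrieval iterates $s \mapsto \mathrm{sign}(Ws)$ from a corrupted cue until a fixed point. This is a minimal illustrative sketch of the standard 1982 model, well below the Amit–Gutfreund–Sompolinsky capacity of roughly $0.138\,N$ patterns; the function names are hypothetical.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage: W = (1/N) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, state, steps=50):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1.0          # break zero ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
N, P = 200, 5                        # N spins, P random patterns, P << N
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = train_hopfield(patterns)

noisy = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)   # corrupt 10% of the spins
noisy[flip] *= -1
recovered = retrieve(W, noisy)
```

At this low load (P/N = 0.025) the corrupted cue falls well inside the basin of attraction of the stored pattern, which is exactly the regime the spin-glass analysis characterizes.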
