– Europe/Lisbon
Room P3.10, Mathematics Building
— Online

Invariant and Equivariant Functional Neural Networks I
Traditional neural networks prioritize flexibility and generalization, but they do not respond consistently to geometric transformations of their input. To account for variations in object pose, such as rotations or translations, models are typically trained on large, augmented datasets, which increases computational cost and complicates learning.
We propose an alternative: neural networks that are inherently invariant or equivariant to geometric transformations by design. Such models would produce consistent outputs regardless of an object’s pose, eliminating the need for data augmentation. This approach can potentially extend to a broad range of transformations beyond just rotation and translation.
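For reference, the two notions can be stated precisely; this is the standard textbook formulation rather than anything specific to the talk.

```latex
% Invariance and equivariance of a map f : X -> Y under a group G that
% acts on X via a representation \rho and on Y via a representation \rho':
\[
  \text{invariant:}\quad   f(\rho(g)\,x) = f(x),
  \qquad
  \text{equivariant:}\quad f(\rho(g)\,x) = \rho'(g)\,f(x),
  \qquad \forall g \in G.
\]
% Invariance is the special case in which \rho' is the trivial representation.
```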
To realize this, we use geometric algebra, where operations like the geometric product are naturally equivariant under pseudo-orthogonal transformations, represented by the group SO(4,1). By building neural networks on top of this algebra, we can ensure transformation-aware computation.
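As a minimal numerical sketch of what equivariance of a bilinear operation means (my own illustration in NumPy, using the ordinary cross product in R^3 rather than the geometric product of the conformal algebra): rotating the inputs and then combining them gives the same result as combining them first and then rotating.

```python
import numpy as np

# Sketch: the cross product in R^3 is equivariant under rotations,
# in the same spirit as the geometric product being equivariant under
# (pseudo-)orthogonal transformations.

def random_rotation(rng):
    # QR decomposition of a random matrix gives an orthogonal Q; fix the
    # sign so that det(Q) = +1, i.e. Q is a proper rotation in SO(3).
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
R = random_rotation(rng)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# Equivariance: transform-then-combine equals combine-then-transform.
lhs = np.cross(R @ a, R @ b)
rhs = R @ np.cross(a, b)
print(np.allclose(lhs, rhs))  # True
```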
Additionally, we address permutation invariance in point clouds. Instead of representing a cloud as a list of vectors, whose ordering is arbitrary, we represent it functionally, as a sum of Dirac delta functions, analogous to a sampled signal. This sidesteps the point-ordering issue entirely and offers a more structured geometric representation.
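A small sketch of why the functional view removes the ordering problem (a hypothetical illustration; the Gaussian kernel and the helper kernel_features below are my choices, not part of the talk): pairing the sum of Dirac deltas with a kernel centred at query points collapses to a sum of kernel evaluations, which cannot depend on how the points are listed.

```python
import numpy as np

# Hypothetical illustration: a point cloud {p_i} viewed as the signal
# f(x) = sum_i delta(x - p_i). Integrating f against a Gaussian kernel
# k(q, x) yields the feature phi(q) = sum_i k(q, p_i), which is
# independent of the order in which the points are listed.

def kernel_features(points, queries, sigma=0.5):
    # points: (N, 3), queries: (M, 3) -> features: (M,)
    d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

rng = np.random.default_rng(0)
points = rng.standard_normal((8, 3))    # an unordered point cloud
queries = rng.standard_normal((4, 3))   # locations where we probe the signal

perm = rng.permutation(len(points))     # relabel the points arbitrarily
print(np.allclose(kernel_features(points, queries),
                  kernel_features(points[perm], queries)))  # True
```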
This leads us to functional neural networks, where the input is a function rather than a list of vectors, and the layers are operators acting on functions rather than the usual finite-dimensional maps such as linear layers and ReLU. Constructed within geometric algebra, these networks retain the desired invariance and equivariance properties by construction.
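One standard way to make "layers as continuous operators" concrete (a generic formulation, not necessarily the construction used in the talk) is an integral operator with a learned kernel followed by a pointwise nonlinearity; on a Dirac-delta input the integral collapses to a finite, order-independent sum.

```latex
% A generic functional layer: an integral operator with kernel k followed by
% a pointwise nonlinearity \sigma (the continuous analogue of a linear layer
% followed by ReLU):
\[
  (\mathcal{L}f)(y) \;=\; \sigma\!\left( \int k(x, y)\, f(x)\, \mathrm{d}x \right).
\]
% On a point cloud represented as f(x) = \sum_i \delta(x - p_i), the integral
% collapses to a finite, order-independent sum:
\[
  (\mathcal{L}f)(y) \;=\; \sigma\!\left( \sum_i k(p_i, y) \right).
\]
```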