- Geometric Deep Learning aims to provide a common mathematical framework for neural network architectures.
- In many cases, especially in high-dimensional settings, we have symmetry priors which impose an inductive bias on the structure of the function being learnt.
- Such priors are based on the signals on some domain $\Omega$.
- The space of $\mathcal{C}$-valued signals on $\Omega$, $\mathcal{X}(\Omega, \mathcal{C}) = \{x : \Omega \to \mathcal{C}\}$, where $\Omega$ is a set that may have additional structure and $\mathcal{C}$ is a vector space of channels[^1], is a function space that itself has a vector space structure: addition and scalar multiplication of signals are defined for all $u \in \Omega$ as $(\alpha x + \beta y)(u) = \alpha x(u) + \beta y(u)$. Given an inner product $\langle v, w \rangle_{\mathcal{C}}$ on $\mathcal{C}$ and a measure $\mu$ on $\Omega$, with respect to which an integral can be defined, we define the inner product on $\mathcal{X}(\Omega, \mathcal{C})$ as $$\langle x, y \rangle = \int_{\Omega} \langle x(u), y(u) \rangle_{\mathcal{C}} \, \mathrm{d}\mu(u).$$
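On a finite domain the integral above reduces to a weighted sum, which makes the definitions easy to play with. A minimal NumPy sketch (all names are my own illustration, not from the book):

```python
import numpy as np

# A C-valued signal on a finite domain Omega = {0, ..., n-1} with c channels
# is an (n, c) array: row u is the channel vector x(u) in C.
n, c = 5, 3
rng = np.random.default_rng(0)
x = rng.normal(size=(n, c))   # x : Omega -> C
y = rng.normal(size=(n, c))   # y : Omega -> C
mu = np.ones(n)               # a measure on Omega (here the counting measure)

# Vector space structure: signals add and scale pointwise.
z = 2.0 * x + 3.0 * y         # (2x + 3y)(u) = 2 x(u) + 3 y(u)

def inner(x, y, mu):
    """<x, y> = sum_u <x(u), y(u)>_C mu(u), the integral over finite Omega."""
    return float(np.sum(mu * np.sum(x * y, axis=1)))

print(inner(x, y, mu))
```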
Geometric Priors
- CNNs maintain translation equivariance (i.e., they respect translational symmetry); see the first sketch after this list.
- GNNs and Transformers make use of permutation invariance; see the second sketch after this list.
- RNNs make use of time-warping invariance.
- Another prior is scale separation, where we produce a hierarchy of spaces $\mathcal{X}(\Omega_j)$ by a coarse-graining operator $P_j : \mathcal{X}(\Omega) \to \mathcal{X}(\Omega_j)$. A function $f$ is locally stable if it can be approximated as a composition of coarse-graining operators and functions acting at the coarser scales, $f \approx f_j \circ P_j$; see the last sketch after this list.
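The translation-equivariance claim for CNNs can be checked directly. A minimal NumPy sketch (my own illustration, using circular convolution so that shifts are exact):

```python
import numpy as np

def circ_conv(x, w):
    """Circular 1-D convolution, the linear map at the heart of a CNN layer."""
    n = len(x)
    return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                     for i in range(n)])

x = np.arange(8, dtype=float)       # a signal on a cyclic 1-D grid
w = np.array([1.0, -2.0, 0.5])      # an arbitrary filter
shift = lambda v, s: np.roll(v, s)  # the translation operator on the grid

# Equivariance: convolving a shifted signal equals shifting the convolution.
assert np.allclose(circ_conv(shift(x, 3), w), shift(circ_conv(x, w), 3))
```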
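Likewise, permutation invariance is easy to illustrate with a sum-pooling readout over node features, a common GNN-style aggregation (again an illustrative sketch, not code from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))   # feature vectors for 6 nodes, 4 channels each

def readout(X):
    """Sum-pooling over nodes: a permutation-invariant aggregation."""
    return X.sum(axis=0)

perm = rng.permutation(len(X))
# Invariance: relabelling the nodes leaves the pooled representation unchanged.
assert np.allclose(readout(X), readout(X[perm]))
```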
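Finally, a sketch of one possible coarse-graining operator: non-overlapping average pooling, which maps a signal on a grid $\Omega$ to a signal on a coarser grid (the choice of pooling is my own; any suitable $P_j$ would do):

```python
import numpy as np

def coarse_grain(x, factor=2):
    """P : X(Omega) -> X(Omega'), here non-overlapping average pooling."""
    return x.reshape(-1, factor).mean(axis=1)

x = np.arange(8, dtype=float)   # a signal on the fine grid Omega
x1 = coarse_grain(x)            # a signal on a grid of half the resolution
x2 = coarse_grain(x1)           # composing operators yields the hierarchy
print(x, x1, x2, sep="\n")
```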
Links
- Geometric Deep Learning
- Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges by Bronstein et al.
- Michael Bronstein - YouTube channel of one of the authors of Geometric Deep Learning.
- Geometry - GDL aims to mimic how various geometries were unified by studying their invariants.
Footnotes
[^1]: Think of this space as analogous to word embeddings.