• Geometric Deep Learning aims to provide a common mathematical framework for neural network architectures.
  • In many cases, especially in high-dimensional settings, we have symmetry priors that impose an inductive bias on the structure of the function being learnt.
    • Such priors are based on signals on some domain $\Omega$. The signals take values in a vector space $\mathcal{C}$ 1
    • The space of $\mathcal{C}$-valued signals on $\Omega$,
      $$\mathcal{X}(\Omega, \mathcal{C}) = \{\, x : \Omega \to \mathcal{C} \,\},$$
      where $\Omega$ is a set that may have additional structure and $\mathcal{C}$ is a vector space of channels,
      is a function space that has a vector space structure, where addition and scalar multiplication of signals are defined for all $u \in \Omega$ as
      $$(\alpha x + \beta y)(u) = \alpha x(u) + \beta y(u),$$
      with $\alpha, \beta \in \mathbb{R}$. Given an inner product $\langle v, w \rangle_{\mathcal{C}}$ on $\mathcal{C}$ and a measure $\mu$ on $\Omega$, with respect to which an integral can be defined, we define the inner product on $\mathcal{X}(\Omega, \mathcal{C})$ as
      $$\langle x, y \rangle = \int_{\Omega} \langle x(u), y(u) \rangle_{\mathcal{C}} \, d\mu(u)$$
      (a discrete numerical sketch of this inner product follows this list).
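
A minimal sketch of the inner product above for a finite domain, assuming the Euclidean inner product on the channel space and a per-point measure (the function and variable names are illustrative, not from the source):

```python
import numpy as np

# A signal on a finite domain is an array of shape (|Omega|, dim(C)),
# i.e. one channel vector x(u) per domain point u. `mu` holds the measure
# of each point, so the integral becomes a weighted sum.
def signal_inner_product(x, y, mu):
    # <x, y> = sum_u <x(u), y(u)>_C * mu(u), the discrete analogue of the
    # integral over Omega with respect to the measure mu.
    pointwise = np.einsum("uc,uc->u", x, y)  # <x(u), y(u)>_C at each point u
    return float(np.dot(pointwise, mu))

# Example: 5 domain points, 3 channels, uniform (counting) measure.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
y = rng.normal(size=(5, 3))
mu = np.ones(5)

# Bilinearity check: <a*x + b*y, y> = a<x, y> + b<y, y>.
a, b = 2.0, -0.5
lhs = signal_inner_product(a * x + b * y, y, mu)
rhs = a * signal_inner_product(x, y, mu) + b * signal_inner_product(y, y, mu)
assert np.isclose(lhs, rhs)
```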

Geometric Priors

  • CNNs make use of translation equivariance (i.e., translational symmetry).
  • GNNs and Transformers make use of permutation invariance.
  • RNNs make use of time-warping invariance.
  • Another prior is scale separation, where we produce a hierarchy of spaces on progressively coarser domains via a coarse-graining operator $P : \mathcal{X}(\Omega) \to \mathcal{X}(\Omega')$. A function is locally stable if it can be approximated as a composition of coarse-graining operators and functions acting at the coarser scales (see the sketch after this list).
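
A rough numerical sketch of these priors, assuming simple stand-in operators of my own choosing (circular convolution, sum aggregation, average pooling) rather than anything prescribed by the source:

```python
import numpy as np

rng = np.random.default_rng(1)

# Translation equivariance: a circular convolution commutes with shifts,
# i.e. conv(shift(x)) == shift(conv(x)).
def circ_conv(x, w):
    n = len(x)
    return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                     for i in range(n)])

x = rng.normal(size=8)
w = rng.normal(size=3)
shift = lambda v, s: np.roll(v, s)
assert np.allclose(circ_conv(shift(x, 2), w), shift(circ_conv(x, w), 2))

# Permutation invariance: sum aggregation of node features gives the same
# result for any reordering of the nodes.
nodes = rng.normal(size=(6, 4))          # 6 nodes, 4 features each
perm = rng.permutation(6)
assert np.allclose(nodes.sum(axis=0), nodes[perm].sum(axis=0))

# Scale separation: average pooling as a coarse-graining operator P mapping
# a signal on Omega to a signal on a coarser domain Omega'.
def coarse_grain(x, factor=2):
    return x.reshape(-1, factor).mean(axis=1)

print(coarse_grain(x))                   # signal on a domain of half the size
```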

Links

Footnotes

  1. Think of this channel space $\mathcal{C}$ as analogous to the space of word embeddings.