Geometric Deep Learning attempts to apply deep learning methodologies to domains that lack a grid structure. We advocate for principled methods of defining the primitives of these networks. Accordingly, we define networks that stem from the symmetries of geometric representations, and show how a spectral analysis of some of these primitives reveals that combining them yields state-of-the-art performance. This is complemented by the introduction of a suite of customizable identity losses induced from geometric quantities.