There are many machine learning problems which should have symmetries: a picture of a cow rotated 135 degrees is still a picture of a cow, the meaning of spoken words shouldn't change with the audio level, etc. And a model trained on particle tracks from the LHC ought to respect relativistic momentum and energy.
Can a model learn a symmetry? Or should a symmetry just be built into the model from the beginning?
Equivariant machine learning — baking the symmetry into the architecture itself — is a thing that people have tried. It tends to be expensive and slow, though, and it imposes invariances that our model (a universal function approximator, recall) should just learn anyway: if you don't have enough pictures of upside-down cows, just train a normal model with augmentations.
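The augmentation alternative can be sketched in a few lines. This is an illustrative toy, not any particular library's API: the helper name and the tiny "image" are made up, and `np.rot90` only covers 90-degree multiples (an arbitrary angle like the 135-degree cow would need an interpolating rotation such as `scipy.ndimage.rotate`).

```python
import numpy as np

def rotation_augmentations(img):
    """Return the four 90-degree rotations of an image array.

    A minimal sketch of symmetry-by-augmentation: rather than an
    equivariant architecture, we just feed the model rotated copies
    and let it learn the invariance from data. Arbitrary angles
    (e.g. 135 degrees) would need interpolation, omitted here.
    """
    return [np.rot90(img, k) for k in range(4)]

# Hypothetical toy "image": a 2x2 grayscale array.
img = np.arange(4).reshape(2, 2)
augmented = rotation_augmentations(img)  # 4 arrays, one per rotation
```

In a training loop, each augmented copy gets the same label as the original, which is exactly the statement "a rotated cow is still a cow" expressed as data rather than as architecture.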