Highlights:

  • New method ‘Equivariance by Contrast (EbC)’ enables learning equivariant representations directly from unlabeled data.
  • Developed by Tobias Schmidt, Steffen Schneider, and Matthias Bethge, and accepted at NeurIPS 2025.
  • Achieves faithful reproduction of complex group symmetries, including non-abelian groups, in the latent space.
  • Validated on structured image data (infinite dSprites) and synthetic group actions, including the orthogonal group O(n).

TLDR:

Researchers Tobias Schmidt, Steffen Schneider, and Matthias Bethge unveil ‘Equivariance by Contrast’ (EbC), a general-purpose machine learning framework that learns symmetry structure from unlabeled data. By removing the need for group-specific assumptions, EbC marks a major step forward in encoder-only representation learning for complex group transformations.

A major breakthrough in machine learning research was unveiled with the introduction of ‘Equivariance by Contrast’ (EbC), a novel approach that allows neural networks to learn equivariant embeddings directly from unlabeled data. Presented by researchers Tobias Schmidt, Steffen Schneider, and Matthias Bethge at NeurIPS 2025, this innovation represents a crucial step in teaching models to understand symmetry and transformations within data without the need for costly labeled supervision or predefined structural assumptions.

Equivariance — the property that a model’s internal representation changes predictably under transformations of its input — is vital for achieving robust and interpretable representations. EbC tackles this challenge by leveraging *contrastive learning* from observation pairs (y, g·y), where g denotes an element from a finite group acting on a sample y. The algorithm concurrently learns a latent representation and a group representation in which transformations correspond to invertible linear maps. This design ensures high-fidelity equivariance, meaning that the learned latent space faithfully replicates how transformations apply to the original data.
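
To make the mechanism concrete, here is a minimal sketch, not the authors’ implementation, of what such an objective could look like: an encoder f maps observations to latent codes, a separate module assigns each group element g an invertible matrix ρ(g), and an InfoNCE-style contrastive loss pulls ρ(g)·f(y) towards f(g·y) while pushing it away from the other samples in the batch. The architectures, dimensions, and loss details below are illustrative assumptions.

```python
# Minimal sketch (assumed architectures and loss; not the authors' code):
# contrastive learning of an equivariant embedding with a jointly learned
# group representation made of invertible matrices.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 8
NUM_GROUP_ELEMENTS = 12  # assume a finite group whose elements are indexed 0..11


class Encoder(nn.Module):
    """Maps flattened observations y to latent vectors f(y)."""
    def __init__(self, input_dim: int, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.net(y)


class GroupRepresentation(nn.Module):
    """One learnable matrix per group element, parametrized via the matrix
    exponential so that every rho(g) is invertible by construction."""
    def __init__(self, n_elements: int, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.generators = nn.Parameter(
            0.01 * torch.randn(n_elements, latent_dim, latent_dim)
        )

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        return torch.matrix_exp(self.generators[g])  # (batch, d, d), invertible


def contrastive_loss(z_pred, z_target, tau: float = 0.1):
    """InfoNCE-style loss: rho(g) f(y) should match f(g . y) for the paired
    sample and mismatch the other samples in the batch (negatives)."""
    z_pred = F.normalize(z_pred, dim=-1)
    z_target = F.normalize(z_target, dim=-1)
    logits = z_pred @ z_target.T / tau
    labels = torch.arange(z_pred.shape[0])
    return F.cross_entropy(logits, labels)


# One toy training step on random tensors standing in for pairs (y, g . y).
input_dim = 64
encoder = Encoder(input_dim)
rho = GroupRepresentation(NUM_GROUP_ELEMENTS)
opt = torch.optim.Adam(list(encoder.parameters()) + list(rho.parameters()), lr=1e-3)

y = torch.randn(32, input_dim)                   # observations y
gy = torch.randn(32, input_dim)                  # placeholder for g . y
g = torch.randint(0, NUM_GROUP_ELEMENTS, (32,))  # index of the acting group element

z_pred = torch.einsum("bij,bj->bi", rho(g), encoder(y))  # rho(g) applied to f(y)
loss = contrastive_loss(z_pred, encoder(gy))
opt.zero_grad(); loss.backward(); opt.step()
print(f"contrastive loss: {loss.item():.3f}")
```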

The authors tested EbC on the infinite dSprites dataset, focusing on transformations described by the combined group G = R_m × ℤ_n × ℤ_n, which captures discrete rotations and periodic translations. The method accurately modeled these symmetries in the latent space, setting new benchmarks for equivariant embedding learning. Beyond this, EbC was validated in complex synthetic settings involving non-abelian groups such as the orthogonal group O(n) and the general linear group GL(n), demonstrating its flexibility and general-purpose potential. Importantly, the team also provided a theoretical proof of the identifiability of the learned representations, a mathematical guarantee rarely achieved in contrastive representation learning.
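
As an illustration of the synthetic validation setting mentioned above, the sketch below generates observation pairs (y, g·y) under the orthogonal group O(n): a Haar-random orthogonal matrix acts on a latent state, and a nonlinear map produces the observations the learner would actually see. The rendering function and dimensions are placeholder assumptions rather than the benchmark used in the paper.

```python
# Hedged sketch of synthetic pair generation for O(n); the 'render' map and
# the dimensions are assumptions, not the paper's benchmark.
import numpy as np

rng = np.random.default_rng(0)


def random_orthogonal(n: int) -> np.ndarray:
    """Sample g uniformly (Haar measure) from O(n) via QR of a Gaussian matrix."""
    a = rng.standard_normal((n, n))
    q, r = np.linalg.qr(a)
    return q * np.sign(np.diag(r))  # sign-correct the columns for uniformity


def render(x: np.ndarray) -> np.ndarray:
    """Hypothetical nonlinear observation map from latent state x to data y;
    a real benchmark would use an image renderer or similar."""
    w = np.linspace(0.5, 2.0, x.shape[-1])
    return np.tanh(np.outer(x, w)).ravel()


n = 4                              # latent / group dimension
x = rng.standard_normal(n)         # ground-truth latent state (never observed)
g = random_orthogonal(n)           # unlabeled group element

y, gy = render(x), render(g @ x)   # the learner only ever sees such pairs
print(y.shape, gy.shape)
```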

By enabling encoder-only equivariant learning from raw group action observations, EbC eliminates reliance on hand-coded symmetry priors or architecture-specific designs. This advancement opens doors for generalizable machine learning systems that can infer physical or geometric structure directly from data — a crucial capability for computer vision, robotics, and physics-informed AI. Although further testing on real-world datasets is planned, the authors highlight that EbC marks the first successful demonstration of identifying complex group structures, including non-trivial non-abelian ones, purely through contrastive learning.

Source:

Schmidt, T., Schneider, S., & Bethge, M. (2025). ‘Equivariance by Contrast: Identifiable Equivariant Embeddings from Unlabeled Finite Group Actions.’ arXiv:2510.21706 [cs.LG]. DOI: https://doi.org/10.48550/arXiv.2510.21706
