Javiera Castillo Navarro
Speaker : Javiera Castillo Navarro is an Assistant Professor (Maître de Conférences in French) in the Vertigo team at CNAM, Paris. Previously, she was a postdoctoral researcher at EPFL, Switzerland, in the ECEO lab, advised by Devis Tuia. She obtained her PhD in computer science under the supervision of Sébastien Lefèvre, Bertrand Le Saux, and Alexandre Boulch, during which she developed semi-supervised learning methods for semantic segmentation and classification.
Date : March 21, 2025, at 2 PM
Location : Vieussens B Room, 7th Floor, 45 Rue des Saints Pères, 75006 Paris
Title : What to align in Multimodal Contrastive Learning?
Abstract : Humans perceive the world through multisensory integration, blending information from different modalities to adapt their behavior. Contrastive learning offers an appealing solution for multimodal self-supervised learning: by considering each modality as a different view of the same entity, it learns to align features of different modalities in a shared representation space. However, this approach is intrinsically limited, as it only captures information that is shared, or redundant, between modalities, while multimodal interactions can arise in other ways. In this work, we introduce CoMM, a Contrastive Multimodal learning strategy that enables communication between modalities in a single multimodal space. Instead of imposing cross- or intra-modality constraints, we propose to align multimodal representations by maximizing the mutual information between augmented versions of these multimodal features. Our theoretical analysis shows that shared, synergistic, and unique terms of information naturally emerge from this formulation, allowing us to estimate multimodal interactions beyond redundancy. We test CoMM in both controlled and real-world settings: in the former, we demonstrate that CoMM effectively captures redundant, unique, and synergistic information between modalities; in the latter, CoMM learns complex multimodal interactions and achieves state-of-the-art results on six multimodal benchmarks.
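To make the alignment target concrete, here is a minimal sketch of the idea described in the abstract: the positive pair is two augmented views of the same fused multimodal embedding, rather than modality A paired against modality B. This is an illustrative, assumption-laden PyTorch sketch; the encoder, concatenation fusion, noise augmentation, and the InfoNCE estimator used as a mutual-information lower bound are all placeholders, not the authors' released implementation.

```python
# Minimal sketch (assumptions): two modalities, concatenation fusion, and an
# InfoNCE estimator as a stand-in for the mutual-information lower bound.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalEncoder(nn.Module):
    """Encodes two modalities into a single fused multimodal embedding (hypothetical)."""
    def __init__(self, dim_a, dim_b, dim_out=128):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 256), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 256), nn.ReLU())
        self.fuse = nn.Linear(512, dim_out)  # concatenation fusion (assumption)

    def forward(self, xa, xb):
        z = torch.cat([self.enc_a(xa), self.enc_b(xb)], dim=-1)
        return F.normalize(self.fuse(z), dim=-1)

def info_nce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE between two batches of multimodal embeddings."""
    logits = z1 @ z2.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))    # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def augment(x, noise=0.1):
    """Placeholder augmentation: additive Gaussian noise (assumption)."""
    return x + noise * torch.randn_like(x)

# Toy usage: both augmented views pass through the same multimodal encoder,
# so the loss aligns two views of the fused representation, not one modality
# against the other.
encoder = MultimodalEncoder(dim_a=32, dim_b=64)
xa, xb = torch.randn(8, 32), torch.randn(8, 64)
z1 = encoder(augment(xa), augment(xb))
z2 = encoder(augment(xa), augment(xb))
loss = info_nce(z1, z2)
loss.backward()
```

Because the contrasted objects are fused multimodal embeddings rather than per-modality ones, shared, unique, and synergistic information can all contribute to the objective, which is the distinction the abstract draws with standard cross-modality contrastive learning.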