Jose Dolz
Speaker : Jose Dolz is an Associate Professor in the Department of Software and IT Engineering at ÉTS Montréal. Prior to being appointed Professor, he was a post-doctoral fellow at the same institution. He obtained his B.Sc. and M.Sc. at the Polytechnic University of Valencia, Spain, and his Ph.D. at the University of Lille 2, France, in 2016. He was the recipient of a Marie Curie FP7 Fellowship (2013-2016) to pursue his doctoral studies. His current research focuses on deep learning, medical imaging, optimization, and learning strategies with limited supervision. To date, he has (co-)authored over 80 fully peer-reviewed papers, many of which were published in the top venues in medical imaging (MICCAI/IPMI/MedIA/TMI/NeuroImage), computer vision (CVPR, ICCV, ECCV), and machine learning (ICML, NeurIPS). Furthermore, he has given five tutorials on learning with limited supervision at MICCAI (2019-2022) and ICPR (2022) and one on foundation models (MICCAI 2024), participated in the organization of three summer schools on Deep Learning for Medical Imaging, and has been recognized several times as an Outstanding Reviewer (MICCAI'20, ECCV'20, CVPR'21, CVPR'22, NeurIPS'22, ICCV'23).
Date : May 21, 2025, at 11AM
Location : Vieussens, 7th Floor, 45 Rue des Saints Pères 75006 Paris
Title : Modeling uncertainty quantification in the era of modern deep networks
Abstract : Despite the dominant performance of deep neural networks, recent works have shown that they are poorly calibrated, resulting in over-confident predictions. Miscalibration can be exacerbated by overfitting due to the minimization of the cross-entropy loss during training, which pushes the predicted softmax probabilities to match the one-hot label assignments. This yields a pre-softmax activation for the correct class that is significantly larger than the remaining activations, potentially leading to over-confident predictions even when the predicted class is incorrect. This talk presents popular methods proposed to tackle the miscalibration issue, notably at training time, and examines their impact on the uncertainty estimates provided by deep models. In particular, we will delve into the use of regularizers, for example in the form of penalty terms, to better model the uncertainty of the network predictions in two fundamental computer vision tasks: classification and segmentation. Furthermore, I will present how class-wise constraints can be integrated into these problems, presenting simple yet efficient solutions from an optimization standpoint. Last, inspired by the rising popularity of large-scale pre-trained vision-language models, we will investigate the impact of several adaptation strategies, such as parameter-efficient adapters, prompt learning, and test-time prompt tuning, on calibration performance compared to zero-shot CLIP predictions.
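As a concrete illustration of the kind of penalty-term regularizer mentioned in the abstract, the following is a minimal PyTorch sketch of a confidence penalty added to cross-entropy: an entropy bonus that discourages the peaked softmax distributions plain cross-entropy tends to produce. This is a generic example for intuition, with an assumed weight `beta`, and is not necessarily the formulation presented in the talk.

```python
# Illustrative sketch only: cross-entropy with a confidence (low-entropy) penalty.
import torch
import torch.nn.functional as F

def penalized_cross_entropy(logits, targets, beta=0.1):
    """Cross-entropy minus beta * predictive entropy.

    Rewarding higher entropy discourages over-confident (near one-hot)
    softmax outputs; beta controls the strength of the penalty.
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1).mean()
    return ce - beta * entropy

# Toy usage with random logits (8 samples, 10 classes)
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = penalized_cross_entropy(logits, targets)
```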
Javiera Castillo Navarro
Speaker : Javiera Castillo Navarro is an Assistant Professor (Maître de Conférences in French) in the Vertigo team at CNAM, Paris. Previously, she was a postdoctoral researcher at EPFL, Switzerland, in the ECEO lab, advised by Devis Tuia. She obtained her PhD in computer science under the supervision of Sébastien Lefèvre, Bertrand Le Saux, and Alexandre Boulch, where she developed semi-supervised learning methods for semantic segmentation and classification.
Date : March 21, 2025, at 2PM
Location : Vieussens B Room, 7th Floor, 45 Rue des Saints Pères 75006 Paris
Title : What to align in Multimodal Contrastive Learning?
Abstract : Humans perceive the world through multisensory integration, blending information from different modalities to adapt their behavior. Contrastive learning offers an appealing solution for multimodal self-supervised learning. Indeed, by considering each modality as a different view of the same entity, it learns to align features of different modalities in a shared representation space. However, this approach is intrinsically limited, as it only learns the shared or redundant information between modalities, while multimodal interactions can arise in other ways. In this work, we introduce CoMM, a Contrastive Multimodal learning strategy that enables communication between modalities in a single multimodal space. Instead of imposing cross- or intra-modality constraints, we propose to align multimodal representations by maximizing the mutual information between augmented versions of these multimodal features. Our theoretical analysis shows that shared, synergistic, and unique terms of information naturally emerge from this formulation, allowing us to estimate multimodal interactions beyond redundancy. We test CoMM both in a controlled setting and in a series of real-world settings: in the former, we demonstrate that CoMM effectively captures redundant, unique, and synergistic information between modalities; in the latter, CoMM learns complex multimodal interactions and achieves state-of-the-art results on six multimodal benchmarks.
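To make the contrastive objective concrete, here is a minimal PyTorch sketch of an InfoNCE-style loss between two augmented views of a fused multimodal representation, i.e., a standard contrastive lower bound on the mutual information between the views. The tensors below are placeholders for encoded-and-fused features; this illustrates the general principle only and is not the authors' CoMM implementation.

```python
# Illustrative sketch only: symmetric InfoNCE between two views of fused features.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE loss between two batches of embeddings of shape (B, D)."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Placeholder fused multimodal features for two augmented views (batch 32, dim 128)
fused_view1 = torch.randn(32, 128)
fused_view2 = torch.randn(32, 128)
loss = info_nce(fused_view1, fused_view2)
```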