Enhancing Learning with Noisy Labels via Rockafellian Relaxation
Published in The Fourteenth International Conference on Learning Representations (ICLR 2026), 2026
Labeling errors in datasets are common, arising in a variety of contexts, such as human labeling and weak labeling. Although neural networks (NNs) can tolerate modest amounts of these errors, their performance degrades substantially once the label error rate exceeds a certain threshold. We propose the Rockafellian Relaxation Method (RRM), an architecture-independent, loss-reweighting approach that enhances the capacity of NN methods to accommodate noisily labeled data. More precisely, RRM functions as a wrapper, modifying any methodology’s training loss, particularly its supervised component. Experiments indicate that RRM can provide accuracy improvements across classification tasks in computer vision and natural language processing (sentiment analysis). This potential gain holds irrespective of dataset size, noise type (synthetic or human-generated), data domain, and the presence of adversarial perturbation.
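To make the "wrapper" idea concrete, the sketch below shows a generic per-sample loss-reweighting scheme: instead of averaging per-sample losses uniformly, a weight vector down-weights suspect examples before aggregation. This is only an illustration of the general reweighting pattern the abstract describes; the specific weights and update rule used by RRM are not given here, and the function name and weighting choice are hypothetical.

```python
import numpy as np

def reweighted_loss(per_sample_losses, weights):
    """Weighted aggregation of per-sample losses (illustrative only).

    Down-weighting high-loss examples, which are more likely to be
    mislabeled, is one generic way a reweighting wrapper can modify
    the supervised component of a training loss.
    """
    w = np.asarray(weights, dtype=float)
    l = np.asarray(per_sample_losses, dtype=float)
    # Weighted mean: suspect samples contribute less to the gradient signal.
    return float(np.sum(w * l) / np.sum(w))

# Example: the third sample has an unusually large loss (possible label
# error), so it is assigned a small weight.
losses = [0.2, 0.3, 2.5]
weights = [1.0, 1.0, 0.1]
print(round(reweighted_loss(losses, weights), 4))
```

Because the wrapper only changes how per-sample losses are combined, it can be layered onto any existing training pipeline without touching the model architecture, which matches the architecture-independence claimed in the abstract.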
Recommended citation: Louis Chen, Bobbie Chern, Eric Eckstrand, Amogh Mahapatra, Johannes Royset (2026). "Enhancing Learning with Noisy Labels via Rockafellian Relaxation."
