Title: FORML: Learning to Reweight Data for Fairness

Conference: DataPerf 2022

Tags: data reweighting, dataset condensation, fairness, meta-learning and robustness

Abstract: Machine learning models are typically trained to minimize the mean loss for a single metric and thus do not account for fairness or robustness. Neglecting such metrics during training can make models prone to fairness violations when the training data are imbalanced. This work introduces Fairness Optimized Reweighting via Meta-Learning (FORML), a training algorithm that balances fairness and robustness with accuracy by jointly learning training sample weights and neural network parameters. The approach increases model fairness by learning to balance the contributions of over- and under-represented sub-groups through dynamic reweighting of the data. FORML improves the equality-of-opportunity fairness criterion on image classification tasks, reduces bias from corrupted labels, and facilitates data condensation for building smaller, fairer datasets. These improvements are achieved without pre-processing data or post-processing model outputs, without learning an additional weighting function, without changing the model architecture, and while maintaining accuracy on the original predictive metric.
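The abstract's core idea, jointly learning per-sample weights and model parameters so that a group-balanced validation signal steers the weights, can be illustrated with a deliberately tiny sketch. The following is not the paper's implementation; it is a simplified analogue of meta-learned sample reweighting on a one-parameter linear model, with a hypothetical toy dataset in which a majority sub-group dominates a minority one. The inner step takes a virtual weighted gradient update, the outer step differentiates a balanced validation loss through that virtual update to get a meta-gradient on the sample weights, and an exponentiated-gradient step (an assumption here; FORML's own parameterization differs) keeps the weights a distribution:

```python
import numpy as np

# Toy data: a constant unit feature, so the model y_hat = w * x just learns a
# group-dependent target. Group A (majority, 90 samples) has target 2.0;
# group B (minority, 10 samples) has target 0.5.
n_a, n_b = 90, 10
x_train = np.ones(n_a + n_b)
y_train = np.concatenate([2.0 * np.ones(n_a), 0.5 * np.ones(n_b)])

# Group-balanced validation set: the fairness signal for the outer loop.
x_val = np.ones(2)
y_val = np.array([2.0, 0.5])

w = 0.0                                       # model parameter
eps = np.full(n_a + n_b, 1.0 / (n_a + n_b))   # per-sample weights (sum to 1)
lr, meta_lr = 0.5, 1.0

def balanced_val_loss(w):
    return 0.5 * np.mean((w * x_val - y_val) ** 2)

loss_before = balanced_val_loss(w)

for _ in range(500):
    # Per-sample training gradients of 0.5*(w*x - y)^2 w.r.t. w.
    g = (w * x_train - y_train) * x_train
    # Inner step: virtual update of w under the current sample weights.
    w_tmp = w - lr * np.dot(eps, g)
    # Gradient of the balanced validation loss at the virtual point.
    d = np.mean((w_tmp * x_val - y_val) * x_val)
    # Meta-gradient w.r.t. eps_i is -lr * d * g_i (chain rule through w_tmp);
    # apply it multiplicatively and renormalize so eps stays a distribution.
    eps = eps * np.exp(meta_lr * lr * d * g)
    eps /= eps.sum()
    # Real model update with the re-learned sample weights.
    w = w - lr * np.dot(eps, g)

loss_after = balanced_val_loss(w)
minority_mass = eps[n_a:].sum()
print(loss_before, loss_after, minority_mass, w)
```

Under uniform weights the fit is pulled toward the majority target; the meta-gradient progressively shifts weight mass onto the under-represented group until the balanced validation loss stops improving, which is the dynamic over/under-represented rebalancing the abstract describes.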