Dataset-level societal bias mitigation with text-to-image model
Author(s):
Yusuke Hirota, Jerone Andrews, Dora Zhao, Orestis Papakyriakopoulos, Apostolos Modas, Alice Xiang
Abstract:
Systems and methods mitigate societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Using text-guided inpainting models, the methods make protected groups independent of all attributes and mitigate inpainting biases through data filtering. Evaluations on multi-label image classification and image captioning tasks show that the methods effectively reduce bias without compromising performance across various models.
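As a rough illustration of the kind of pipeline the abstract describes, the sketch below generates a counterfactual version of an image with a text-guided inpainting model and then filters the edit with an image-text alignment check. The model checkpoints, the word-swap table, the mask source, and the score threshold are all illustrative assumptions, not the patented procedure.

```python
# Hypothetical sketch: counterfactual inpainting plus filtering for
# dataset-level debiasing. Placeholders throughout, not the claimed method.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from transformers import CLIPModel, CLIPProcessor

# Illustrative protected-group term swaps.
TERM_SWAPS = {"man": "woman", "woman": "man"}

device = "cuda" if torch.cuda.is_available() else "cpu"

inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
).to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def counterfactual_caption(caption: str) -> str:
    """Swap protected-group terms in the caption (naive word-level swap)."""
    return " ".join(TERM_SWAPS.get(w, w) for w in caption.split())


def clip_score(image: Image.Image, text: str) -> float:
    """Image-text alignment score used only for filtering the edits."""
    inputs = clip_proc(text=[text], images=image, return_tensors="pt").to(device)
    with torch.no_grad():
        out = clip(**inputs)
    return out.logits_per_image.item()


def debias_sample(image: Image.Image, person_mask: Image.Image,
                  caption: str, min_score: float = 20.0):
    """Return (edited_image, edited_caption), or None if the edit is filtered out."""
    new_caption = counterfactual_caption(caption)
    if new_caption == caption:  # no protected-group term found; leave sample as-is
        return None
    edited = inpaint(prompt=new_caption, image=image,
                     mask_image=person_mask).images[0]
    # Keep the edit only if it aligns with the counterfactual caption,
    # discarding low-quality or biased inpainting outputs.
    if clip_score(edited, new_caption) < min_score:
        return None
    return edited, new_caption
```

In this reading, the word-swap step breaks the spurious correlation between the protected group and co-occurring attributes, while the score threshold plays the role of the data filtering that mitigates biases introduced by the inpainting model itself.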