Dataset-level societal bias mitigation with text-to-image models
Systems and methods mitigate societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Using text-guided inpainting models, the methods enforce independence between protected groups and all attributes, and mitigate biases introduced by the inpainting itself through data filtering. Evaluations on multi-label image classification and image captioning show that the methods reduce bias without compromising performance across a range of models (a minimal sketch of the inpainting-and-filtering step follows this entry).
2025
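As a rough illustration of the approach the abstract describes, the sketch below pairs a text-guided inpainting model with a CLIP-based filter: the person region of each image is re-inpainted once per protected group so that group membership varies while the surrounding attributes stay fixed, and generations the filter rejects are dropped. The checkpoint names, group prompts, mask source, and similarity threshold are all assumptions for illustration, not details taken from the work itself.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from transformers import CLIPModel, CLIPProcessor

# Text-guided inpainting model (assumed checkpoint; the abstract does not
# name one). The person mask is assumed to come from an external detector.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# CLIP is used here only for post-hoc filtering of failed generations.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative protected-group prompts; the actual group definitions and
# prompt templates are choices of the dataset curator.
GROUPS = ["a man", "a woman"]


def counterfactuals(image: Image.Image, person_mask: Image.Image) -> list[Image.Image]:
    """Inpaint the person region once per group, so the depicted group
    varies while the surrounding attributes (scene, objects) stay fixed."""
    return [
        pipe(prompt=f"a photo of {g}", image=image, mask_image=person_mask).images[0]
        for g in GROUPS
    ]


def clip_similarity(image: Image.Image, text: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    inputs = proc(text=[text], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img = clip.get_image_features(pixel_values=inputs["pixel_values"])
        txt = clip.get_text_features(
            input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
        )
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())


def balanced_edits(image: Image.Image, person_mask: Image.Image, threshold: float = 0.25):
    """Keep an inpainted counterfactual only if CLIP agrees it depicts the
    target group; dropping failures is the data-filtering step guarding
    against biases introduced by the inpainting model itself."""
    kept = []
    for group, edit in zip(GROUPS, counterfactuals(image, person_mask)):
        if clip_similarity(edit, f"a photo of {group}") >= threshold:
            kept.append((group, edit))
    return kept
```

Filtering on CLIP image-text agreement is one plausible instantiation of the abstract's "data filtering"; the criterion actually used in the work may differ.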
AI Agents between Control and Subjectivation: On the Production of New Power Relations
Workshop Handeln, Wissen und Macht in Mensch-Maschine-Interaktionen (Workshop on Action, Knowledge, and Power in Human-Machine Interactions)
2025
Multilingual Human-Centered Alignment to Mitigate Gender Bias in LLMs
Participatory AI Research & Practice Symposium, AI Action Summit
2025
A Sociotechnical Perspective on Aligning AI with Pluralistic Human Values
ICLR 2025 Workshop on Bidirectional Human-AI Alignment
2025