Exposing Hidden Biases in Text-to-Image Models via Automated Prompt Search

arXiv — cs.LG · Wednesday, December 10, 2025, 5:00:00 AM
  • A new framework called Bias-Guided Prompt Search (BGPS) has been introduced to automatically generate prompts that maximize biases in images produced by text-to-image (TTI) diffusion models. This development addresses the persistent social biases related to gender, race, and age that these models exhibit, despite previous debiasing efforts.
  • The introduction of BGPS is significant as it highlights the limitations of existing methods that rely on curated prompt datasets, which may overlook subtle prompts that trigger biases. By automating the prompt generation process, BGPS aims to enhance the understanding and mitigation of biases in TTI models.
  • This advancement is part of a broader discourse on improving text-to-image generation technologies, with ongoing efforts to address structural distortions, enhance spatial consistency, and reduce biases. The integration of frameworks like BGPS alongside other innovations reflects a growing recognition of the ethical implications and technical challenges in AI-generated imagery.
— via World Pulse Now AI Editorial System

Continue Reading
ControlVP: Interactive Geometric Refinement of AI-Generated Images with Consistent Vanishing Points
Positive · Artificial Intelligence
ControlVP has been introduced as a user-guided framework aimed at correcting geometric inconsistencies in AI-generated images, particularly addressing the issue of vanishing point inconsistencies that affect spatial realism in generated scenes. This development enhances the structural integrity of images produced by models like Stable Diffusion.
RepLDM: Reprogramming Pretrained Latent Diffusion Models for High-Quality, High-Efficiency, High-Resolution Image Generation
Positive · Artificial Intelligence
The introduction of RepLDM, a reprogramming framework for pretrained latent diffusion models, aims to enhance high-resolution image generation while addressing the structural distortions often encountered in existing models like Stable Diffusion. This framework operates in two stages: an attention guidance stage for improved structural consistency and a progressive upsampling stage for resolution enhancement.