Fairness for the People, by the People: Minority Collective Action
Positive · Artificial Intelligence
Machine learning models often absorb biases present in their training data, leading to unfair treatment of minority groups. Existing bias mitigation techniques typically incur utility costs and require cooperation from the organization deploying the model. This article introduces Algorithmic Collective Action for fairness: end-users from minority groups collaboratively relabel the data they contribute, promoting fairness without any change to the firm's training process. Three model-agnostic relabeling methods are proposed and validated on real-world datasets, showing that a minority subgroup can substantially reduce unfairness with minimal impact on prediction error (see the sketch below).
— via World Pulse Now AI Editorial System
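The summary does not spell out the three proposed relabeling methods, so the sketch below illustrates only the general mechanism under simple assumptions: a minority subgroup flips some of its 0-labels to 1 until its positive rate matches the majority's, while the firm's training pipeline (a plain logistic regression here) is left untouched. The synthetic data, the rate-matching rule, and names like `relabel_to_match_rate` are illustrative choices, not the paper's actual methods.

```python
# Hedged sketch of minority collective relabeling, NOT the paper's methods.
# Assumption: minority members flip 0-labels to 1 until their positive rate
# matches the majority's; the firm's training code is unchanged.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_biased_data(n=4000, minority_frac=0.2):
    """Synthetic data where the minority group has a depressed positive rate."""
    group = (rng.random(n) < minority_frac).astype(int)  # 1 = minority
    x = rng.normal(size=(n, 3)) + group[:, None] * 0.5
    score = x @ np.array([1.0, -0.5, 0.3])
    # Inject bias: labels for the minority group are shifted downward.
    y = (score - 1.2 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y, group

def relabel_to_match_rate(y, group):
    """One simple collective rule: minority members flip 0-labels to 1
    until their positive rate matches the majority group's."""
    y = y.copy()
    target_rate = y[group == 0].mean()
    minority_idx = np.flatnonzero(group == 1)
    need = int(target_rate * len(minority_idx)) - int(y[minority_idx].sum())
    flippable = minority_idx[y[minority_idx] == 0]
    if need > 0:
        chosen = rng.choice(flippable, size=min(need, len(flippable)), replace=False)
        y[chosen] = 1
    return y

def demographic_parity_gap(model, x, group):
    """Absolute difference in predicted positive rates between the groups."""
    pred = model.predict(x)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

x, y, group = make_biased_data()
x_test, y_test, group_test = make_biased_data()

# The firm's (unchanged) training pipeline, run on original vs. relabeled data.
for label, y_train in [("original labels", y),
                       ("collectively relabeled", relabel_to_match_rate(y, group))]:
    model = LogisticRegression().fit(x, y_train)
    acc = model.score(x_test, y_test)
    gap = demographic_parity_gap(model, x_test, group_test)
    print(f"{label}: accuracy={acc:.3f}, parity gap={gap:.3f}")
```

Comparing the parity gap and accuracy before and after relabeling mirrors the article's headline claim: the intervention happens entirely on the data side, with the training code untouched.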

