Interview with Alice Xiang: Fair human-centric image dataset for ethical AI benchmarking

- Earlier this month, Sony AI launched the Fair Human-Centric Image Benchmark (FHIBE), a dataset aimed at setting a new standard for ethical AI in computer vision. Comprising over 10,000 diverse human images, it is the first publicly available, consent-based resource for assessing bias in AI models; the accompanying research was published in Nature.
- FHIBE is significant because it addresses growing concern over bias in AI systems, giving researchers and developers a reliable tool for ethical benchmarking. The initiative positions Sony AI as a leader in promoting responsible AI practices across the industry.
— via World Pulse Now AI Editorial System
