Who is to blame when AI goes wrong? Study points to shared responsibility

Phys.org — AI & Machine Learning · Tuesday, November 25, 2025 at 9:38:28 PM
  • A recent study highlights the challenge of assigning responsibility when artificial intelligence (AI) systems malfunction, emphasizing that AI's lack of consciousness complicates accountability. As AI becomes more integrated into daily life, the question of who is liable for errors becomes increasingly pressing.
  • This development is significant as it underscores the need for clear frameworks and regulations regarding AI usage, which could impact industries heavily reliant on AI technologies. Establishing accountability is crucial for fostering trust and ensuring ethical AI deployment.
  • The discourse surrounding AI accountability intersects with broader concerns about the rapid advancements in AI technology, including its potential to disrupt traditional writing and communication practices, and the unintended consequences of AI systems, such as promoting misinformation or conspiracy theories.
— via World Pulse Now AI Editorial System


Continue Reading
What’s coming up at #AAAI2026?
Neutral · Artificial Intelligence
The Annual AAAI Conference on Artificial Intelligence is set to take place in Singapore from January 20 to January 27, marking the first time the event is held outside North America. This 40th edition will include invited talks, tutorials, workshops, and a comprehensive technical program, highlighting the global significance of AI advancements.
New framework helps AI systems recover from mistakes and find optimal solutions
Neutral · Artificial Intelligence
A new framework has been developed to help AI systems recover from errors and converge on optimal solutions, addressing common failure modes such as AI "brain fog," in which a system loses track of conversational context. The advance aims to improve the reliability and effectiveness of AI interactions.
An Under-Explored Application for Explainable Multimodal Misogyny Detection in code-mixed Hindi-English
Positive · Artificial Intelligence
A new study introduces a multimodal, explainable web application for detecting misogyny in code-mixed Hindi-English content, built on artificial intelligence models such as XLM-RoBERTa. The application aims to improve the interpretability of hate speech detection, which is increasingly important amid rising online misogyny.
A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making
Positive · Artificial Intelligence
A novel approach to explainable artificial intelligence (AI) has been proposed, leveraging Quantum Boltzmann Machines (QBMs) and Classical Boltzmann Machines (CBMs) to enhance decision-making transparency. This framework utilizes gradient-based saliency maps and SHAP for feature attribution, addressing the critical challenge of explainability in high-stakes domains like healthcare and finance.
AI could be your next line manager
Positive · Artificial Intelligence
Artificial intelligence (AI) is increasingly taking on significant roles in various sectors, with capabilities that include producing academic papers, enhancing space exploration, and developing medical treatments. This trend suggests a shift towards AI potentially serving as line managers in workplaces, reflecting its growing influence in decision-making processes.
From brain scans to alloys: Teaching AI to make sense of complex research data
Neutral · Artificial Intelligence
Artificial intelligence (AI) is being increasingly utilized to analyze complex data across various fields, including medical imaging and materials science. However, many AI systems face challenges when real-world data diverges from ideal conditions, leading to issues with accuracy and reliability due to varying measurement qualities.
