OpenAI Researcher Quits, Saying Company Is Hiding the Truth

Futurism — AI · Friday, December 12, 2025 at 6:13:50 PM
  • An OpenAI researcher has resigned, alleging that the company is concealing potentially damaging research findings. The resignation raises concerns about transparency and accountability at OpenAI and points to internal dissent over how critical information about AI development is handled.
  • The departure underscores significant challenges for OpenAI as it faces mounting scrutiny of its practices and the ethical implications of its AI technologies; the company's reputation may be at stake as it responds to these allegations.
  • The incident reflects broader tensions in the AI industry, where companies must balance rapid innovation against ethical responsibility. As competition intensifies, particularly from rivals such as Google's Gemini 3, pressure is growing on OpenAI to remain transparent and address the potential risks of its technologies.
— via World Pulse Now AI Editorial System


Continue Reading
OpenAI built an AI coding agent and uses it to improve the agent itself
Positive · Artificial Intelligence
OpenAI has developed an AI coding agent known as Codex, which is primarily responsible for its own enhancements, showcasing a self-improving technology. This innovation highlights OpenAI's commitment to advancing AI capabilities in coding and software engineering.
Hegseth’s New Pentagon AI Is Telling Military Personnel His Boat Strike Was Completely Illegal
Negative · Artificial Intelligence
The Pentagon's new AI system, associated with Pete Hegseth, has reportedly told military personnel that an order to kill two survivors of a boat strike was illegal, stressing that service members must disobey such commands. The episode raises significant legal and ethical questions about military operations and the role of AI in decision-making.
OpenAI signs deal to bring Disney characters to Sora and ChatGPT
Neutral · Artificial Intelligence
OpenAI has signed a significant deal with The Walt Disney Company to integrate Disney characters into its Sora video-making platform and ChatGPT, following a $1 billion investment from Disney. This partnership allows for the use of iconic characters like Mickey Mouse and Cinderella in AI-generated content, enhancing user engagement and creative possibilities.
I tested GPT-5.2 and the AI model's mixed results raise tough questions
Neutral · Artificial Intelligence
OpenAI has launched GPT-5.2, its latest AI model, which produced mixed results in tests against its predecessor, GPT-5.1. The model was put through a series of text and image challenges, raising questions about its value for Plus subscribers.
Instacart Caught Using AI to Charge Wildly Different Prices for the Same Item
Negative · Artificial Intelligence
Instacart has been found to use artificial intelligence to charge different prices for the same grocery items, with discrepancies of up to 23 percent. The practice was documented in a recent study involving around 200 shoppers across four U.S. cities, and the company has confirmed it is running short-term pricing tests.
Nvidia Chip on Satellite in Orbit Trains First AI Model in Space
Positive · Artificial Intelligence
Nvidia has successfully trained its first AI model in space using a chip aboard a satellite, demonstrating the potential for advanced computing capabilities beyond Earth. This achievement signifies a major milestone in the integration of artificial intelligence with space technology.
Oracle Delays Some Data Center Projects for OpenAI to 2028
Negative · Artificial Intelligence
Oracle Corp. has delayed the completion of several data center projects for OpenAI, pushing the timeline from 2027 to 2028, as reported by sources familiar with the situation. This decision reflects ongoing challenges in the partnership between the two companies.
Weaponized AI risk is 'high,' warns OpenAI - here's the plan to stop it
Negative · Artificial Intelligence
OpenAI has raised alarms regarding the high risk of weaponized AI, emphasizing the need to evaluate when AI models can either assist or obstruct cybersecurity efforts. The company is actively working on measures to protect its models from potential misuse by cybercriminals.
