‘The biggest decision yet’: Jared Kaplan on allowing AI to train itself
Neutral · Technology

- Jared Kaplan, chief scientist at Anthropic, has described what he calls a critical decision facing humanity by 2030: how much autonomy to grant artificial intelligence systems, including whether to let AI train itself and evolve independently. He frames the stakes as an 'intelligence explosion' on one side and a loss of human control on the other.
- The question bears directly on Anthropic, a leading player in the AI sector valued at $180 billion. Kaplan's comments suggest that the company's future direction, and its role in shaping AI technology, will hinge on the choices made about AI autonomy and safety.
- The discussion comes amid growing concern about AI technologies, particularly after Anthropic's announcement of the first AI-led hacking campaign, a claim that has divided expert opinion. As AI systems grow more capable of autonomous action and self-training, experts are weighing the benefits of innovation against the danger of losing control of advanced AI systems.
— via World Pulse Now AI Editorial System

