Extracting memorized pieces of (copyrighted) books from open-weight language models

arXiv — cs.LG · Tuesday, December 2, 2025 at 5:00:00 AM
  • A recent study has examined the memorization of copyrighted texts by open-weight large language models (LLMs), finding that while most models do not memorize entire books, some, such as Llama 3.1 70B, have fully memorized specific works, including the first Harry Potter book and 1984. The research applied a probabilistic extraction technique across 50 books and 17 models to quantify the extent of memorization (a minimal sketch of this kind of per-token extraction scoring appears after this summary).
  • This development is significant as it highlights the complexities of copyright law in relation to generative AI, where claims about memorization can influence ongoing legal disputes. Understanding the memorization capabilities of LLMs is crucial for addressing copyright infringement concerns and shaping future regulations.
  • The findings contribute to a broader discourse on the ethical implications of AI in creative fields, particularly regarding the balance between innovation and intellectual property rights. As LLMs evolve, discussions around their alignment with copyright laws and the potential for data leakage become increasingly pertinent, reflecting ongoing debates about the responsibilities of AI developers.
— via World Pulse Now AI Editorial System
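
As an illustration of what "probabilistic extraction" can mean in practice, the sketch below scores how likely a causal language model is to reproduce a given passage verbatim by summing per-token log-probabilities under teacher forcing. This is a minimal approximation and not the study's actual protocol; the model name (gpt2) is a small stand-in chosen for brevity, and any threshold for calling a passage "extractable" is left unspecified.

```python
# Minimal sketch (not the paper's exact procedure): estimate the log-probability
# that a causal LM reproduces a target passage verbatim given a short prefix,
# by summing per-token log-probabilities under teacher forcing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; the study evaluates larger open-weight models such as Llama 3.1 70B

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def extraction_logprob(prefix: str, target: str) -> float:
    """Sum of per-token log-probabilities of `target` given `prefix`."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    target_ids = tokenizer(target, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # log_probs[0, j] is the distribution over the token at position j + 1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    start = prefix_ids.shape[1] - 1  # positions that predict the target tokens
    token_lp = log_probs[0, start:start + target_ids.shape[1]].gather(
        1, target_ids[0].unsqueeze(1)).squeeze(1)
    return token_lp.sum().item()

# Example with a short excerpt from 1984; a longer excerpt with high probability
# would suggest memorization under a criterion in the spirit of the study's.
lp = extraction_logprob("It was a bright cold day in April, ",
                        "and the clocks were striking thirteen.")
print(f"log P(verbatim continuation) = {lp:.2f}")
```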

Continue Reading
End-to-End Multi-Person Pose Estimation with Pose-Aware Video Transformer
Positive · Artificial Intelligence
A new end-to-end framework for multi-person 2D pose estimation in videos has been introduced, eliminating the reliance on heuristic operations that limit accuracy and efficiency. This framework, named Pose-Aware Video transformEr Network (PAVE-Net), effectively associates individuals across frames, addressing the challenges of complex and overlapping trajectories in video data.
Walk Before You Dance: High-fidelity and Editable Dance Synthesis via Generative Masked Motion Prior
Positive · Artificial Intelligence
Recent advancements in dance generation have led to the development of a novel approach that utilizes a generative masked text-to-motion model to synthesize high-quality 3D dance motions. This method addresses significant challenges such as realism, dance-music synchronization, and motion diversity, while also enabling semantic motion editing capabilities.
Fast 3D Surrogate Modeling for Data Center Thermal Management
Positive · Artificial Intelligence
A new framework for fast 3D surrogate modeling has been developed to enhance thermal management in data centers, focusing on real-time temperature predictions that are crucial for energy efficiency and sustainability. This approach utilizes a voxelized representation of the data center, integrating various operational parameters such as server workloads and HVAC settings.
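
The mention of a voxelized representation combined with operational parameters suggests an input layout like the one sketched below. Everything here (the Voxel3DSurrogate class, the channel assignments, the grid size) is a hypothetical illustration of a small 3D-CNN surrogate, not the paper's architecture.

```python
# Hypothetical sketch of a voxelized surrogate input; the paper's actual
# architecture and feature layout are not specified in the blurb above.
import torch
import torch.nn as nn

class Voxel3DSurrogate(nn.Module):
    """Tiny 3D CNN mapping a voxel grid of operational features to a
    per-voxel temperature field (illustrative only)."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),  # predicted temperature per voxel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Assumed channels: server heat load, normalized HVAC setpoint, geometry mask.
grid = torch.zeros(1, 3, 16, 32, 32)       # (batch, channels, depth, height, width)
grid[0, 0, 2:6, 4:12, 4:12] = 0.8          # a rack emitting heat
grid[0, 1] = 0.3                           # uniform HVAC supply setpoint
temps = Voxel3DSurrogate()(grid)           # (1, 1, 16, 32, 32) temperature field
print(temps.shape)
```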
Context-Enriched Contrastive Loss: Enhancing Presentation of Inherent Sample Connections in Contrastive Learning Framework
Positive · Artificial Intelligence
A new paper introduces a context-enriched contrastive loss function aimed at improving the effectiveness of contrastive learning frameworks. The approach addresses the information distortion introduced by augmented samples, which can lead models to over-rely on identical label information while neglecting positive pairs from the same image. The proposed method incorporates two convergence targets to enhance learning outcomes.
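
For context on the baseline this work modifies, below is a minimal NT-Xent contrastive loss over two augmented views of the same images. The paper's context-enriched variant and its two convergence targets are not reproduced here; this sketch only fixes the starting point.

```python
# Baseline sketch only: standard NT-Xent contrastive loss over two views.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmentations of the same N images."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # The positive for sample i is its other augmented view.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```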
Boosting Medical Vision-Language Pretraining via Momentum Self-Distillation under Limited Computing Resources
Positive · Artificial Intelligence
A new study has introduced a method for enhancing medical Vision-Language Models (VLMs) through momentum self-distillation, addressing the challenges posed by limited computing resources and the scarcity of detailed annotations in healthcare. This approach aims to improve the efficiency of training VLMs, allowing them to perform well even with small datasets or in zero-shot scenarios.
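
Momentum self-distillation generally means distilling a student toward an exponential-moving-average copy of itself. The sketch below shows that generic pattern (an EMA teacher update plus a temperature-scaled KL loss); it is not the paper's specific vision-language training recipe, and the toy network is a placeholder.

```python
# Generic momentum self-distillation sketch, not the paper's VLM recipe.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def momentum_update(m: float = 0.99) -> None:
    # teacher <- m * teacher + (1 - m) * student
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

def distill_loss(x: torch.Tensor, tau: float = 2.0) -> torch.Tensor:
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    return F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                    F.softmax(t_logits / tau, dim=-1),
                    reduction="batchmean") * tau * tau

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(16, 32)
loss = distill_loss(x)
loss.backward()
opt.step()
momentum_update()
print(loss.item())
```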
Basis-Oriented Low-rank Transfer for Few-Shot and Test-Time Adaptation
Positive · Artificial Intelligence
A new framework called Basis-Oriented Low-rank Transfer (BOLT) has been proposed to enhance the adaptation of large pre-trained models to unseen tasks with minimal additional training. This method focuses on extracting an orthogonal, task-informed spectral basis from existing fine-tuned models, allowing for efficient adaptation in both offline and online phases.
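
One plausible reading of "extracting an orthogonal, task-informed spectral basis" is taking the top singular directions of a fine-tuned weight delta and then adapting only a small coefficient matrix in that fixed basis, as sketched below. The details are assumptions for illustration; BOLT's actual procedure may differ.

```python
# Illustrative low-rank basis transfer; not BOLT's exact method.
import torch

torch.manual_seed(0)
d_out, d_in, rank = 64, 48, 8

W_base = torch.randn(d_out, d_in)
W_finetuned = W_base + 0.1 * torch.randn(d_out, d_in)   # stand-in donor fine-tune

# Orthonormal basis from the top singular directions of the task delta.
delta = W_finetuned - W_base
U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
U_r, Vh_r = U[:, :rank], Vh[:rank, :]

# For a new task, learn only a small (rank x rank) coefficient matrix.
coeffs = torch.zeros(rank, rank, requires_grad=True)

def adapted_weight() -> torch.Tensor:
    return W_base + U_r @ coeffs @ Vh_r                  # low-rank update in the fixed basis

x = torch.randn(4, d_in)
y = x @ adapted_weight().t()                             # forward pass with the adapted weight
print(y.shape)
```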
HouseLayout3D: A Benchmark and Training-Free Baseline for 3D Layout Estimation in the Wild
Positive · Artificial Intelligence
HouseLayout3D has been introduced as a benchmark for 3D layout estimation, addressing limitations of existing models that primarily rely on synthetic datasets. This new benchmark supports the estimation of layouts in complex multi-floor buildings, which are often overlooked in current methodologies.
TGDD: Trajectory Guided Dataset Distillation with Balanced Distribution
Positive · Artificial Intelligence
The recent introduction of Trajectory Guided Dataset Distillation (TGDD) aims to enhance dataset distillation by reformulating distribution matching as a dynamic alignment process throughout the model's training trajectory. This method captures evolving semantics by aligning feature distributions between synthetic and original datasets, while also implementing a distribution constraint to minimize class overlap.
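
To fix ideas, the sketch below shows plain distribution matching for dataset distillation: synthetic images are optimized so their mean features match those of real data under feature extractors taken from points along a training trajectory. TGDD's dynamic alignment and its class-overlap constraint are not implemented here, and the single-snapshot "trajectory" is a simplification.

```python
# Basic distribution-matching sketch for dataset distillation (not TGDD's objective).
import torch
import torch.nn as nn

feature_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())

real = torch.randn(64, 3, 32, 32)                             # real batch for one class
synthetic = torch.randn(10, 3, 32, 32, requires_grad=True)    # learnable distilled images
opt = torch.optim.SGD([synthetic], lr=1.0)

# Stand-in trajectory: in practice these would be checkpoints saved while
# training feature_net on the real data; here a single snapshot is reused.
trajectory = [feature_net]

for snapshot in trajectory:
    f_real = snapshot(real).mean(dim=0)
    f_syn = snapshot(synthetic).mean(dim=0)
    loss = ((f_real.detach() - f_syn) ** 2).sum()             # match first moments
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())
```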