Randomized Masked Finetuning: An Efficient Way to Mitigate Memorization of PIIs in LLMs

arXiv — cs.CL · Thursday, December 4, 2025 at 5:00:00 AM
  • A new technique called Randomized Masked Fine-Tuning (RMFT) has been introduced to address the memorization of personally identifiable information (PII) in large language models (LLMs). The method substantially reduces PII memorization while preserving model performance, achieving an 80.81% reduction in Total Extraction Rate on the Enron Email Dataset; a rough sketch of the masking idea appears after this summary.
  • The development of RMFT is crucial as it enhances privacy protection in LLMs, which are increasingly utilized in various applications. By minimizing the risk of PII exposure, RMFT contributes to safer AI deployment in sensitive contexts.
  • This innovation is part of a broader discourse on the security and ethical implications of LLMs, particularly regarding their vulnerability to adversarial attacks and the challenges posed by off-policy training data. As the field evolves, balancing privacy and utility remains a key concern among researchers and practitioners.
— via World Pulse Now AI Editorial System
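The summary does not spell out the masking procedure, but the name suggests that tokens inside annotated PII spans are hidden at random during fine-tuning, so no single pass reliably exposes a sensitive string. Below is a minimal Python sketch under that assumption; the function name, the [MASK] placeholder, and the span annotations are illustrative, not the paper's actual interface.

    import random

    MASK_TOKEN = "[MASK]"  # placeholder; the paper's masking scheme may differ

    def randomized_pii_mask(tokens, pii_spans, mask_prob=0.5, seed=None):
        """Randomly hide tokens that fall inside annotated PII spans.

        tokens:    list of token strings for one training example
        pii_spans: list of (start, end) index pairs marking PII (names, emails, ...)
        mask_prob: probability of masking each PII token in a given epoch
        """
        rng = random.Random(seed)
        masked = list(tokens)
        for start, end in pii_spans:
            for i in range(start, end):
                if rng.random() < mask_prob:
                    masked[i] = MASK_TOKEN
        return masked

    # The name and email address are PII; across epochs the model sees them only
    # intermittently, which is the intuition behind reduced memorization.
    tokens = ["Contact", "Alice", "at", "alice@example.com", "for", "details", "."]
    print(randomized_pii_mask(tokens, pii_spans=[(1, 2), (3, 4)], mask_prob=0.5, seed=0))

In a real fine-tuning loop the masked copies would replace or augment the original examples, and masked positions could also be dropped from the loss.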

Continue Reading
NLP Datasets for Idiom and Figurative Language Tasks
Neutral · Artificial Intelligence
A new paper on arXiv presents datasets aimed at improving the understanding of idiomatic and figurative language in Natural Language Processing (NLP). These datasets are designed to assist large language models (LLMs) in better interpreting informal language, which has become increasingly prevalent in social media and everyday communication.
ZIP-RC: Optimizing Test-Time Compute via Zero-Overhead Joint Reward-Cost Prediction
Positive · Artificial Intelligence
The recent introduction of ZIP-RC, an adaptive inference method, aims to optimize test-time compute for large language models (LLMs) by enabling zero-overhead joint reward-cost prediction. This innovation addresses the limitations of existing test-time scaling methods, which often lead to increased costs and latency due to fixed sampling budgets and a lack of confidence signals.
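The blurb names the goal, replacing fixed sampling budgets with a joint prediction of reward and cost, but not the mechanism, so the following is only a rough illustration of adaptive test-time sampling; generate_candidate and predict_reward_gain are hypothetical stand-ins for whatever predictors ZIP-RC actually learns.

    import random

    def adaptive_sampling(generate_candidate, predict_reward_gain, cost_per_sample,
                          max_samples=16):
        """Keep drawing candidate answers only while the predicted quality gain of
        one more sample exceeds its predicted cost (toy reward-cost trade-off)."""
        candidates = [generate_candidate()]
        while len(candidates) < max_samples:
            if predict_reward_gain(candidates) < cost_per_sample:
                break  # expected benefit no longer covers the extra compute
            candidates.append(generate_candidate())
        return candidates

    # Dummy callables: the predicted gain shrinks as more samples accumulate,
    # so the loop stops well before the hard cap.
    answers = adaptive_sampling(
        generate_candidate=lambda: random.random(),
        predict_reward_gain=lambda cs: 1.0 / (len(cs) + 1),
        cost_per_sample=0.2,
    )
    print(len(answers))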
Reconstructing KV Caches with Cross-layer Fusion For Enhanced Transformers
Positive · Artificial Intelligence
Researchers have introduced FusedKV, a novel approach to reconstructing key-value (KV) caches in transformer models, enhancing their efficiency by fusing information from bottom and middle layers. This method addresses the significant memory demands of KV caches during long sequence processing, which has been a bottleneck in transformer performance. Preliminary findings indicate that this fusion retains essential positional information without the computational burden of rotary embeddings.
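The description only says that KV information from a bottom and a middle layer is fused so that other layers need not keep full caches of their own; the toy module below shows one plausible form of such a fusion, a learned projection over concatenated KV tensors, which is an assumption rather than the paper's actual architecture.

    import torch
    import torch.nn as nn

    class FusedKVSketch(nn.Module):
        """Toy fusion: approximate a target layer's KV tensor as a learned
        combination of KV tensors cached at a bottom and a middle layer."""

        def __init__(self, head_dim):
            super().__init__()
            self.proj = nn.Linear(2 * head_dim, head_dim)

        def forward(self, kv_bottom, kv_middle):
            # kv_bottom, kv_middle: (batch, seq_len, head_dim)
            return self.proj(torch.cat([kv_bottom, kv_middle], dim=-1))

    fuse = FusedKVSketch(head_dim=64)
    kv_b = torch.randn(1, 10, 64)   # cached at a bottom layer
    kv_m = torch.randn(1, 10, 64)   # cached at a middle layer
    print(fuse(kv_b, kv_m).shape)   # torch.Size([1, 10, 64])

The point is only that a small number of stored caches plus a cheap projection can stand in for many per-layer caches, trading a little compute for a large memory saving.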
A Group Fairness Lens for Large Language Models
Positive · Artificial Intelligence
A recent study introduces a group fairness lens for evaluating large language models (LLMs), proposing a novel hierarchical schema to assess bias and fairness. The research presents the GFAIR dataset and introduces GF-THINK, a method aimed at mitigating biases in LLMs, highlighting the critical need for broader evaluations of these models beyond traditional metrics.
AugServe: Adaptive Request Scheduling for Augmented Large Language Model Inference Serving
Positive · Artificial Intelligence
AugServe has been introduced as an adaptive request scheduling framework aimed at enhancing the efficiency of augmented large language model (LLM) inference services. This framework addresses significant challenges such as head-of-line blocking and static batch token limits, which have hindered effective throughput and service quality in existing systems.
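Head-of-line blocking and static batch limits are generic serving problems, so the toy scheduler below illustrates those two failure modes rather than AugServe's actual policy: requests are ordered by estimated length and packed into batches up to a token budget instead of a fixed request count.

    import heapq

    def schedule_batches(requests, token_budget):
        """Order requests by estimated token count so short requests are not stuck
        behind long ones, and pack each batch up to a token budget.

        requests: list of (request_id, estimated_tokens)
        """
        heap = [(est, rid) for rid, est in requests]
        heapq.heapify(heap)
        batches, batch, used = [], [], 0
        while heap:
            est, rid = heapq.heappop(heap)
            if batch and used + est > token_budget:
                batches.append(batch)
                batch, used = [], 0
            batch.append(rid)
            used += est
        if batch:
            batches.append(batch)
        return batches

    print(schedule_batches([("r1", 900), ("r2", 120), ("r3", 60), ("r4", 300)],
                           token_budget=1000))  # [['r3', 'r2', 'r4'], ['r1']]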
Text-Printed Image: Bridging the Image-Text Modality Gap for Text-centric Training of Large Vision-Language Models
Positive · Artificial Intelligence
Recent advancements in large vision-language models (LVLMs) have led to the proposal of a Text-Printed Image (TPI) approach, which aims to bridge the image-text modality gap by utilizing only textual descriptions for training. This method addresses the challenges of collecting image-text pairs, which can be costly and restricted by privacy concerns.
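Taken at face value, a "text-printed image" is the textual description rendered onto a blank canvas so that a text-only sample can be fed through the vision encoder where an image-text pair would normally be required; the Pillow sketch below shows that reading of the idea, with canvas size, wrapping width, and font left as illustrative guesses.

    from PIL import Image, ImageDraw

    def text_printed_image(text, size=(448, 448), wrap=40):
        """Render a textual description onto a white canvas so it can stand in
        for a real image during text-centric training."""
        img = Image.new("RGB", size, color="white")
        draw = ImageDraw.Draw(img)
        # naive word wrapping; a real pipeline would control font size and overflow
        words, lines, line = text.split(), [], ""
        for w in words:
            if line and len(line) + 1 + len(w) > wrap:
                lines.append(line)
                line = w
            else:
                line = (line + " " + w).strip()
        lines.append(line)
        draw.text((10, 10), "\n".join(lines), fill="black")
        return img

    text_printed_image("A brown dog chasing a red ball across a grassy park.").save("tpi_example.png")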
Which Type of Students can LLMs Act? Investigating Authentic Simulation with Graph-based Human-AI Collaborative System
Positive · Artificial Intelligence
Recent advancements in large language models (LLMs) have highlighted their potential in simulating student behavior, addressing a significant challenge in educational data collection and intervention design. A new three-stage LLM-human collaborative pipeline has been developed to generate and filter high-quality student agents, utilizing automated scoring and expert calibration to enhance realism in simulations.
Finetune-RAG: Fine-Tuning Language Models to Resist Hallucination in Retrieval-Augmented Generation
Positive · Artificial Intelligence
A new framework named Finetune-RAG has been introduced to enhance the factual accuracy of large language models (LLMs) by addressing the issue of hallucinations that arise from imperfect information retrieval in Retrieval-Augmented Generation (RAG). Experimental results indicate a 21.2% improvement in factual accuracy over the base model, alongside the introduction of Bench-RAG, an evaluation pipeline designed to test models under realistic conditions.
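The summary implies that models are fine-tuned on retrieval contexts that deliberately mix correct and incorrect passages so that answers stay grounded in the supporting evidence; a minimal sketch of building one such training example follows, with the prompt template and field names as illustrative choices rather than the paper's exact format.

    def build_finetune_rag_example(question, relevant_passage, distractor_passage, answer):
        """Assemble one supervised example whose prompt mixes a relevant passage
        with an irrelevant one, while the target answer is grounded only in the
        relevant passage."""
        prompt = (
            "Answer using only facts supported by the context.\n\n"
            f"Context 1: {distractor_passage}\n"
            f"Context 2: {relevant_passage}\n\n"
            f"Question: {question}\nAnswer:"
        )
        return {"prompt": prompt, "completion": " " + answer}

    example = build_finetune_rag_example(
        question="When was the Eiffel Tower completed?",
        relevant_passage="The Eiffel Tower was completed in 1889 for the World's Fair.",
        distractor_passage="The Statue of Liberty was dedicated in 1886.",
        answer="The Eiffel Tower was completed in 1889.",
    )
    print(example["prompt"])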