Word-level Annotation of GDPR Transparency Compliance in Privacy Policies using Large Language Models

arXiv — cs.CLTuesday, November 25, 2025 at 5:00:00 AM
  • A new study presents a modular large language model (LLM)-based pipeline for word-level annotation of privacy policies, focusing on compliance with GDPR transparency requirements. This approach aims to address the challenges of manual audits and the limitations of existing automated methods by providing fine-grained, context-aware annotations across 21 transparency requirements.
  • This development is significant as it enhances the ability to assess GDPR compliance efficiently, potentially reducing the labor-intensive nature of manual audits and improving the consistency of compliance evaluations in privacy policies.
  • The introduction of LLMs in this context reflects a broader trend in leveraging advanced AI technologies to tackle complex regulatory challenges, highlighting ongoing discussions about the balance between innovation in AI and the need for robust data protection frameworks, particularly as the EU seeks to streamline its data regulations.
— via World Pulse Now AI Editorial System

Was this article worth reading? Share it

Recommended apps based on your readingExplore all apps
Continue Readings
Multi-Agent Collaborative Filtering: Orchestrating Users and Items for Agentic Recommendations
PositiveArtificial Intelligence
The Multi-Agent Collaborative Filtering (MACF) framework has been proposed to enhance agentic recommendations by utilizing large language model (LLM) agents that can interact with users and suggest relevant items based on collaborative signals from user-item interactions. This approach aims to improve the effectiveness of recommendation systems beyond traditional single-agent workflows.
Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation
NeutralArtificial Intelligence
A recent study has highlighted the issue of over-refusal in large language models (LLMs), which occurs when these models excessively decline to generate outputs due to safety concerns. The research proposes a new approach called MOSR, which aims to balance safety and usability by addressing the representation of safety in LLMs.
MURMUR: Using cross-user chatter to break collaborative language agents in groups
NegativeArtificial Intelligence
A recent study introduces MURMUR, a framework that reveals vulnerabilities in collaborative language agents through cross-user poisoning (CUP) attacks. These attacks exploit the lack of isolation in user interactions within multi-user environments, allowing adversaries to manipulate shared states and trigger unintended actions by the agents. The research validates these attacks on popular multi-user systems, highlighting a significant security concern in the evolving landscape of AI collaboration.
Beyond Multiple Choice: Verifiable OpenQA for Robust Vision-Language RFT
PositiveArtificial Intelligence
A new framework called ReVeL (Rewrite and Verify by LLM) has been proposed to enhance the multiple-choice question answering (MCQA) format used in evaluating multimodal language models. This framework transforms MCQA into open-form questions while ensuring answers remain verifiable, addressing issues of answer guessing and unreliable accuracy metrics during reinforcement fine-tuning (RFT).
FanarGuard: A Culturally-Aware Moderation Filter for Arabic Language Models
PositiveArtificial Intelligence
A new moderation filter named FanarGuard has been introduced, designed specifically for Arabic language models. This bilingual filter assesses both safety and cultural alignment in Arabic and English, utilizing a dataset of over 468,000 prompt-response pairs evaluated by human raters. The development aims to address the shortcomings of existing moderation systems that often neglect cultural nuances.
From Competition to Coordination: Market Making as a Scalable Framework for Safe and Aligned Multi-Agent LLM Systems
PositiveArtificial Intelligence
A new market-making framework for coordinating multi-agent large language model (LLM) systems has been introduced, addressing challenges in trustworthiness and accountability as these models interact as agents. This framework enables agents to trade probabilistic beliefs, aligning local incentives with collective goals to achieve truthful outcomes without external enforcement.
For Those Who May Find Themselves on the Red Team
NeutralArtificial Intelligence
A recent position paper emphasizes the need for literary scholars to engage with research on large language model (LLM) interpretability, suggesting that the red team could serve as a platform for this ideological struggle. The paper argues that current interpretability standards are insufficient for evaluating LLMs.
OpenAI now lets enterprises choose where to host their data
PositiveArtificial Intelligence
OpenAI has expanded its data residency options for enterprise users of ChatGPT and its API, allowing them to store and process data in various regions, including Europe, the UK, and the US. This move aims to enhance compliance with local regulations and facilitate broader adoption of AI technologies in business operations.