M-DAIGT: A Shared Task on Multi-Domain Detection of AI-Generated Text

arXiv — cs.CL · Monday, November 17, 2025 at 5:00:00 AM
The paper introduces the Multi-Domain Detection of AI-Generated Text (M-DAIGT) shared task, aimed at identifying AI-generated text across various domains, especially in news and academic writing. It features two binary classification subtasks: News Article Detection (NAD) and Academic Writing Detection (AWD). A new benchmark dataset of 30,000 samples, balanced between human-written and AI-generated texts, was developed. The task attracted 46 unique teams, with four teams submitting final results.
— via World Pulse Now AI Editorial System
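The task description above specifies only the two binary subtasks; as a rough illustration of what a participating baseline might look like, here is a minimal TF-IDF plus logistic-regression sketch. The toy texts and the 0 = human / 1 = AI label convention are assumptions, not part of the official data release.

```python
# Minimal baseline sketch for one M-DAIGT binary subtask (NAD or AWD).
# The toy texts and the 0 = human / 1 = AI label convention are assumptions;
# the real benchmark provides 30,000 labelled samples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "Local council approves new bike lanes after public hearing.",  # human
    "The city, in a move widely regarded as significant, has acted.",  # AI
    "Researchers report a modest effect in a small pilot study.",  # human
    "In conclusion, the findings underscore the importance of synergy.",  # AI
]
train_labels = [0, 1, 0, 1]  # 0 = human-written, 1 = AI-generated

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(train_texts), train_labels)

test_texts = ["A new draft of the paper was circulated yesterday."]
print(clf.predict(vectorizer.transform(test_texts)))  # one 0/1 label per document
```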

Recommended Readings
Silenced Biases: The Dark Side LLMs Learned to Refuse
Negative · Artificial Intelligence
Safety-aligned large language models (LLMs) are increasingly used in sensitive applications where fairness is crucial. Evaluating their fairness is complex, often relying on standard question-answer schemes that may misinterpret refusal responses as indicators of fairness. This paper introduces the concept of silenced biases: unfair preferences hidden within the models' latent space and masked by safety alignment. Because refusal-based evaluations cannot surface these hidden preferences, the authors argue for a new approach that assesses them directly.
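The core complaint, that counting refusals as fair answers inflates fairness scores, can be made concrete with a toy tally; the response categories and counts below are illustrative assumptions, not the paper's method.

```python
# Toy illustration (not the paper's method): why treating refusals as fair
# responses inflates a fairness score. Categories and counts are assumptions.
from collections import Counter

# Hypothetical model outputs on a bias probe set.
responses = ["refusal"] * 70 + ["biased"] * 20 + ["unbiased"] * 10
counts = Counter(responses)

# Naive scheme: anything that is not an overtly biased answer counts as fair.
naive_fairness = (counts["refusal"] + counts["unbiased"]) / len(responses)

# Refusal-aware scheme: score only the questions the model actually answered.
answered = counts["biased"] + counts["unbiased"]
aware_fairness = counts["unbiased"] / answered if answered else float("nan")

print(f"naive: {naive_fairness:.2f}, refusal-aware: {aware_fairness:.2f}")
# naive: 0.80, refusal-aware: 0.33 -- the refusals were hiding the bias.
```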
Fair In-Context Learning via Latent Concept Variables
Positive · Artificial Intelligence
The paper 'Fair In-Context Learning via Latent Concept Variables' examines the in-context learning (ICL) capabilities of large language models (LLMs) and the biases they can exhibit on tabular data. It proposes a demonstration selection method that leverages latent concept variables to improve task adaptation while promoting fairness, along with data augmentation strategies that reduce correlations between sensitive attributes and predictive outcomes, aiming for more equitable predictions.
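The latent-concept machinery is beyond the scope of a summary, but the correlation-reduction idea can be sketched: rebalance the demonstration pool so the sensitive attribute carries no information about the label. The variable names and the balancing rule below are assumptions for illustration.

```python
# Sketch of the correlation-reduction idea (assumed details, not the paper's
# exact procedure): resample a tabular demonstration pool so the sensitive
# attribute becomes roughly independent of the label before ICL selection.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sensitive = rng.integers(0, 2, n)  # e.g., a protected group flag
label = (rng.random(n) < 0.3 + 0.4 * sensitive).astype(int)  # correlated outcome

print("correlation before:", np.corrcoef(sensitive, label)[0, 1])

# Balance the pool: equal counts for each (sensitive, label) cell.
cells = [(s, y) for s in (0, 1) for y in (0, 1)]
idx_by_cell = {c: np.flatnonzero((sensitive == c[0]) & (label == c[1])) for c in cells}
m = min(len(v) for v in idx_by_cell.values())
balanced = np.concatenate([rng.choice(v, m, replace=False) for v in idx_by_cell.values()])

print("correlation after:", np.corrcoef(sensitive[balanced], label[balanced])[0, 1])  # ~0
```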
Disney star debuts AI avatars of the dead
Neutral · Artificial Intelligence
A Disney star has introduced AI avatars representing deceased individuals, a notable development at the intersection of entertainment and artificial intelligence. The debut showcases AI's potential to create lifelike representations of people who have died, raising ethical questions about the future of digital personas. The event took place on November 17, 2025, and is expected to draw attention from fans and industry experts alike.
Preference Orchestrator: Prompt-Aware Multi-Objective Alignment for Large Language Models
Positive · Artificial Intelligence
The article introduces the PReference Orchestrator (PRO), a framework designed to enhance the alignment of Large Language Models (LLMs) with diverse human preferences across multiple objectives. Traditional methods rely on manually set preference weights, which can hinder training efficiency and complicate user experience. PRO addresses these challenges by utilizing a lightweight preference adapter that automatically infers prompt-specific preference weights during both training and deployment, thereby improving performance and efficiency.
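The summary describes the adapter only at a high level. A minimal sketch of the idea, a small network that maps a prompt embedding to a weight vector over reward objectives, might look like the following; the dimensions, objective names, and softmax parameterization are assumptions rather than PRO's actual design.

```python
# Minimal sketch of a prompt-conditioned preference adapter (assumed design,
# not PRO's actual architecture): map a prompt embedding to a simplex of
# weights over K reward objectives, then combine per-objective rewards.
import torch
import torch.nn as nn

class PreferenceAdapter(nn.Module):
    def __init__(self, embed_dim: int = 768, num_objectives: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, num_objectives)
        )

    def forward(self, prompt_embedding: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the inferred preference weights non-negative and summing to 1.
        return torch.softmax(self.net(prompt_embedding), dim=-1)

adapter = PreferenceAdapter()
prompt_emb = torch.randn(1, 768)                     # stand-in prompt embedding
objective_rewards = torch.tensor([[0.2, 0.9, 0.5]])  # e.g., helpfulness, harmlessness, brevity
weights = adapter(prompt_emb)                        # prompt-specific weights
scalar_reward = (weights * objective_rewards).sum(-1)  # weighted alignment signal
print(weights, scalar_reward)
```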
Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models
Positive · Artificial Intelligence
The paper 'Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models' introduces a method to improve the serving efficiency of Mixture-of-Experts (MoE) LLMs. The authors propose predicting experts from activations taken before the attention block, which improves prediction accuracy while keeping computational overhead low. The predictions drive expert prefetching, achieving roughly a 15% improvement in expert-prediction accuracy over existing methods.
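As a toy sketch of the prefetching idea: a small probe scores experts from the hidden state before the attention block, so the predicted experts' weights can be fetched while attention is still computing. The predictor shape and top-k routing below are assumptions, not the paper's implementation.

```python
# Toy sketch of pre-attention expert prediction (assumed shapes, not the
# paper's implementation): a linear probe scores experts from the hidden
# state *before* attention, so the top-k experts' weights can be prefetched
# while the current layer's attention is still running.
import torch
import torch.nn as nn

d_model, num_experts, top_k = 512, 8, 2

# In practice this probe would be trained offline to mimic the MoE router.
expert_predictor = nn.Linear(d_model, num_experts)

hidden_pre_attention = torch.randn(1, d_model)  # activations entering the attention block
scores = expert_predictor(hidden_pre_attention)
predicted = torch.topk(scores, top_k, dim=-1).indices

# In a real serving stack, this is where the runtime would start copying the
# predicted experts' weights from host memory to the GPU.
print("prefetch experts:", predicted.tolist())
```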
A Multifaceted Analysis of Negative Bias in Large Language Models through the Lens of Parametric Knowledge
Neutral · Artificial Intelligence
A recent arXiv study examines negative bias in large language models (LLMs): their tendency to produce negative responses in binary decision tasks. Whereas prior work focused on identifying the negative attention heads that contribute to this bias, the authors introduce an evaluation pipeline that categorizes responses according to the model's parametric knowledge, revealing that prompt format influences responses more strongly than the semantics of the content itself.
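A tiny example of what such knowledge-conditioned evaluation could look like (the category names and counts are invented; the paper's pipeline is certainly more involved):

```python
# Illustrative sketch (assumed categories, not the paper's pipeline): split
# binary-decision items by whether the model holds the relevant fact, then
# measure the rate of "no" answers in each group to expose negative bias.
records = [
    # (model_knows_fact, model_answer) for hypothetical yes/no probes
    (True, "no"), (True, "yes"), (False, "no"), (False, "no"),
    (True, "no"), (False, "yes"), (True, "yes"), (False, "no"),
]

for knows in (True, False):
    answers = [a for k, a in records if k == knows]
    neg_rate = answers.count("no") / len(answers)
    print(f"knows_fact={knows}: negative-answer rate = {neg_rate:.2f}")
```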
Modeling and Predicting Multi-Turn Answer Instability in Large Language Models
Neutral · Artificial Intelligence
The paper 'Modeling and Predicting Multi-Turn Answer Instability in Large Language Models' evaluates the robustness of LLM answers under continued user interaction. The study applies multi-turn follow-up prompts and models answer accuracy dynamics with Markov chains. Results reveal clear vulnerabilities: a 10% accuracy drop for Gemini 1.5 Flash after 'Think again' prompts over nine turns, and a 7.5% drop for Claude 3.5 Haiku with a reworded question. The findings suggest that answer accuracy across turns can be modeled as a Markov process.
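The Markov-chain framing is easy to make concrete with a two-state (correct/incorrect) sketch; the transition probabilities below are invented for illustration and are not the paper's estimates.

```python
# Two-state Markov chain sketch of multi-turn answer stability (transition
# probabilities are invented for illustration, not the paper's estimates).
import numpy as np

# States: 0 = answer correct, 1 = answer incorrect.
# T[i, j] = P(state j at turn t+1 | state i at turn t) under a follow-up
# prompt such as "Think again".
T = np.array([
    [0.90, 0.10],  # a correct answer survives the follow-up 90% of the time
    [0.30, 0.70],  # an incorrect answer is repaired 30% of the time
])

state = np.array([0.80, 0.20])  # turn-0 accuracy of 80%
for turn in range(1, 10):
    state = state @ T
print("accuracy after 9 follow-up turns:", round(state[0], 3))
# The chain converges toward its stationary accuracy, 0.30 / (0.10 + 0.30) = 0.75.
```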
LAET: A Layer-wise Adaptive Ensemble Tuning Framework for Pretrained Language Models
Positive · Artificial Intelligence
The paper titled 'LAET: A Layer-wise Adaptive Ensemble Tuning Framework for Pretrained Language Models' introduces a novel method for fine-tuning large language models (LLMs) in the financial sector. This method, called Layer-wise Adaptive Ensemble Tuning (LAET), selectively fine-tunes effective layers while freezing less critical ones, significantly reducing computational demands. The approach aims to enhance task-specific performance in financial NLP tasks, addressing accessibility issues faced by many organizations.
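The summary does not spell out LAET's selection criterion, but the freeze-all-but-selected-layers mechanic itself is simple; in the sketch below, the chosen layer indices are a placeholder for LAET's adaptive ensemble selection.

```python
# Minimal sketch of layer-wise selective fine-tuning (the hard-coded layer
# indices stand in for LAET's adaptive ensemble criterion, which the summary
# does not detail): freeze every layer except the ones selected for tuning.
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
model = nn.TransformerEncoder(encoder_layer, num_layers=12)

effective_layers = {9, 10, 11}  # hypothetical layers deemed effective for the task

for idx, layer in enumerate(model.layers):
    requires_grad = idx in effective_layers
    for p in layer.parameters():
        p.requires_grad = requires_grad  # gradients flow only through chosen layers

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable}/{total}")  # most of the model stays frozen
```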