LabelFusion: Learning to Fuse LLMs and Transformer Classifiers for Robust Text Classification

arXiv — cs.CL · Friday, December 12, 2025 at 5:00:00 AM
  • LabelFusion is a novel fusion ensemble for text classification that combines traditional transformer-based classifiers such as RoBERTa with Large Language Models (LLMs) such as OpenAI GPT and Google Gemini. It aims to improve both accuracy and cost-effectiveness across multi-class and multi-label tasks by feeding the transformer's embeddings, together with per-class scores from each model, into a multi-layer perceptron that produces the final prediction.
  • This development is significant as it leverages the strengths of both LLM reasoning and traditional classifiers, potentially improving the performance of text classification tasks in various applications, including news categorization and sentiment analysis.
  • The emergence of LabelFusion reflects a growing trend in AI towards integrating diverse model architectures to address challenges in natural language processing, such as class imbalance and the need for reliable outputs in complex scenarios. This trend is underscored by ongoing research into parameter-efficient fine-tuning methods and the necessity for curated contexts in LLM applications.
— via World Pulse Now AI Editorial System
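The fusion step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's actual implementation: the dimensions, weight initialization, and the helper `mlp_fuse` are all hypothetical, and a real system would learn the MLP weights by gradient descent rather than sample them randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_fuse(embedding, llm_scores_list, W1, b1, W2, b2):
    """Concatenate a transformer embedding with per-class score vectors
    from one or more LLMs, then run a two-layer MLP with softmax output."""
    x = np.concatenate([embedding] + llm_scores_list)  # fused feature vector
    h = np.maximum(0.0, W1 @ x + b1)                   # ReLU hidden layer
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())                  # stable softmax
    return e / e.sum()

# Hypothetical sizes: a 768-dim RoBERTa embedding, a 4-class task, 2 LLMs
emb_dim, n_classes, n_llms, hidden = 768, 4, 2, 64
in_dim = emb_dim + n_llms * n_classes
W1 = rng.normal(scale=0.02, size=(hidden, in_dim)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.02, size=(n_classes, hidden)); b2 = np.zeros(n_classes)

embedding = rng.normal(size=emb_dim)  # stand-in for a RoBERTa [CLS] embedding
# stand-ins for per-class probability scores returned by each LLM
llm_scores = [rng.dirichlet(np.ones(n_classes)) for _ in range(n_llms)]

probs = mlp_fuse(embedding, llm_scores, W1, b1, W2, b2)
print(probs.shape)  # (4,): one probability per class
```

The design point the sketch captures is that the MLP sees both dense features (the embedding) and each model's calibrated per-class opinions, so it can learn when to trust the cheap classifier and when to defer to an LLM.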


Continue Reading
The 2025 Foundation Model Transparency Index
Negative · Artificial Intelligence
The 2025 Foundation Model Transparency Index reveals a significant decline in transparency among foundation model developers, with the average score dropping from 58 in 2024 to 40 in 2025. This index evaluates companies like Alibaba, DeepSeek, and xAI for the first time, highlighting their opacity regarding training data and model usage.
When Reject Turns into Accept: Quantifying the Vulnerability of LLM-Based Scientific Reviewers to Indirect Prompt Injection
Neutral · Artificial Intelligence
A recent study has examined the vulnerability of Large Language Model (LLM)-based scientific reviewers to indirect prompt injection, focusing on the potential to alter peer review decisions from 'Reject' to 'Accept'. This research introduces a new metric, the Weighted Adversarial Vulnerability Score (WAVS), and evaluates 15 attack strategies across 13 LLMs, including GPT-5 and DeepSeek, using a dataset of 200 scientific papers.
