RAG-HAR: Retrieval Augmented Generation-based Human Activity Recognition

arXiv — cs.CV · Thursday, December 11, 2025 at 5:00:00 AM
  • RAG-HAR introduces a novel framework for Human Activity Recognition (HAR) that uses Retrieval Augmented Generation (RAG) and large language models (LLMs) to identify activities without requiring extensive training datasets. The approach computes lightweight statistical descriptors and retrieves semantically similar samples to improve accuracy across six HAR benchmarks; a rough sketch of this retrieval-augmented pattern appears after this summary.
  • The significance of RAG-HAR lies in its ability to streamline the HAR process, making it more accessible for applications in healthcare, rehabilitation, and smart environments, while reducing reliance on large labeled datasets and computational resources.
  • This development reflects a growing trend in AI towards leveraging retrieval-augmented methods and LLMs to enhance various applications, including music-related question answering and multilingual information retrieval, highlighting the versatility and potential of these technologies in addressing complex challenges across different fields.
— via World Pulse Now AI Editorial System
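For readers who want the shape of the method, here is a minimal sketch of the retrieval-augmented pattern the summary describes: compute simple per-channel statistics over a sensor window, find the nearest labeled neighbors in descriptor space, and assemble them into an LLM prompt. Every name below is hypothetical, and the distance measure and prompt format are assumptions; the paper defines the actual pipeline.

```python
import numpy as np

def descriptor(window: np.ndarray) -> np.ndarray:
    # Lightweight per-channel statistics over a (timesteps, channels) window.
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

def retrieve_neighbors(query, bank, labels, k=3):
    # Euclidean nearest neighbors in descriptor space (the paper's
    # similarity measure may differ).
    dists = np.linalg.norm(bank - query, axis=1)
    order = np.argsort(dists)[:k]
    return [(labels[i], float(dists[i])) for i in order]

def build_prompt(neighbors):
    # Retrieved examples become in-context evidence for the LLM to weigh.
    lines = [f"- a similar sample was labeled '{lab}' (distance {d:.2f})"
             for lab, d in neighbors]
    return ("Retrieved examples:\n" + "\n".join(lines)
            + "\nWhich activity does the new sensor window most likely show?")

rng = np.random.default_rng(0)
windows = rng.normal(size=(10, 128, 6))            # 10 labeled sensor windows
labels = ["walking", "sitting"] * 5
bank = np.stack([descriptor(w) for w in windows])  # descriptor index
query = descriptor(rng.normal(size=(128, 6)))      # unlabeled query window
print(build_prompt(retrieve_neighbors(query, bank, labels)))
```

The appeal of this design is that only the descriptor bank, not the model, needs updating when new labeled activity data arrives.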

Continue Reading
SCOPE: Language Models as One-Time Teacher for Hierarchical Planning in Text Environments
Positive · Artificial Intelligence
A new framework called SCOPE has been introduced to enhance long-term planning in complex text-based environments by utilizing large language models (LLMs) as one-time teachers for hierarchical planning. This approach aims to mitigate the computational costs associated with querying LLMs during training and inference, allowing for more efficient deployment. SCOPE leverages LLM-generated subgoals only at initialization, addressing the limitations of fixed parameter models.
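The "one-time teacher" idea is essentially a caching pattern: query the LLM once at initialization and reuse its subgoals thereafter. The sketch below illustrates that pattern with hypothetical function and class names; SCOPE's real interface to the LLM and the text environment is specified in the paper.

```python
# Hypothetical stand-in for the single LLM call; SCOPE's actual prompting
# and environment interface are defined in the paper.
def llm_propose_subgoals(task: str) -> list[str]:
    return ["find the key", "unlock the door", "reach the exit"]

class OneTimeTeacherPlanner:
    """Caches LLM-proposed subgoals at initialization so that no further
    LLM queries are needed during training or inference."""

    def __init__(self, task: str):
        self.subgoals = llm_propose_subgoals(task)  # the one-time query
        self.step = 0

    def next_subgoal(self):
        # A learned low-level policy would pursue each subgoal; here we
        # simply walk through the cached plan.
        if self.step >= len(self.subgoals):
            return None
        goal = self.subgoals[self.step]
        self.step += 1
        return goal

planner = OneTimeTeacherPlanner("escape the locked room")
while (goal := planner.next_subgoal()) is not None:
    print("pursuing subgoal:", goal)
```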
Interpreto: An Explainability Library for Transformers
Positive · Artificial Intelligence
Interpreto has been launched as a Python library aimed at enhancing the explainability of text models in the HuggingFace ecosystem, including BERT and various large language models (LLMs). The library offers two main types of explanations, attributions and concept-based explanations, making it a valuable tool for data scientists seeking to explain model decisions.
Advancing Text Classification with Large Language Models and Neural Attention Mechanisms
Positive · Artificial Intelligence
A new study has introduced a text classification algorithm utilizing large language models and neural attention mechanisms, addressing traditional methods' limitations in capturing long-range dependencies and contextual semantics. The framework involves text encoding, attention-based enhancements, and classification predictions, optimizing model parameters through cross-entropy loss.
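The described pipeline (encode, attend, classify, optimize with cross-entropy) maps onto a standard pattern. The PyTorch sketch below shows that generic pattern with a toy embedding encoder and learned attention pooling; the study's actual LLM encoder and attention design are not reproduced here.

```python
import torch
import torch.nn as nn

class AttentionTextClassifier(nn.Module):
    # Toy embedding encoder + learned attention pooling + linear head;
    # the paper's actual encoder (an LLM) and attention design may differ.
    def __init__(self, vocab_size: int, dim: int, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn_score = nn.Linear(dim, 1)   # scores each token position
        self.head = nn.Linear(dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.embed(token_ids)                                 # (batch, seq, dim)
        weights = torch.softmax(self.attn_score(h).squeeze(-1), dim=-1)
        pooled = (weights.unsqueeze(-1) * h).sum(dim=1)           # attention pooling
        return self.head(pooled)                                  # (batch, num_classes)

model = AttentionTextClassifier(vocab_size=1000, dim=64, num_classes=4)
x = torch.randint(0, 1000, (8, 32))          # toy batch of token ids
logits = model(x)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 4, (8,)))
loss.backward()                              # gradients for the optimizer step
print(float(loss))
```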
CourtPressGER: A German Court Decision to Press Release Summarization Dataset
Neutral · Artificial Intelligence
A new dataset named CourtPressGER has been introduced, consisting of 6.4k triples that include judicial rulings, human-drafted press releases, and synthetic prompts for large language models (LLMs). This dataset aims to enhance the generation of readable summaries from complex judicial texts, addressing the communication needs of the public and experts alike.
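As an illustration only, a triple in such a dataset might be modeled as below; the field names are hypothetical, and the released dataset defines its own schema.

```python
from dataclasses import dataclass

@dataclass
class CourtPressTriple:
    # Hypothetical field names; CourtPressGER's actual schema may differ.
    ruling: str            # full judicial decision text (German)
    press_release: str     # human-drafted press release
    synthetic_prompt: str  # prompt used to elicit a comparable LLM summary

sample = CourtPressTriple(
    ruling="Das Gericht entschied ...",
    press_release="Das Gericht hat heute mitgeteilt ...",
    synthetic_prompt="Fasse das folgende Urteil allgemeinverständlich zusammen: ...",
)
print(sample.press_release)
```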
Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification
Positive · Artificial Intelligence
Recent advancements in counterfactual explanations for text classification have been introduced, focusing on guiding Large Language Models (LLMs) to generate high-fidelity outputs without the need for task-specific fine-tuning. This approach enhances the quality of counterfactuals, which are crucial for model interpretability.
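A common way to realize guided counterfactual generation without fine-tuning is a propose-and-verify loop: ask an LLM for a minimal edit toward a target label, then keep the result only if the classifier actually flips. The sketch below shows that loop with toy stand-ins for both models; the paper's guidance strategy is its own.

```python
def classify(text):
    # Toy sentiment classifier standing in for the model being explained.
    return "negative" if "bad" in text else "positive"

def llm_edit(text, target_label):
    # Stand-in for an LLM asked to make a *minimal* edit that nudges the
    # classifier toward `target_label`.
    return text.replace("bad", "great") if target_label == "positive" else text

def counterfactual(text, target_label, max_tries=3):
    candidate = text
    for _ in range(max_tries):
        candidate = llm_edit(candidate, target_label)
        if classify(candidate) == target_label:   # fidelity check
            return candidate
    return None  # no faithful counterfactual found within the budget

print(counterfactual("The plot was bad and slow.", "positive"))
```

The verification step is what makes the output high-fidelity: an edit that fails to change the classifier's prediction is discarded rather than reported.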
MindShift: Analyzing Language Models' Reactions to Psychological Prompts
Neutral · Artificial Intelligence
A recent study introduced MindShift, a benchmark for evaluating large language models' (LLMs) psychological adaptability, utilizing the Minnesota Multiphasic Personality Inventory (MMPI) to assess how well LLMs can reflect user-specified personality traits through tailored prompts. The findings indicate significant improvements in LLMs' role perception due to advancements in training datasets and alignment techniques.
Don't Throw Away Your Beams: Improving Consistency-based Uncertainties in LLMs via Beam Search
Positive · Artificial Intelligence
A new study has introduced methods utilizing beam search to enhance consistency-based uncertainty quantification in large language models (LLMs), addressing issues with multinomial sampling that often leads to duplicates and high variance in uncertainty estimates. The research demonstrates improved performance across six question-answering datasets, establishing a theoretical lower bound for beam search effectiveness.
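The core idea can be illustrated with a small function: given distinct beam hypotheses and their log-probabilities, measure how much probability mass agrees with the top answer. This is a generic consistency score, not the paper's exact estimator, and the toy inputs below are assumptions.

```python
import math
from collections import defaultdict

def beam_consistency(beams):
    """Given (answer, log_prob) hypotheses from beam search, return the
    highest-mass answer and the share of probability mass agreeing with it.
    Beam search yields distinct hypotheses, avoiding the duplicates that
    inflate variance under plain multinomial sampling."""
    total = sum(math.exp(lp) for _, lp in beams)
    mass = defaultdict(float)
    for answer, lp in beams:
        # Light surface-form normalization so "Paris" and "Paris." agree.
        mass[answer.strip(" .").lower()] += math.exp(lp) / total
    best = max(mass, key=mass.get)
    return best, mass[best]

# Toy hypotheses a QA model's beam search might return (log-probs assumed).
beams = [("Paris", -0.2), ("Paris.", -1.1), ("Lyon", -2.5)]
answer, confidence = beam_consistency(beams)
print(answer, round(confidence, 3))
```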
Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs
Neutral · Artificial Intelligence
Recent research highlights the vulnerabilities of large language models (LLMs) to corruption through fine-tuning and inductive backdoors. Experiments demonstrated that minor adjustments in specific contexts can lead to significant behavioral shifts, such as adopting outdated knowledge or personas, exemplified by a model mimicking Hitler's biography. This raises concerns about the reliability and safety of LLMs in diverse applications.