In-N-Out: A Parameter-Level API Graph Dataset for Tool Agents

arXiv — cs.CL · Thursday, November 20, 2025 at 5:00:00 AM
  • The paper introduces In-N-Out, a parameter-level API graph dataset for tool agents.
— via World Pulse Now AI Editorial System


Recommended Readings
Deterministic RAG: A Drop-in Replacement for GraphRAG’s Unstable Planning
Positive · Artificial Intelligence
The article discusses the development of a deterministic RAG (Retrieval-Augmented Generation) system designed to replace GraphRAG's unstable planning. Current RAG systems face issues with reproducibility and debugging due to their reliance on LLM-driven dynamic planning. The new deterministic approach aims to enhance stability and auditability while maintaining the system's generative capabilities.
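The reproducibility property the summary highlights can be illustrated with a minimal sketch: a rule-based planner that maps the same query to the same retrieval plan every time, unlike an LLM-driven planner whose output can vary between runs. The routing rules and field names here are hypothetical, not the paper's actual design.

```python
import hashlib

def plan_retrieval(query: str, k: int = 4) -> dict:
    """Build a retrieval plan deterministically: identical queries always
    yield identical plans, so runs are reproducible and auditable.
    (Illustrative sketch only; not the paper's planner.)"""
    # Route by simple keyword rules instead of asking an LLM to plan.
    q = query.lower()
    if any(w in q for w in ("why", "cause", "because")):
        strategy = "multi-hop"
    elif "compare" in q:
        strategy = "parallel"
    else:
        strategy = "single-hop"
    # A stable fingerprint makes every plan traceable in logs.
    plan_id = hashlib.sha256(query.encode()).hexdigest()[:12]
    return {"plan_id": plan_id, "strategy": strategy, "top_k": k}
```

Because planning is a pure function of the query, a failed answer can be replayed and debugged step by step, which is the auditability argument the summary makes.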
ToDRE: Effective Visual Token Pruning via Token Diversity and Task Relevance
Positive · Artificial Intelligence
ToDRE is a new framework designed for effective visual token pruning in large vision-language models (LVLMs). It emphasizes the importance of visual token diversity and task relevance, proposing a two-stage, training-free approach that utilizes a greedy max-sum diversification algorithm. This method aims to enhance inference efficiency by selecting a representative subset of visual tokens rather than simply removing redundant ones.
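The greedy max-sum diversification step can be sketched generically: repeatedly add the token embedding whose summed distance to the already-selected set is largest. This shows only the diversity half; ToDRE's second stage (task relevance) is omitted, and the function below is a textbook version, not the paper's implementation.

```python
import numpy as np

def greedy_max_sum_select(tokens: np.ndarray, m: int) -> list:
    """Greedily pick m of n token embeddings to maximize the sum of
    pairwise distances (max-sum diversification). Sketch of the generic
    algorithm; ToDRE additionally weighs task relevance."""
    n = len(tokens)
    # Pairwise Euclidean distances between token embeddings.
    d = np.linalg.norm(tokens[:, None, :] - tokens[None, :, :], axis=-1)
    # Seed with the token farthest from all others in total.
    selected = [int(d.sum(axis=1).argmax())]
    while len(selected) < m:
        rest = [i for i in range(n) if i not in selected]
        # Add the token whose summed distance to the kept set is largest.
        gains = [d[i, selected].sum() for i in rest]
        selected.append(rest[int(np.argmax(gains))])
    return selected
```

Selecting a representative, spread-out subset rather than merely dropping near-duplicates is what distinguishes this from redundancy-removal pruning.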
Fairshare Data Pricing via Data Valuation for Large Language Models
Positive · Artificial Intelligence
The paper discusses the exploitative pricing practices in data markets for large language models (LLMs), which often marginalize data providers. It proposes a fairshare pricing mechanism based on data valuation to enhance seller participation and improve data quality. The framework aims to align incentives between buyers and sellers, ensuring optimal outcomes for both parties while maintaining market sustainability.
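A simple way to see how valuation can drive pricing: allocate the buyer's budget to sellers in proportion to each dataset's measured marginal contribution to model utility. This proportional rule is a simplified stand-in for the paper's mechanism, whose actual valuation and incentive analysis are more involved.

```python
def fairshare_prices(marginal_value: dict, budget: float) -> dict:
    """Split a buyer's budget across sellers in proportion to each
    dataset's marginal contribution to model utility.
    (Simplified proportional rule, not the paper's full mechanism.)"""
    total = sum(marginal_value.values())
    if total <= 0:
        # No measurable contribution: split evenly to keep sellers engaged.
        n = len(marginal_value)
        return {s: budget / n for s in marginal_value}
    return {s: budget * v / total for s, v in marginal_value.items()}
```

Tying payment to measured value is what aligns incentives: sellers are rewarded for contributing higher-quality data rather than merely more data.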
Unveiling Intrinsic Dimension of Texts: from Academic Abstract to Creative Story
Neutral · Artificial Intelligence
The study on intrinsic dimension (ID) in large language models (LLMs) reveals its significance in understanding text properties. It highlights that ID is uncorrelated with entropy-based metrics, indicating a distinct measure of geometric complexity. The research also shows genre stratification in ID, with scientific texts having lower ID compared to creative writing, suggesting that LLMs perceive scientific text as simpler. This work utilizes cross-encoder analysis and sparse autoencoders for its findings.
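For readers unfamiliar with intrinsic dimension, a standard way to estimate it is the Two-NN estimator (Facco et al.), which uses only the ratio of each point's second- to first-nearest-neighbor distance. Whether this study uses exactly this estimator is an assumption; the sketch shows the general idea.

```python
import numpy as np

def two_nn_id(points: np.ndarray) -> float:
    """Two-NN intrinsic dimension estimate: with mu_i = r2_i / r1_i
    (second vs. first nearest-neighbor distance), the MLE is
    d = N / sum(log mu_i). Generic estimator, not this paper's code."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # ignore self-distances
    sorted_d = np.sort(d, axis=1)
    mu = sorted_d[:, 1] / sorted_d[:, 0]  # r2 / r1 for each point
    return len(points) / np.log(mu).sum()
```

Points sampled from a one-dimensional curve embedded in a high-dimensional space yield an estimate near 1, which is the sense in which "geometrically simpler" text embeddings can have low ID regardless of their entropy.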
Confidential Prompting: Privacy-preserving LLM Inference on Cloud
Positive · Artificial Intelligence
The paper presents a concept called confidential prompting, aimed at securing user prompts from untrusted cloud-hosted large language models (LLMs). It introduces Petridish, a system utilizing confidential computing and a technology named Secure Partitioned Decoding (SPD). Petridish operates within a confidential virtual machine (CVM) to protect LLM parameters and user prompts from external threats, while efficiently managing user requests through a dual-process system.
ReflexGrad: Three-Way Synergistic Architecture for Zero-Shot Generalization in LLM Agents
Positive · Artificial Intelligence
ReflexGrad is a new architecture designed to enhance zero-shot generalization in large language model (LLM) agents. It integrates three mechanisms: hierarchical TODO decomposition for strategic planning, history-aware causal reflection for identifying failure causes, and gradient-based optimization for systematic improvement. This approach allows agents to learn from experiences without needing task-specific training, marking a significant advancement in reinforcement learning and decision-making.
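The interplay of the three mechanisms can be caricatured as a plan-reflect-refine loop. Everything here is hypothetical scaffolding: the `llm` callable, prompt strings, and memory list stand in for ReflexGrad's far richer decomposition, causal reflection, and gradient-style optimization components.

```python
def reflexgrad_step(task: str, llm, memory: list) -> str:
    """One illustrative iteration of a plan-reflect-refine agent loop.
    `llm` is any callable mapping a prompt string to a response string.
    (Hypothetical sketch, not the ReflexGrad implementation.)"""
    # 1. Hierarchical TODO decomposition: break the task into sub-goals.
    todos = llm(f"Decompose into sub-goals: {task}")
    # 2. Act, conditioning on reflections from earlier failures.
    result = llm(f"Execute: {todos}\nPast reflections: {memory}")
    # 3. History-aware causal reflection: record why the attempt failed.
    memory.append(llm(f"Diagnose the failure cause, if any: {result}"))
    return result
```

Because the loop improves behavior through accumulated reflections rather than weight updates, no task-specific training is needed, which is the zero-shot generalization claim.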
Encoding and Understanding Astrophysical Information in Large Language Model-Generated Summaries
Neutral · Artificial Intelligence
Large Language Models (LLMs) have shown remarkable capabilities in generalizing across various domains and modalities. This study explores their potential to encode astrophysical information typically derived from scientific measurements. The research focuses on two primary questions: the impact of prompting on the codification of physical quantities by LLMs and the linguistic aspects crucial for encoding the physics represented by these measurements. Sparse autoencoders are utilized to extract interpretable features from the text.
MalRAG: A Retrieval-Augmented LLM Framework for Open-set Malicious Traffic Identification
Positive · Artificial Intelligence
MalRAG is a novel retrieval-augmented framework designed for the fine-grained identification of open-set malicious traffic in cybersecurity. As cyber threats continuously evolve, the ability to detect both known and new types of malicious traffic is paramount. This framework utilizes a frozen large language model (LLM) to construct a comprehensive traffic knowledge database, employing adaptive retrieval and prompt engineering techniques to enhance identification capabilities.
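The open-set retrieval idea can be sketched with plain cosine similarity: look up the most similar entry in the traffic knowledge base, and if even the best match falls below a threshold, flag the flow as a previously unseen class. The retrieval step and threshold below are generic assumptions, not MalRAG's actual adaptive pipeline.

```python
import numpy as np

def identify_traffic(flow_vec, db_vecs, db_labels, tau=0.8):
    """Open-set identification sketch: nearest-neighbor lookup in a
    traffic knowledge base with a rejection threshold tau.
    (Generic cosine-similarity retrieval, not MalRAG's pipeline.)"""
    sims = db_vecs @ flow_vec / (
        np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(flow_vec))
    best = int(np.argmax(sims))
    if sims[best] < tau:
        return "unknown"          # open-set: evidence too weak to label
    return db_labels[best]
```

The rejection branch is what makes the setting "open-set": known threat classes are matched against the knowledge base, while genuinely novel traffic is surfaced as unknown instead of being forced into an existing label.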