LDC: Learning to Generate Research Idea with Dynamic Control

arXiv — cs.CL · Monday, November 17, 2025 at 5:00:00 AM
  • A new framework for generating research ideas using large language models has been proposed, addressing the limitations of existing methods that often fail to align with expert standards. The framework employs a two-stage approach that dynamically controls the generation process.
  • This development is significant because it aims to improve the quality of research ideas by balancing key dimensions such as novelty, feasibility, and effectiveness, which are crucial for high-quality research.
  • Although there are no directly related articles, the framework's focus on enhancing research ideation through advanced AI techniques reflects ongoing trends in the field of artificial intelligence and its applications in research.
— via World Pulse Now AI Editorial System


Recommended Readings
Silenced Biases: The Dark Side LLMs Learned to Refuse
Negative · Artificial Intelligence
Safety-aligned large language models (LLMs) are increasingly used in sensitive applications where fairness is crucial. Evaluating their fairness is complex, often relying on standard question-answer schemes that may misinterpret refusal responses as indicators of fairness. This paper introduces the concept of silenced biases, which are unfair preferences hidden within the models' latent space, masked by safety-alignment. Previous methods have limitations, prompting the need for a new approach to assess these biases effectively.
Fair In-Context Learning via Latent Concept Variables
Positive · Artificial Intelligence
The paper titled 'Fair In-Context Learning via Latent Concept Variables' explores the in-context learning (ICL) capabilities of large language models (LLMs) and their potential biases when applied to tabular data. It emphasizes an optimal demonstration selection method that leverages latent concept variables to enhance task adaptation while promoting fairness. The study introduces data augmentation strategies aimed at minimizing correlations between sensitive variables and predictive outcomes, ultimately striving for equitable predictions.
Thinker: Training LLMs in Hierarchical Thinking for Deep Search via Multi-Turn Interaction
Positive · Artificial Intelligence
The article presents Thinker, a hierarchical thinking model designed to enhance the reasoning capabilities of large language models (LLMs) through multi-turn interactions. Unlike previous methods that relied on end-to-end reinforcement learning without supervision, Thinker allows for a more structured reasoning process by breaking down complex problems into manageable sub-problems. Each sub-problem is represented in both natural language and logical functions, improving the coherence and rigor of the reasoning process.
Preference Orchestrator: Prompt-Aware Multi-Objective Alignment for Large Language Models
Positive · Artificial Intelligence
The article introduces the Preference Orchestrator (PRO), a framework designed to enhance the alignment of Large Language Models (LLMs) with diverse human preferences across multiple objectives. Traditional methods rely on manually set preference weights, which can hinder training efficiency and complicate the user experience. PRO addresses these challenges with a lightweight preference adapter that automatically infers prompt-specific preference weights during both training and deployment, improving performance and efficiency.
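The idea of a prompt-conditioned preference adapter can be illustrated with a minimal sketch. Everything below is hypothetical: the linear adapter, the embedding size, and the objective names are illustrative stand-ins, not PRO's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lightweight adapter: a single linear map from a prompt
# embedding to a softmax distribution of weights over objectives
# (e.g. helpfulness, harmlessness, conciseness).
EMBED_DIM, N_OBJECTIVES = 16, 3
W = rng.normal(scale=0.1, size=(N_OBJECTIVES, EMBED_DIM))
b = np.zeros(N_OBJECTIVES)

def preference_weights(prompt_embedding: np.ndarray) -> np.ndarray:
    """Infer prompt-specific weights over objectives (sums to 1)."""
    logits = W @ prompt_embedding + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

def scalar_reward(prompt_embedding: np.ndarray,
                  objective_rewards: np.ndarray) -> float:
    # Combine per-objective reward scores with inferred weights,
    # instead of a single manually tuned global weight vector.
    w = preference_weights(prompt_embedding)
    return float(w @ objective_rewards)

prompt = rng.normal(size=EMBED_DIM)
rewards = np.array([0.9, 0.4, 0.7])  # per-objective scores for one response
weights = preference_weights(prompt)
combined = scalar_reward(prompt, rewards)
```

Because the combined reward is a convex combination of the per-objective scores, it always lies between the lowest and highest objective score for that response.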
Modeling and Predicting Multi-Turn Answer Instability in Large Language Models
Neutral · Artificial Intelligence
The paper titled 'Modeling and Predicting Multi-Turn Answer Instability in Large Language Models' discusses the evaluation of large language models (LLMs) in terms of their robustness during user interactions. The study employs multi-turn follow-up prompts to assess changes in model answers and accuracy dynamics using Markov chains. Results indicate vulnerabilities in LLMs, with a 10% accuracy drop for Gemini 1.5 Flash after a 'Think again' prompt over nine turns, and a 7.5% drop for Claude 3.5 Haiku with a reworded question. The findings suggest that accuracy can be modeled over time.
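The Markov-chain framing can be sketched with a two-state chain over answer correctness. The transition probabilities and initial accuracy below are illustrative values, not the paper's fitted parameters; only the setup (follow-up prompts over nine turns) follows the summary.

```python
import numpy as np

# Two-state Markov chain over answer correctness across follow-up turns:
# states = [correct, incorrect]. Probabilities are illustrative only.
P = np.array([
    [0.92, 0.08],  # correct -> stays correct / flips after "Think again"
    [0.15, 0.85],  # incorrect -> recovers / stays incorrect
])

state = np.array([0.80, 0.20])  # assumed initial accuracy: 80% at turn 0
accuracy = [state[0]]
for turn in range(9):  # nine follow-up turns, mirroring the study's setup
    state = state @ P
    accuracy.append(state[0])

print([round(a, 3) for a in accuracy])
```

With these numbers, accuracy decays from 80% toward the chain's steady state (about 65%), showing how repeated follow-ups can erode accuracy even when each single turn flips answers with low probability.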
Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models
Positive · Artificial Intelligence
The paper titled 'Pre-Attention Expert Prediction and Prefetching for Mixture-of-Experts Large Language Models' introduces a method to enhance the efficiency of Mixture-of-Experts (MoE) Large Language Models (LLMs). The authors propose a pre-attention expert prediction technique that improves accuracy and reduces computational overhead by utilizing activations before the attention block. This approach aims to optimize expert prefetching, achieving about a 15% improvement in accuracy over existing methods.
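The prefetching idea can be sketched as follows. This is a toy illustration under stated assumptions: the linear predictor stands in for whatever model the authors train, the sizes are made up, and the string-valued cache is a placeholder for loading real expert weights into fast memory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Predict a layer's top-k experts from the hidden state *before* that
# layer's attention block, so expert weights can be fetched while the
# attention computation runs. Sizes and the predictor are illustrative.
HIDDEN, N_EXPERTS, TOP_K = 32, 8, 2
predictor = rng.normal(scale=0.1, size=(N_EXPERTS, HIDDEN))

def predict_experts(pre_attention_hidden: np.ndarray) -> list:
    """Return the TOP_K expert ids predicted from pre-attention activations."""
    scores = predictor @ pre_attention_hidden
    return list(np.argsort(scores)[-TOP_K:][::-1])  # descending by score

def prefetch(expert_ids, cache):
    # Begin loading predicted expert weights into fast memory; a miss at
    # routing time would fall back to an on-demand load.
    for e in expert_ids:
        cache.setdefault(e, f"weights_for_expert_{e}")

h = rng.normal(size=HIDDEN)
cache = {}
experts = predict_experts(h)
prefetch(experts, cache)
```

The benefit comes from overlap: if the prediction made before attention matches the router's eventual choice, the expert weights are already resident when the MoE layer needs them.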
A Multifaceted Analysis of Negative Bias in Large Language Models through the Lens of Parametric Knowledge
Neutral · Artificial Intelligence
A recent study published on arXiv examines the phenomenon of negative bias in large language models (LLMs), which refers to their tendency to generate negative responses in binary decision tasks. The research highlights that previous studies have primarily focused on identifying negative attention heads that contribute to this bias. The authors introduce a new evaluation pipeline that categorizes responses based on the model's parametric knowledge, revealing that the format of prompts significantly influences the responses more than the semantics of the content itself.
Multimodal Peer Review Simulation with Actionable To-Do Recommendations for Community-Aware Manuscript Revisions
Positive · Artificial Intelligence
A new interactive web-based system for multimodal peer review simulation has been introduced, aimed at enhancing manuscript revisions prior to submission. This system leverages large language models (LLMs) to integrate textual and visual information, improving the quality of reviews through retrieval-augmented generation (RAG) based on OpenReview data. It converts generated reviews into actionable to-do lists, providing structured guidance for authors and seamlessly integrating with existing academic writing platforms.