Learn to Select: Exploring Label Distribution Divergence for In-Context Demonstration Selection in Text Classification
Positive | Artificial Intelligence
The article discusses a novel approach to in-context learning (ICL) for text classification, emphasizing the importance of selecting appropriate demonstrations. Traditional methods prioritize semantic similarity to the test input and neglect label distribution alignment, which can degrade classification performance. The proposed method, TopK + Label Distribution Divergence (L2D), uses a fine-tuned BERT-like small language model to generate label distributions for the test input and candidate demonstrations and to measure the divergence between them. This dual focus on semantic similarity and label distribution alignment aims to make demonstration selection for large language models (LLMs) more effective.
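A minimal sketch of how such a pipeline could look in practice, under several assumptions not spelled out in the summary: the exact checkpoints (`sentence-transformers/all-MiniLM-L6-v2` for retrieval, a `bert-base-uncased` classifier assumed to be fine-tuned on the target task), KL divergence as the divergence measure, and a two-stage TopK-then-rerank scheme. It is illustrative only, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel, AutoModelForSequenceClassification

# Hypothetical checkpoints; the summary does not name the models used.
EMBED_MODEL = "sentence-transformers/all-MiniLM-L6-v2"
CLS_MODEL = "bert-base-uncased"  # assumed fine-tuned on the target classification task

embed_tok = AutoTokenizer.from_pretrained(EMBED_MODEL)
embedder = AutoModel.from_pretrained(EMBED_MODEL)
cls_tok = AutoTokenizer.from_pretrained(CLS_MODEL)
classifier = AutoModelForSequenceClassification.from_pretrained(CLS_MODEL, num_labels=2)

@torch.no_grad()
def embed(texts):
    # Mean-pooled sentence embeddings for semantic (TopK) retrieval.
    batch = embed_tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = embedder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

@torch.no_grad()
def label_dist(texts):
    # Label distributions from the small fine-tuned classifier.
    batch = cls_tok(texts, padding=True, truncation=True, return_tensors="pt")
    return F.softmax(classifier(**batch).logits, dim=-1)

def select_demonstrations(query, pool, k_semantic=20, k_final=4):
    # Stage 1: retrieve the TopK semantically similar candidates.
    q_emb = embed([query])
    pool_emb = embed(pool)
    sims = F.cosine_similarity(q_emb, pool_emb)
    top_idx = sims.topk(min(k_semantic, len(pool))).indices.tolist()
    candidates = [pool[i] for i in top_idx]

    # Stage 2: re-rank by KL divergence between the query's predicted
    # label distribution and each candidate's (smaller = better aligned).
    q_dist = label_dist([query])       # shape (1, num_labels)
    c_dist = label_dist(candidates)    # shape (K, num_labels)
    kl = (q_dist * (q_dist.log() - c_dist.log())).sum(-1)
    order = kl.argsort().tolist()
    return [candidates[i] for i in order[:k_final]]
```

The selected demonstrations would then be prepended to the LLM prompt in the usual ICL fashion; the divergence measure, candidate pool sizes, and prompt format are all choices the paper itself may make differently.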
— via World Pulse Now AI Editorial System
