State of the Art in Text Classification for South Slavic Languages: Fine-Tuning or Prompting?
The study compares fine-tuned BERT-like models with large language models (LLMs) for text classification in less-resourced South Slavic languages, across tasks including sentiment classification, topic classification, and genre identification. The findings show that LLMs can match BERT-like models even in zero-shot settings, without task-specific training data, suggesting potential for broader application. However, LLMs suffer from slower inference and higher computational costs, which limit their practical use at scale. Fine-tuned BERT-like models therefore remain the more viable option for large-scale automatic text annotation, balancing performance with efficiency.
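To make the zero-shot setup concrete, the sketch below shows how such a classification prompt might be built and its answer mapped back onto a closed label set. This is an illustrative assumption, not the paper's actual pipeline: the label set, prompt wording, and the `call_llm` inference function are all hypothetical placeholders.

```python
# Hypothetical sketch of zero-shot LLM classification, as contrasted
# in the study with fine-tuned BERT-like models. The actual LLM call
# is stubbed out; `call_llm` is a placeholder, not a real API.

LABELS = ["positive", "negative", "neutral"]  # example sentiment labels

def build_zero_shot_prompt(text: str, labels: list[str]) -> str:
    """Build a zero-shot prompt: candidate labels plus the text,
    with no labeled examples (hence 'zero-shot')."""
    label_list = ", ".join(labels)
    return (
        f"Classify the following text into exactly one of these labels: {label_list}.\n"
        f"Text: {text}\n"
        "Answer with the label only."
    )

def parse_label(response: str, labels: list[str]) -> str:
    """Map a free-form model reply back onto the closed label set;
    fall back to the first label if nothing matches."""
    cleaned = response.strip().lower()
    for label in labels:
        if label in cleaned:
            return label
    return labels[0]

prompt = build_zero_shot_prompt("Odlična usluga, sve preporuke!", LABELS)
# response = call_llm(prompt)  # hypothetical LLM inference call
response = " Positive."        # stand-in for a model reply
print(parse_label(response, LABELS))  # → positive
```

A fine-tuned BERT-like classifier, by contrast, would skip the prompt entirely and emit a label distribution in a single cheap forward pass, which is the efficiency advantage the study points to for large-scale annotation.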
— via World Pulse Now AI Editorial System
