Large Language Models for Sentiment Analysis to Detect Social Challenges: A Use Case with South African Languages

arXiv — cs.CL · Monday, November 24, 2025 at 5:00:00 AM
  • Recent research has explored the application of large language models (LLMs) for sentiment analysis in South African languages, focusing on their ability to detect social challenges through social media posts. The study evaluates the zero-shot performance of models including GPT-3.5, GPT-4, Llama 2, PaLM 2, and Dolly 2 in analyzing sentiment polarities across topics in English, Sepedi, and Setswana.
  • This development is significant as it enables government departments to identify and address social issues more effectively by leveraging advanced AI technologies. The ability to analyze sentiment in multiple languages can enhance understanding of public opinion and improve responsiveness to community needs.
  • The findings contribute to ongoing discussions about the role of AI in social sciences, particularly in multilingual contexts. The effectiveness of LLMs in diverse linguistic settings raises questions about their adaptability and accuracy, especially in low-resource languages, and highlights the importance of addressing potential biases and limitations in AI models.
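The zero-shot setup described above can be sketched in a few lines: the model is given only an instruction and the post, with no labelled examples. The helper names below (`build_prompt`, `parse_label`) and the sample Setswana text are illustrative assumptions, not the paper's actual prompts; any LLM client could be plugged in where the reply is obtained.

```python
# Minimal sketch of zero-shot sentiment prompting (hypothetical helpers;
# the study's exact prompt wording is not public in this summary).

def build_prompt(post: str, language: str) -> str:
    """Construct a zero-shot prompt: an instruction and the post, no examples."""
    return (
        f"Classify the sentiment of the following {language} social media post "
        "as exactly one of: positive, negative, neutral.\n\n"
        f"Post: {post}\n"
        "Sentiment:"
    )

def parse_label(reply: str) -> str:
    """Map a free-form model reply onto one of the three polarity labels."""
    reply = reply.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in reply:
            return label
    return "unknown"  # off-format reply; counted as an error during evaluation

# Example usage with an illustrative Setswana post.
prompt = build_prompt("Ditsela tsa rona di senyegile thata.", "Setswana")
print(parse_label("The sentiment is Negative."))  # -> negative
```

In a zero-shot evaluation like this, the parser matters almost as much as the prompt: low-resource-language replies are more likely to drift off-format, which is one reason accuracy can degrade for Sepedi and Setswana relative to English.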
— via World Pulse Now AI Editorial System


Continue Reading
Prompt-Based Clarity Evaluation and Topic Detection in Political Question Answering
Neutral · Artificial Intelligence
A recent study has focused on the automatic evaluation of large language model (LLM) responses in political question-answering, emphasizing the importance of clarity alongside factual correctness. Utilizing the CLARITY dataset from the SemEval 2026 shared task, the research compares the performance of GPT-3.5 and GPT-5.2 under various prompting strategies, revealing that GPT-5.2 significantly outperforms its predecessor in clarity prediction.
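The "various prompting strategies" mentioned above can be illustrated with a toy comparison of a zero-shot versus a few-shot evaluation prompt. The templates, field names, and example ratings below are assumptions for illustration only, not the CLARITY shared task's actual format.

```python
# Hypothetical sketch of prompt-based clarity scoring under two strategies.

ZERO_SHOT = (
    "Rate the clarity of this answer to a political question on a 1-5 scale.\n"
    "Question: {question}\nAnswer: {answer}\nClarity (1-5):"
)

# A single in-context example turns the zero-shot prompt into a few-shot one.
FEW_SHOT_PREFIX = (
    "Question: What is your tax policy?\n"
    "Answer: We will adjust rates as conditions require.\n"
    "Clarity (1-5): 2\n\n"
)

def make_eval_prompt(question: str, answer: str, strategy: str = "zero_shot") -> str:
    """Build the evaluation prompt for the chosen prompting strategy."""
    body = ZERO_SHOT.format(question=question, answer=answer)
    return body if strategy == "zero_shot" else FEW_SHOT_PREFIX + body

p = make_eval_prompt(
    "Will you raise the minimum wage?",
    "Yes, in stages over the next two years.",
    strategy="few_shot",
)
```

Comparing models under both templates, as the study does for GPT-3.5 and GPT-5.2, isolates how much of a clarity-prediction gain comes from the model versus the prompting strategy.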
Incentivizing Multi-Tenant Split Federated Learning for Foundation Models at the Network Edge
Positive · Artificial Intelligence
A novel Price-Incentive Mechanism (PRINCE) has been proposed to enhance Multi-Tenant Split Federated Learning (SFL) for Foundation Models (FMs) like GPT-4, enabling efficient fine-tuning on resource-constrained devices while maintaining privacy. This mechanism addresses the coordination challenges faced by multiple SFL tenants with diverse fine-tuning needs.
