Creation of the Estonian Subjectivity Dataset: Assessing the Degree of Subjectivity on a Scale

arXiv — cs.CL · Thursday, December 11, 2025 at 5:00:00 AM
  • The Estonian Subjectivity Dataset has been created to assess document-level subjectivity in the Estonian language, comprising 1,000 documents rated on a scale from 0 (objective) to 100 (subjective) by four annotators. Initial experiments using a large language model (LLM) such as GPT-5 for automatic subjectivity analysis showed promising results, although some discrepancies with the human annotations were noted; a sketch of such a comparison appears after this summary.
  • This dataset is significant as it enhances the understanding of subjectivity in Estonian texts, potentially benefiting various applications in natural language processing, sentiment analysis, and automated content evaluation. The findings may also inform future developments in LLMs and their integration into language-specific tasks.
  • The creation of this dataset aligns with ongoing efforts to improve language processing tools for underrepresented languages, such as Estonian and Basque. These initiatives highlight the importance of developing localized datasets that can enhance the performance of AI models in diverse linguistic contexts, addressing challenges in translation and automated scoring systems.
— via World Pulse Now AI Editorial System
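
The summary implies a straightforward evaluation: compare the LLM's 0–100 scores against the human ratings. Below is a minimal sketch of that comparison, assuming a hypothetical CSV export with one row per document, four annotator columns, and an LLM score column; none of these names come from the paper.

```python
# Minimal sketch: comparing LLM subjectivity scores (0-100) against the mean
# of four human annotators. The file name and column names are hypothetical;
# the actual dataset format is not specified in the summary.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("subjectivity.csv")  # hypothetical export of the dataset

annotator_cols = ["ann1", "ann2", "ann3", "ann4"]  # assumed column names
df["human_mean"] = df[annotator_cols].mean(axis=1)

# Pearson correlation and mean absolute error between LLM and human scores.
r, p = pearsonr(df["llm_score"], df["human_mean"])
mae = (df["llm_score"] - df["human_mean"]).abs().mean()
print(f"Pearson r = {r:.3f} (p = {p:.3g}), MAE = {mae:.1f} points")
```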

Continue Reading
DeepSeek's WEIRD Behavior: The cultural alignment of Large Language Models and the effects of prompt language and cultural prompting
Neutral · Artificial Intelligence
A recent study examines the cultural alignment of Large Language Models (LLMs), focusing on how prompt language and cultural prompting affect their outputs. Using Hofstede's VSM13 international surveys, the authors analyzed how closely models such as DeepSeek-V3 and OpenAI's GPT-5 align with cultural responses from the United States and China, finding significant alignment with the U.S. but not with China.
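
As a rough illustration of the survey-based comparison described above, one can treat a model's elicited Hofstede dimension scores as a vector and measure its distance to a country's profile. The values below are placeholders for illustration only, not results from the study.

```python
# Illustrative sketch (not the paper's code): quantify how closely a model's
# Hofstede-style dimension scores track a country's survey-derived profile.
import numpy as np

# Hofstede's six cultural dimensions.
dimensions = ["PDI", "IDV", "MAS", "UAI", "LTO", "IVR"]

us_profile = np.array([40, 91, 62, 46, 26, 68])  # illustrative US-style scores
model_scores = np.array([45, 85, 60, 50, 30, 65])  # hypothetical model-elicited scores

# Euclidean distance as a simple (mis)alignment measure: smaller = closer.
dist = np.linalg.norm(model_scores - us_profile)
print(f"Distance from US profile: {dist:.1f}")
```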
The 70% factuality ceiling: why Google’s new ‘FACTS’ benchmark is a wake-up call for enterprise AI
Neutral · Artificial Intelligence
Google has introduced a new benchmark called 'FACTS' aimed at measuring the factual accuracy of generative AI models, addressing a critical gap in existing benchmarks that focus primarily on task completion rather than the truthfulness of the information generated. This initiative is particularly significant for industries where accuracy is essential, such as legal, finance, and medical sectors.
Disrupting Hierarchical Reasoning: Adversarial Protection for Geographic Privacy in Multimodal Reasoning Models
Positive · Artificial Intelligence
A new framework named ReasonBreak has been introduced to address privacy concerns associated with multi-modal large reasoning models (MLRMs), which can infer precise geographic locations from personal images using hierarchical reasoning. This framework employs concept-aware perturbations to disrupt the reasoning processes of MLRMs, aiming to enhance geographic privacy protection.
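
The summary does not detail how ReasonBreak's concept-aware perturbations work, but the general idea of adversarially perturbing an image to disrupt a model's inference can be illustrated with a standard one-step FGSM attack. This is a generic stand-in, not the paper's method.

```python
# Generic FGSM-style perturbation as an illustrative stand-in for perturbing
# an image so a geolocation model's inference degrades. This is NOT
# ReasonBreak's concept-aware method, whose details are in the paper.
import torch

def perturb(image, model, loss_fn, target, epsilon=4 / 255):
    """One gradient-sign step that degrades the model's location prediction."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), target)  # e.g. cross-entropy vs. predicted geo-cell
    loss.backward()
    # Ascend the loss surface to suppress the correct location inference,
    # keeping the change visually small (L-infinity budget epsilon).
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```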
Automatic Essay Scoring and Feedback Generation in Basque Language Learning
Positive · Artificial Intelligence
A new dataset for Automatic Essay Scoring (AES) and feedback generation in Basque has been introduced, consisting of 3,200 essays annotated by experts. This dataset targets the CEFR C1 proficiency level and includes detailed feedback on various scoring criteria. The study demonstrates that fine-tuning open-source models like Latxa can outperform established systems such as GPT-5 in scoring consistency and feedback quality.
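
Scoring consistency in AES work is conventionally measured with quadratic weighted kappa (QWK), which penalizes large disagreements more than small ones. A minimal sketch follows; the scores and scale are hypothetical, since the summary does not specify the dataset's rubric.

```python
# Sketch: quadratic weighted kappa (QWK), the usual agreement metric for
# essay-scoring consistency. The score values here are hypothetical.
from sklearn.metrics import cohen_kappa_score

human = [3, 4, 2, 5, 3, 4]  # expert scores (hypothetical)
model = [3, 4, 3, 5, 2, 4]  # model-predicted scores (hypothetical)

qwk = cohen_kappa_score(human, model, weights="quadratic")
print(f"QWK = {qwk:.3f}")  # 1.0 = perfect agreement, 0 = chance level
```

On an ordinal scale, QWK is preferred over plain accuracy because predicting 4 when the expert said 5 is a far smaller error than predicting 1.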
Automating High Energy Physics Data Analysis with LLM-Powered Agents
Positive · Artificial Intelligence
A recent study has demonstrated the potential of large language model (LLM) agents to automate high energy physics data analysis, specifically using the Higgs boson diphoton cross-section measurement as a case study. This hybrid system integrates an LLM-based supervisor-coder agent with the Snakemake workflow manager, allowing for autonomous code generation and execution while ensuring reproducibility and determinism.
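
The described architecture, an LLM supervisor-coder agent driving Snakemake, suggests a generate-execute-repair loop. The sketch below shows one plausible shape of that loop; `llm_generate`, the prompt, and the retry policy are assumptions, not the paper's implementation.

```python
# Hedged sketch of a supervisor-coder pattern: an LLM writes a Snakemake
# workflow, the workflow manager executes it, and failures are fed back for
# another attempt. The paper's actual agent design may differ.
import subprocess

def run_pipeline(llm_generate, task: str, max_attempts: int = 3) -> bool:
    feedback = ""
    for _ in range(max_attempts):
        # llm_generate is a placeholder for any LLM call returning Snakefile text.
        snakefile = llm_generate(f"Write a Snakefile for: {task}\n{feedback}")
        with open("Snakefile", "w") as f:
            f.write(snakefile)
        # Snakemake itself provides the reproducible, deterministic execution.
        result = subprocess.run(["snakemake", "--cores", "1"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True
        feedback = f"Previous attempt failed:\n{result.stderr[-2000:]}"
    return False
```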
Reasoning Models Ace the CFA Exams
Positive · Artificial Intelligence
Recent evaluations of advanced reasoning models on mock Chartered Financial Analyst (CFA) exams have shown impressive results, with models like Gemini 3.0 Pro achieving a record score of 97.6% on Level I. This study involved 980 questions across three levels of the CFA exams, and most models successfully passed all levels, indicating a significant improvement in their performance compared to previous assessments of large language models (LLMs).
OpenAI's New GPT-5.1 Models are Faster and More Conversational
Positive · Artificial Intelligence
OpenAI has launched upgrades to its GPT-5 model, introducing GPT-5.1 Instant for improved instruction following, GPT-5.1 Thinking for faster reasoning, and GPT-5.1-Codex-Max for enhanced coding capabilities. These updates aim to enhance user interaction and response quality in AI applications.