How Linguistics Learned to Stop Worrying and Love the Language Models

arXiv — cs.CL — Thursday, November 13, 2025 at 5:00:00 AM
The article discusses the evolving relationship between linguistics and language models (LMs), which can generate fluent and grammatical text. While some argue that LMs do not truly learn language and that their success diminishes the need for linguistic theory, the authors contend that both views are incorrect. They assert that LMs can significantly contribute to understanding linguistic structure, language processing, and learning. Furthermore, LMs challenge traditional arguments in linguistics, prompting a reevaluation of foundational concepts. The authors present an optimistic perspective on how LMs can serve as model systems and proofs of concept for gradient and usage-based approaches to language.


Recommended Readings
Evaluation of OpenAI o1: Opportunities and Challenges of AGI
Positive — Artificial Intelligence
This study evaluates OpenAI's o1-preview large language model, highlighting its performance across various complex reasoning tasks in fields such as computer science, mathematics, and medicine. The model achieved a success rate of 83.3% in competitive programming, excelled in generating radiology reports, and demonstrated 100% accuracy in high school-level math tasks. Its advanced natural language inference capabilities further underscore its potential in diverse applications.
On the Entropy Calibration of Language Models
Neutral — Artificial Intelligence
The paper examines entropy calibration in language models, focusing on whether their entropy aligns with log loss on human text. Previous studies indicated that as text generation lengthens, entropy increases while text quality declines, highlighting a fundamental issue in autoregressive models. The authors investigate whether miscalibration can improve with scale and if calibration without tradeoffs is theoretically feasible, analyzing the scaling behavior concerning dataset size and power law exponents.
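The calibration question above can be made concrete with a toy computation. The sketch below is illustrative only (the distributions are invented, not from the paper): a model is entropy-calibrated when the entropy of its next-token distribution matches its log loss on human-written text, and an overconfident model's entropy undershoots its log loss.

```python
import math

def entropy(dist):
    """Shannon entropy of a next-token distribution, in nats."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def log_loss(dist, token):
    """Negative log-probability the model assigns to an observed token."""
    return -math.log(dist[token])

# Hypothetical next-token distribution from a model that is overconfident
# relative to how humans actually continue the text.
model_dist = {"the": 0.90, "a": 0.05, "this": 0.05}

# Suppose human continuations are more spread out than the model expects.
human_dist = {"the": 0.60, "a": 0.25, "this": 0.15}

# Average log loss of the model on human-distributed continuations.
avg_log_loss = sum(p * log_loss(model_dist, tok) for tok, p in human_dist.items())
model_entropy = entropy(model_dist)

# A calibrated model would have entropy equal to its log loss on human text;
# this overconfident model's entropy undershoots it.
print(f"model entropy: {model_entropy:.3f} nats")
print(f"avg log loss:  {avg_log_loss:.3f} nats")
```

The gap between the two numbers is the miscalibration the paper studies; the question of whether it shrinks with scale amounts to asking how this gap behaves as dataset size grows.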
Studies with impossible languages falsify LMs as models of human language
Neutral — Artificial Intelligence
A study published on arXiv examines the learning capabilities of infants and language models (LMs) regarding attested versus impossible languages. The research indicates that both groups find attested languages easier to learn than those with unnatural structures. However, the findings reveal that LMs can learn many impossible languages as effectively as attested ones. The study suggests that the complexity of these languages, rather than their impossibility, contributes to the challenges faced by LMs, which lack the human inductive biases essential for language acquisition.
Are language models rational? The case of coherence norms and belief revision
Neutral — Artificial Intelligence
The paper titled 'Are language models rational? The case of coherence norms and belief revision' explores the application of rationality norms, specifically coherence norms, to language models. It distinguishes between logical coherence norms and those related to the strength of belief. The authors introduce the Minimal Assent Connection (MAC), a new framework for understanding credence in language models based on internal token probabilities. The findings suggest that while some language models adhere to these rational norms, others do not, raising important questions about AI behavior and safety.
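The broad idea of reading a credence off internal token probabilities can be sketched as follows. This is not the paper's definition of the Minimal Assent Connection; it is a minimal illustration with invented logits, showing one way a yes/no credence can be extracted and one coherence norm (credences in a proposition and its negation summing to one) that such credences may or may not satisfy.

```python
import math

def credence_from_logits(logit_yes, logit_no):
    """Renormalize the model's 'Yes'/'No' token scores into a credence in
    [0, 1] via a two-way softmax, keeping the 'Yes' share."""
    p_yes = math.exp(logit_yes)
    p_no = math.exp(logit_no)
    return p_yes / (p_yes + p_no)

# Hypothetical logits for a factual question P and for its negation.
cred_p = credence_from_logits(logit_yes=4.2, logit_no=-1.1)
cred_not_p = credence_from_logits(logit_yes=-0.9, logit_no=3.8)

# A probabilistically coherent model should have cred(P) + cred(not P) = 1;
# the size of the gap measures the violation of the coherence norm.
incoherence = abs(cred_p + cred_not_p - 1.0)
print(f"cred(P) = {cred_p:.3f}, cred(not P) = {cred_not_p:.3f}, gap = {incoherence:.3f}")
```

Checking how large this gap is across many proposition/negation pairs is one way the question "do language models adhere to coherence norms?" becomes empirically testable.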