LLMs choose friends and colleagues like people, researchers find

Tech Xplore — AI & ML | Tuesday, December 2, 2025 at 2:46:19 PM
  • Researchers have found that large language models (LLMs) make decisions about networking and friendship in ways that closely resemble human behavior, in both synthetic simulations and real-world contexts, suggesting that LLMs can replicate human social decision-making.
  • This development is significant as it indicates that LLMs are not just tools for processing language but can also engage in complex social interactions, potentially enhancing their applications in areas such as social networking and collaborative work environments.
  • The findings highlight ongoing discussions about the capabilities and limitations of LLMs, particularly regarding their alignment with human values and decision-making processes. While some studies show LLMs can replicate human cooperation, others raise concerns about their reliability and fairness, indicating a need for further research and refinement in their design.
— via World Pulse Now AI Editorial System


Continue Reading
AI’s Wrong Answers Are Bad. Its Wrong Reasoning Is Worse
Negative | Artificial Intelligence
Recent studies reveal that while AI, particularly generative AI, has improved in accuracy, its flawed reasoning processes pose significant risks in critical sectors such as healthcare, law, and education. These findings highlight the need for a deeper understanding of AI's decision-making mechanisms.
An Interdisciplinary and Cross-Task Review on Missing Data Imputation
Neutral | Artificial Intelligence
A comprehensive review on missing data imputation highlights the challenges posed by incomplete datasets across various fields, including healthcare and e-commerce. The study synthesizes decades of research, categorizing imputation methods from classical techniques to modern machine learning approaches, emphasizing the need for a unified framework to address missingness mechanisms and imputation goals.
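To make the review's classical-versus-machine-learning distinction concrete, here is a brief sketch contrasting a simple column-mean fill with scikit-learn's model-based IterativeImputer; the toy matrix and the library choice are illustrative assumptions, not taken from the review itself.

```python
# Two ends of the imputation spectrum: a classical statistical fill (column mean)
# versus a model-based iterative imputer that regresses each column on the others.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (required to enable IterativeImputer)
from sklearn.impute import IterativeImputer

# Toy matrix with missing entries (illustrative only).
X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [5.0, np.nan],
              [7.0, 8.0]])

mean_filled = SimpleImputer(strategy="mean").fit_transform(X)      # classical technique
model_filled = IterativeImputer(random_state=0).fit_transform(X)   # ML-based technique
print(mean_filled)
print(model_filled)
```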
Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling
Positive | Artificial Intelligence
A new quantization method called Four Over Six (4/6) has been introduced to enhance the NVFP4 quantization algorithm, which is crucial for large language models (LLMs). This method evaluates two potential scale factors for each block of values, addressing issues of performance degradation during inference and divergence during training that arise from quantization errors in floating-point formats.
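A rough sketch of the block-wise idea described above: for each block of values, try two candidate scale factors, quantize with both, and keep the one with the smaller reconstruction error. The FP4 value grid and the specific candidate pair below are assumptions for illustration, not the paper's exact NVFP4 or 4/6 definition.

```python
# Illustrative only: "evaluate two potential scale factors per block, keep the better one".
import numpy as np

FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # assumed E2M1-style magnitudes

def quantize_block(block, scale):
    """Round each value to the nearest representable FP4 magnitude (signed), then rescale."""
    mags = np.abs(block) / scale
    idx = np.abs(mags[:, None] - FP4_GRID[None, :]).argmin(axis=1)
    return np.sign(block) * FP4_GRID[idx] * scale

def four_over_six_block(block):
    """Try two candidate scales and keep the one with smaller squared error (assumed candidate pair)."""
    amax = np.abs(block).max()
    candidates = [amax / 6.0 + 1e-12, amax / 4.0 + 1e-12]
    best = min(candidates,
               key=lambda s: np.square(block - quantize_block(block, s)).sum())
    return quantize_block(block, best), best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=16).astype(np.float32)  # a small block of values
    deq, scale = four_over_six_block(block)
    print("chosen scale:", scale, "mse:", np.square(block - deq).mean())
```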
Escaping Collapse: The Strength of Weak Data for Large Language Model Training
Positive | Artificial Intelligence
Recent research has formalized the role of synthetically-generated data in training large language models (LLMs), highlighting that without proper curation, model performance can plateau or collapse. The study introduces a theoretical framework to determine the necessary curation levels to ensure continuous improvement in LLM performance, drawing inspiration from the boosting technique in machine learning.
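As a loose illustration of what curation can mean in this setting (an assumption-laden sketch, not the paper's framework): generate candidate synthetic examples, keep only those a verifier accepts, and, in a boosting-like fashion, oversample prompts the current model still fails on.

```python
# Loose sketch only: `generate`, `verify`, and `model_solves` are hypothetical callables
# standing in for a synthetic-data generator, a quality filter, and a check of whether
# the current model already handles a given prompt.
import random

def curate_synthetic_round(generate, verify, model_solves, prompts, k=4):
    # Boosting-flavored reweighting: prompts the model fails on are sampled more often.
    weights = [0.2 if model_solves(p) else 1.0 for p in prompts]
    batch = random.choices(prompts, weights=weights, k=k * len(prompts))
    # Curation step: keep only generated examples that pass the verifier.
    candidates = [(p, generate(p)) for p in batch]
    return [(p, y) for p, y in candidates if verify(p, y)]
```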
MoH: Multi-Head Attention as Mixture-of-Head Attention
Positive | Artificial Intelligence
A new architecture called Mixture-of-Head attention (MoH) has been proposed to enhance the efficiency of the multi-head attention mechanism, a key component of the Transformer model. This innovation allows tokens to selectively utilize attention heads, improving inference efficiency while maintaining or exceeding previous accuracy levels. MoH replaces the standard summation in multi-head attention with a weighted summation, introducing flexibility and unlocking additional performance potential.
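A minimal sketch of the mechanism as the summary describes it, assuming a simple top-k router and single-sequence shapes; it is not the paper's implementation. A router assigns per-token weights to the heads, and head outputs are combined by weighted summation rather than a plain sum.

```python
# Illustrative MoH-style attention: per-token head routing plus weighted head summation.
import torch
import torch.nn.functional as F

def moh_style_attention(x, wq, wk, wv, router, top_k=2):
    """x: (seq, d_model); wq/wk/wv: (heads, d_model, d_head); router: (d_model, heads)."""
    heads, _, d_head = wq.shape
    q = torch.einsum("sd,hdf->hsf", x, wq)
    k = torch.einsum("sd,hdf->hsf", x, wk)
    v = torch.einsum("sd,hdf->hsf", x, wv)
    attn = torch.softmax(q @ k.transpose(-1, -2) / d_head**0.5, dim=-1)
    head_out = attn @ v                                   # (heads, seq, d_head)

    # Router produces per-token head weights; only the top-k heads per token are used.
    logits = x @ router                                   # (seq, heads)
    topk_val, topk_idx = logits.topk(top_k, dim=-1)
    masked = torch.full_like(logits, float("-inf")).scatter(-1, topk_idx, topk_val)
    weights = F.softmax(masked, dim=-1)                   # (seq, heads)

    # Weighted summation of head outputs instead of a plain sum.
    return torch.einsum("sh,hsf->sf", weights, head_out)
```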
InnoGym: Benchmarking the Innovation Potential of AI Agents
Positive | Artificial Intelligence
InnoGym has been introduced as the first benchmark and framework aimed at systematically evaluating the innovation potential of AI agents. This initiative focuses on two key metrics: performance gain and novelty, assessing not just the correctness of solutions but also the originality of approaches across 18 tasks from real-world engineering and scientific domains.
Agentic Policy Optimization via Instruction-Policy Co-Evolution
Positive | Artificial Intelligence
A novel framework named INSPO has been introduced to enhance reinforcement learning through dynamic instruction optimization, addressing the limitations of static instructions in Reinforcement Learning with Verifiable Rewards (RLVR). This approach allows for a more adaptive learning process, where instruction candidates evolve alongside the agent's policy, improving multi-turn reasoning capabilities in large language models (LLMs).
Predicting the Performance of Black-box LLMs through Follow-up Queries
Positive | Artificial Intelligence
A recent study has demonstrated a method for predicting the performance of black-box large language models (LLMs) by posing follow-up queries about their outputs. This approach allows researchers to train reliable predictors from the probabilities of the models' responses, achieving accuracy that can surpass traditional white-box approaches that analyze internal mechanisms.
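One way the described workflow could look in code, under assumptions: ask yes/no follow-up questions about the model's own answer, record the probability it assigns to "Yes", and fit a simple classifier on those probabilities to predict correctness. The `query_llm` callable and the follow-up prompts below are hypothetical placeholders, not the study's actual protocol.

```python
# Sketch of a follow-up-query performance predictor for a black-box model.
import numpy as np
from sklearn.linear_model import LogisticRegression

FOLLOW_UPS = [
    "Are you confident in your previous answer? Answer Yes or No.",
    "Would another expert give the same answer? Answer Yes or No.",
]

def follow_up_features(query_llm, prompt, answer):
    """Return the model's P('Yes') for each follow-up question as a feature vector."""
    feats = []
    for q in FOLLOW_UPS:
        p_yes = query_llm(prompt + "\n" + answer + "\n" + q)  # assumed to return P("Yes")
        feats.append(p_yes)
    return np.array(feats)

def train_predictor(features, labels):
    """features: (n, k) follow-up probabilities; labels: 1 if the original answer was correct."""
    return LogisticRegression().fit(features, labels)

# Usage: clf = train_predictor(X_train, y_train); clf.predict_proba(X_new)[:, 1]
```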