Using Text-Based Life Trajectories from Swedish Register Data to Predict Residential Mobility with Pretrained Transformers

arXiv — cs.LG · Wednesday, December 10, 2025
  • A recent study transforms extensive Swedish register data into textual life trajectories to predict residential mobility, drawing on records for 6.9 million individuals between 2001 and 2013. By converting demographic attributes and life-course changes into semantically rich texts, the researchers apply a range of NLP architectures, including LSTM and BERT, to predict residential moves from 2013 to 2017.
  • This development is significant because it addresses persistent challenges in register-data analysis, notably the high cardinality of categorical variables and inconsistencies in coding schemes across years, thereby improving the ability of machine learning models to predict human behavior over time.
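The textualization idea above can be sketched in a few lines. This is an illustrative mock-up, not the paper's actual pipeline: the field names, vocabulary, and sentence template are hypothetical, chosen only to show how a tabular person-year record with high-cardinality codes might become a semantically rich sentence a pretrained language model can consume.

```python
# Hypothetical sketch: serializing one person-year of register-style
# records into text. Field names and phrasing are illustrative only,
# not taken from the study.

def record_to_text(record: dict) -> str:
    """Turn a tabular person-year record into a short descriptive sentence.

    Textualization replaces high-cardinality categorical codes (e.g.
    occupation or municipality codes) with meaningful words, letting a
    pretrained model exploit their semantics instead of treating them
    as arbitrary one-hot categories.
    """
    parts = [
        f"In {record['year']},",
        f"a {record['age']}-year-old {record['sex']}",
        f"worked as a {record['occupation']}",
        f"in {record['municipality']}",
    ]
    if record.get("moved"):
        parts.append("and moved to a new residence")
    return " ".join(parts) + "."

example = {
    "year": 2010, "age": 34, "sex": "woman",
    "occupation": "nurse", "municipality": "Uppsala", "moved": True,
}
print(record_to_text(example))
```

A sequence of such sentences, one per year, forms the "life trajectory" that a model like BERT can then encode for downstream prediction.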
  • The integration of textualized data in predictive modeling reflects a growing trend in artificial intelligence, where the focus is on enhancing the interpretability and effectiveness of models. This approach not only aids in residential mobility predictions but also aligns with broader efforts in the AI community to develop benchmarks for monitoring behavioral changes in various contexts, such as dementia care.
— via World Pulse Now AI Editorial System

Continue Reading
Shape and Texture Recognition in Large Vision-Language Models
Neutral · Artificial Intelligence
The Large Shapes and Textures dataset (LAS&T) has been introduced to enhance the capabilities of Large Vision-Language Models (LVLMs) in recognizing and representing shapes and textures across various contexts. This dataset, created through unsupervised extraction from natural images, serves as a benchmark for evaluating the performance of leading models like CLIP and DINO in shape recognition tasks.
Beyond Wave Variables: A Data-Driven Ensemble Approach for Enhanced Teleoperation Transparency and Stability
Positive · Artificial Intelligence
A new study introduces a data-driven ensemble approach to enhance transparency and stability in bilateral teleoperation systems, addressing challenges posed by communication delays. The framework replaces traditional wave-variable methods with advanced sequence models, including LSTM and CNN-LSTM, optimized through the Optuna algorithm. Experimental validation was conducted using Python, demonstrating the effectiveness of this innovative approach.
A Hybrid Model for Stock Market Forecasting: Integrating News Sentiment and Time Series Data with Graph Neural Networks
Positive · Artificial Intelligence
A recent study introduces a hybrid model for stock market forecasting that integrates news sentiment and time series data using Graph Neural Networks (GNNs). This approach contrasts with traditional models that primarily rely on historical price data, aiming to enhance prediction accuracy by incorporating external signals from financial news articles. The GNN model was evaluated against a baseline Long Short-Term Memory (LSTM) model, demonstrating superior performance in predicting stock price movements.
Language Models for Controllable DNA Sequence Design
Positive · Artificial Intelligence
Researchers have introduced ATGC-Gen, an Automated Transformer Generator designed for controllable DNA sequence design, which generates sequences based on specific biological properties. This model utilizes cross-modal encoding and can operate under various transformer architectures, enhancing its flexibility in training and generation tasks, particularly in promoter and enhancer sequence design.
LUNA: Linear Universal Neural Attention with Generalization Guarantees
Positive · Artificial Intelligence
A new linear attention mechanism named LUNA has been introduced, addressing the computational bottleneck of traditional softmax attention, which operates at a quadratic cost. LUNA achieves linear cost while maintaining or exceeding the accuracy of quadratic attention by learning the kernel feature map tailored to specific data and tasks.
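The quadratic-versus-linear trade-off can be made concrete with a minimal sketch. The feature map below is the fixed ELU+1 map from earlier linear-attention work, used here only as a stand-in: LUNA's contribution is to *learn* the kernel feature map from the data, which this sketch does not do.

```python
import numpy as np

# Minimal linear-attention sketch (not LUNA's learned kernel).
# With a positive feature map phi, attention output can be computed as
#   phi(Q) @ (phi(K).T @ V),
# costing O(n * d^2) in sequence length n, instead of
#   softmax(Q @ K.T) @ V, which costs O(n^2 * d).

def phi(x):
    # ELU(x) + 1: a simple fixed positive feature map; LUNA would
    # instead learn this map for the task at hand.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    Qf, Kf = phi(Q), phi(K)        # (n, d) feature-mapped queries/keys
    KV = Kf.T @ V                  # (d, d_v): single pass over the sequence
    Z = Qf @ Kf.sum(axis=0)        # (n,): per-query normalization
    return (Qf @ KV) / Z[:, None]  # (n, d_v)

rng = np.random.default_rng(0)
n, d = 6, 4
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

Because `phi` is positive, each output row is still a convex combination of the value rows, just as in softmax attention, but the n-by-n attention matrix is never materialized.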
Emergent Granger Causality in Neural Networks: Can Prediction Alone Reveal Structure?
Neutral · Artificial Intelligence
A novel approach to Granger Causality (GC) using deep neural networks (DNNs) has been proposed, focusing on the joint modeling of multivariate time series data. This method aims to enhance the understanding of complex associations that traditional vector autoregressive models struggle to capture, particularly in non-linear contexts.
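The prediction-based reading of Granger causality behind this work can be illustrated in the linear case (the paper's point is that DNNs extend it to non-linear settings; this sketch uses ordinary least squares and synthetic data purely for illustration): x "Granger-causes" y if adding x's past to a model of y's past reduces prediction error.

```python
import numpy as np

# Illustrative linear sketch of prediction-based Granger causality,
# on synthetic data where x genuinely drives y.
rng = np.random.default_rng(1)
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

# Restricted model: predict y[t] from y[t-1] alone.
A_r = np.column_stack([y[:-1], np.ones(T - 1)])
res_r = y[1:] - A_r @ np.linalg.lstsq(A_r, y[1:], rcond=None)[0]

# Full model: predict y[t] from both y[t-1] and x[t-1].
A_f = np.column_stack([y[:-1], x[:-1], np.ones(T - 1)])
res_f = y[1:] - A_f @ np.linalg.lstsq(A_f, y[1:], rcond=None)[0]

mse_r, mse_f = np.mean(res_r**2), np.mean(res_f**2)
print(f"restricted MSE={mse_r:.3f}, full MSE={mse_f:.3f}")
# The full model's much lower error signals x -> y in the Granger sense.
```

Replacing the two least-squares fits with neural networks is, in essence, how such methods probe non-linear associations that vector autoregressive models miss.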
QL-LSTM: A Parameter-Efficient LSTM for Stable Long-Sequence Modeling
Neutral · Artificial Intelligence
The introduction of the Quantum-Leap LSTM (QL-LSTM) addresses significant limitations in traditional recurrent neural architectures like LSTM and GRU, particularly in managing long sequences and reducing redundant parameters. This new architecture employs a Parameter-Shared Unified Gating mechanism and a Hierarchical Gated Recurrence with Additive Skip Connections to enhance performance while decreasing the number of parameters by approximately 48 percent.
In-Context and Few-Shots Learning for Forecasting Time Series Data based on Large Language Models
Positive · Artificial Intelligence
A recent study has explored the application of Large Language Models (LLMs) for forecasting time series data, particularly focusing on Google's TimesFM model. The research highlights the potential of LLMs to surpass traditional methods like LSTM and TCN in predictive accuracy, utilizing in-context learning techniques to enhance model performance.