Evolutionary Retrofitting

arXiv (cs.LG) · Monday, November 17, 2025 at 5:00:00 AM
The article discusses AfterLearnER (After Learning Evolutionary Retrofitting), a method that applies evolutionary optimization to fully trained machine learning models. It tunes a selected subset of parameters or hyperparameters against non-differentiable error signals computed on a portion of the validation set, so the signal need not be a differentiable loss. The effectiveness of AfterLearnER is showcased across applications including depth sensing, speech re-synthesis, and image generation. Retrofitting can be performed once after training or dynamically during inference, where it can incorporate user feedback.
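To make the idea concrete, here is a minimal sketch of retrofitting a frozen model's selected parameters with a (1+1) evolution strategy. The optimizer choice and the `validation_error` function are illustrative assumptions, not the paper's actual setup; the point is only that the error signal is treated as a black box, so it need not be differentiable.

```python
import random

def validation_error(params):
    # Hypothetical stand-in for a non-differentiable error signal
    # measured on a validation subset (e.g. a depth-sensing metric).
    # Rounding makes the landscape piecewise-flat, i.e. gradient-free.
    return round(sum((p - 0.5) ** 2 for p in params), 2)

def retrofit(params, iterations=500, sigma=0.1, seed=0):
    """(1+1) evolution strategy: mutate the retrofitted parameters and
    keep a mutation only if it does not worsen the validation error."""
    rng = random.Random(seed)
    best, best_err = list(params), validation_error(params)
    for _ in range(iterations):
        candidate = [p + rng.gauss(0, sigma) for p in best]
        err = validation_error(candidate)
        if err <= best_err:  # accept equal-or-better candidates
            best, best_err = candidate, err
    return best, best_err

# Retrofit a small parameter vector of an otherwise-frozen model.
tuned, err = retrofit([0.0, 1.0, -0.3])
```

Because the trained model itself stays frozen and only a few exposed parameters are mutated, the same loop could in principle run post-training or online during inference.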
— via World Pulse Now AI Editorial System


Recommended Readings
A Critical Study of Automatic Evaluation in Sign Language Translation
Neutral · Artificial Intelligence
A recent study published on arXiv investigates the effectiveness of automatic evaluation metrics in sign language translation (SLT). Current metrics like BLEU and ROUGE are text-based, raising questions about their reliability in assessing SLT outputs. The study analyzes six metrics, including BLEU, chrF, and ROUGE, alongside LLM-based evaluators such as G-Eval and GEMBA. It assesses these metrics under controlled conditions, revealing limitations in lexical overlap metrics and highlighting the advantages of LLM-based evaluators in capturing semantic equivalence.
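The weakness of lexical-overlap metrics that the study probes can be shown with a toy unigram-F1 score. This is a deliberately simplified stand-in for BLEU/chrF-style scoring, not the metrics or data used in the study: a faithful paraphrase that shares almost no surface tokens with the reference gets a near-zero score even though the meaning is preserved.

```python
def unigram_f1(hypothesis, reference):
    """Toy lexical-overlap metric: F1 over unigram token matches.
    Illustrative only; real BLEU/chrF use n-grams/char n-grams."""
    h, r = hypothesis.lower().split(), reference.lower().split()
    overlap = sum(min(h.count(w), r.count(w)) for w in set(h))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(h), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

reference = "the meeting starts at noon"
paraphrase = "our session begins at midday"  # same meaning, new words

print(unigram_f1(reference, reference))   # identical text scores 1.0
print(unigram_f1(paraphrase, reference))  # paraphrase scores only 0.2
```

An LLM-based evaluator judging semantic equivalence would rate the paraphrase highly, which is the gap between lexical overlap and semantic adequacy that the study highlights for SLT outputs.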
Model Class Selection
Neutral · Artificial Intelligence
The article discusses the concept of Model Class Selection (MCS), which extends the traditional model selection framework. Unlike classical model selection that identifies a single optimal model, MCS aims to find a set of near-optimal models. The study highlights the effectiveness of data splitting approaches in providing general solutions for MCS and explores the performance of simpler statistical models compared to complex machine learning models through various simulated and real-data experiments.
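A data-splitting approach to MCS can be sketched as follows. The candidate models, the quadratic loss, and the additive tolerance are illustrative assumptions, not the study's experimental design; the key difference from classical model selection is that the procedure returns a *set* of near-optimal models rather than a single winner.

```python
import random
import statistics

def fit_mean(train):
    # Baseline model: predict the training mean of y for every x.
    m = statistics.mean(y for _, y in train)
    return lambda x: m

def fit_linear(train):
    # Simple least-squares line in centered form.
    xs = [x for x, _ in train]
    ys = [y for _, y in train]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return lambda x: my + slope * (x - mx)

def model_class_selection(models, data, tolerance=0.05, seed=0):
    """Fit each candidate on a train split, score it on a held-out
    split, and return every model within `tolerance` of the best."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    train, valid = shuffled[:half], shuffled[half:]
    scores = {}
    for name, fit in models.items():
        predict = fit(train)
        scores[name] = statistics.mean(
            (predict(x) - y) ** 2 for x, y in valid)
    best = min(scores.values())
    return {name for name, s in scores.items() if s <= best + tolerance}

# On noiseless linear data, only the linear model is near-optimal.
data = [(x, 2.0 * x + 1.0) for x in range(20)]
selected = model_class_selection({"mean": fit_mean, "linear": fit_linear},
                                 data)
```

With noisier data or a looser tolerance, several models can land in the returned set, which is how simpler statistical models can remain competitive members of the near-optimal class alongside more complex ones.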