Rectified Noise: A Generative Model Using Positive-incentive Noise

arXiv — cs.LG · Thursday, November 13, 2025 at 5:00:00 AM
The introduction of Rectified Noise (RN) marks a significant advance in generative modeling, building on the established Rectified Flow (RF) framework. By injecting Positive-incentive Noise (pi-noise) into the velocity fields of pre-trained RF models, RN improves generative performance, reducing the Fréchet Inception Distance (FID) on ImageNet-1k from 10.16 to 9.05. The improvement comes at minimal additional training cost, requiring only 0.39% more parameters, which underscores the efficiency of the approach. Extensive experiments across a range of model architectures validate the effectiveness of RN, indicating its potential to influence future generative models. The findings highlight the value of principled noise-injection techniques for improving model performance, paving the way for more sophisticated applications in artificial intelligence.
— via World Pulse Now AI Editorial System
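
To make the idea concrete, here is a minimal sketch of how Euler sampling from a rectified-flow velocity field might be extended with a small learned noise head. The module names (`VelocityNet`, `PiNoiseHead`) and the additive injection `v + sigma * eps` are illustrative assumptions for exposition, not the paper's exact parameterization; the summary only states that pi-noise is added to the velocity fields of a pre-trained RF model via a small number of extra parameters.

```python
# Illustrative sketch (PyTorch). VelocityNet stands in for a pre-trained
# rectified-flow model; PiNoiseHead is a hypothetical small add-on module
# (the paper reports only ~0.39% extra parameters). The additive noise
# injection below is an assumption, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VelocityNet(nn.Module):
    """Stand-in for a pre-trained rectified-flow velocity field v(x, t)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(),
                                 nn.Linear(256, dim))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

class PiNoiseHead(nn.Module):
    """Tiny learned module predicting a per-dimension noise scale."""
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Sequential(nn.Linear(dim + 1, 32), nn.SiLU(),
                                   nn.Linear(32, dim))

    def forward(self, x, t):
        # softplus keeps the predicted noise scale non-negative
        return F.softplus(self.scale(torch.cat([x, t], dim=-1)))

@torch.no_grad()
def sample(v_net, noise_head, dim=64, steps=50, batch=8):
    """Euler integration from t=0 (noise) to t=1 (data), with learned
    noise added to the velocity at each step (illustrative assumption)."""
    x = torch.randn(batch, dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((batch, 1), i * dt)
        v = v_net(x, t)
        sigma = noise_head(x, t)             # input-dependent noise scale
        v = v + sigma * torch.randn_like(v)  # pi-noise injection
        x = x + v * dt
    return x

x = sample(VelocityNet(64), PiNoiseHead(64))
print(x.shape)  # torch.Size([8, 64])
```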

Recommended Readings
PrivDFS: Private Inference via Distributed Feature Sharing against Data Reconstruction Attacks
Positive · Artificial Intelligence
The paper introduces PrivDFS, a distributed feature-sharing framework designed for input-private inference in image classification. It addresses a vulnerability of split inference in which Data Reconstruction Attacks (DRAs) can recover inputs with high fidelity. By fragmenting the intermediate representation and processing the fragments independently across a majority-honest set of servers, PrivDFS limits any single server's ability to reconstruct the input while keeping accuracy within 1% of non-private baselines.
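
As a rough illustration of the fragmentation idea, the sketch below splits an intermediate feature map channel-wise across servers. This is one plausible sharing scheme for exposition only; PrivDFS's actual fragmentation and fusion mechanisms may differ, and `remote_branch` is a hypothetical stand-in for a per-server sub-network.

```python
# Illustrative sketch (PyTorch) of channel-wise feature fragmentation.
# The actual PrivDFS sharing scheme may differ from this simplification.
import torch

def fragment(features: torch.Tensor, num_servers: int):
    """Split (B, C, H, W) features channel-wise into num_servers shares."""
    return list(torch.chunk(features, num_servers, dim=1))

def remote_branch(frag: torch.Tensor) -> torch.Tensor:
    """Hypothetical per-server sub-network; here just global pooling."""
    return frag.mean(dim=(2, 3))

feats = torch.randn(4, 64, 14, 14)       # client-side intermediate features
shares = fragment(feats, num_servers=4)  # each server sees only 16 channels
parts = [remote_branch(s) for s in shares]
fused = torch.cat(parts, dim=1)          # client fuses the partial outputs
print(fused.shape)  # torch.Size([4, 64])
```
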
Out-of-Distribution Detection with Positive and Negative Prompt Supervision Using Large Language Models
Positive · Artificial Intelligence
The paper discusses advances in out-of-distribution (OOD) detection, focusing on the integration of visual and textual modalities through large language models (LLMs). It introduces a method called Positive and Negative Prompt Supervision, which improves OOD detection by using class-specific prompts that capture inter-class features. This approach addresses the limitation that negative prompts may include non-ID features, which can lead to suboptimal detection.
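
To illustrate the general shape of prompt-based OOD scoring, the sketch below compares an image embedding against positive and negative prompt embeddings. The scoring rule (best positive-prompt similarity minus best negative-prompt similarity) is an illustrative assumption, not the paper's exact objective, and the random tensors stand in for a CLIP-style encoder's outputs.

```python
# Illustrative sketch of prompt-based OOD scoring; the score below is an
# assumed rule for exposition, not the paper's actual formulation.
import torch
import torch.nn.functional as F

def ood_score(img_emb, pos_prompts, neg_prompts):
    """Higher score => more likely in-distribution."""
    img = F.normalize(img_emb, dim=-1)
    pos = F.normalize(pos_prompts, dim=-1)
    neg = F.normalize(neg_prompts, dim=-1)
    pos_sim = (img @ pos.T).max(dim=-1).values  # best positive-prompt match
    neg_sim = (img @ neg.T).max(dim=-1).values  # best negative-prompt match
    return pos_sim - neg_sim

img = torch.randn(8, 512)    # batch of image embeddings
pos = torch.randn(100, 512)  # positive prompt embeddings (one per ID class)
neg = torch.randn(100, 512)  # class-specific negative prompt embeddings
scores = ood_score(img, pos, neg)
print(scores.shape)  # torch.Size([8])
```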