Peregrine: One-Shot Fine-Tuning for FHE Inference of General Deep CNNs

arXiv — cs.CV · Tuesday, November 25, 2025 at 5:00:00 AM
  • The recent paper 'Peregrine: One-Shot Fine-Tuning for FHE Inference of General Deep CNNs' addresses key challenges in adapting deep convolutional neural networks (CNNs) for fully homomorphic encryption (FHE) inference. It introduces a single-stage fine-tuning strategy and a generalized interleaved packing scheme that improve performance while maintaining accuracy and supporting high-resolution image processing (a toy packing sketch follows this summary).
  • This development is significant as it enables efficient FHE inference across various CNN architectures, potentially transforming how sensitive data is processed in secure environments. By minimizing training overhead and maximizing compatibility, it opens new avenues for deploying deep learning models in privacy-sensitive applications.
  • The advancements in fine-tuning and encryption compatibility reflect a growing trend in AI research towards optimizing models for resource-constrained environments. This aligns with ongoing efforts to enhance model efficiency and robustness, particularly in the context of adversarial training and dataset pruning, highlighting the importance of developing compact yet powerful neural networks for practical applications.
— via World Pulse Now AI Editorial System
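
As a concrete illustration of the packing idea, the sketch below shows a channel-interleaved layout in plain NumPy: each spatial position stores its channel values contiguously in a flat slot vector, which is the general flavor of interleaved packing for FHE inference. The layout, function names, and slot formula here are illustrative assumptions, not Peregrine's actual scheme, which must also account for ciphertext slot counts and rotation-friendly strides.

```python
import numpy as np

def interleave_pack(x: np.ndarray) -> np.ndarray:
    """Pack a (C, H, W) feature map into a flat slot vector with channels
    interleaved at each spatial position:
        slot[(h*W + w)*C + c] = x[c, h, w]
    A toy stand-in for generalized interleaved packing; real FHE packing
    must also respect the ciphertext slot count."""
    C, H, W = x.shape
    return x.transpose(1, 2, 0).reshape(C * H * W)

def interleave_unpack(slots: np.ndarray, C: int, H: int, W: int) -> np.ndarray:
    """Inverse of interleave_pack: recover the (C, H, W) feature map."""
    return slots.reshape(H, W, C).transpose(2, 0, 1)

x = np.arange(2 * 3 * 3).reshape(2, 3, 3)
assert np.array_equal(interleave_unpack(interleave_pack(x), 2, 3, 3), x)
```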


Continue Reading
Annotation-Free Class-Incremental Learning
Positive · Artificial Intelligence
A new paradigm in continual learning, Annotation-Free Class-Incremental Learning (AFCIL), has been introduced, addressing the challenge of learning from unlabeled data that arrives sequentially. This approach allows systems to adapt to new classes without supervision, marking a significant shift from traditional methods reliant on labeled data.
Understanding, Accelerating, and Improving MeanFlow Training
Positive · Artificial Intelligence
Recent advancements in MeanFlow training have clarified the dynamics between instantaneous and average velocity fields, revealing that effective learning of the average velocity depends on first establishing accurate instantaneous velocities. This understanding has led to a new training scheme that accelerates the formation of these velocities, improving the overall training process.
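To make the instantaneous/average distinction concrete: the average velocity over an interval [r, t] is the time average of the instantaneous velocity, u(r, t) = 1/(t − r) · ∫_r^t v(τ) dτ. The toy check below verifies this numerically for an arbitrary illustrative field v(τ) = sin(τ); the choice of v and all names are assumptions, not the paper's formulation.

```python
import numpy as np

# Numerical check: the average velocity over [r, t] is the time average
# of the instantaneous velocity, u(r, t) = 1/(t - r) * integral_r^t v(tau) dtau.
# v(tau) = sin(tau) is an arbitrary illustrative field, not the paper's.
def v(tau):
    return np.sin(tau)

r, t = 0.2, 1.5
taus = np.linspace(r, t, 10_001)
vals = v(taus)
# trapezoidal rule, written out to avoid NumPy version differences
integral = np.sum((vals[:-1] + vals[1:]) / 2 * np.diff(taus))
u_numeric = integral / (t - r)
u_exact = (np.cos(r) - np.cos(t)) / (t - r)   # closed form for sin
assert abs(u_numeric - u_exact) < 1e-8
print(f"average velocity on [{r}, {t}]: {u_numeric:.6f}")
```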
Temporal-adaptive Weight Quantization for Spiking Neural Networks
Positive · Artificial Intelligence
A new study introduces Temporal-adaptive Weight Quantization (TaWQ) for Spiking Neural Networks (SNNs), which aims to reduce energy consumption while maintaining accuracy. This method leverages temporal dynamics to allocate ultra-low-bit weights, demonstrating minimal quantization loss of 0.22% on ImageNet and high energy efficiency in extensive experiments.
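A minimal sketch of what temporal-adaptive weight quantization could look like, assuming one quantization scale per SNN time step and a symmetric ultra-low-bit grid; the function name, rounding rule, and per-step scales are hypothetical, not TaWQ's actual method.

```python
import torch

def quantize_per_timestep(w: torch.Tensor, n_bits: int, t_scales: torch.Tensor):
    """Hypothetical sketch of temporal-adaptive weight quantization:
    each SNN time step gets its own scale, and weights are rounded to a
    symmetric ultra-low-bit grid. Returns one weight tensor per time step.
    t_scales: shape (T,), one positive scale per time step."""
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 1 for a 2-bit symmetric grid
    out = []
    for s in t_scales:
        q = torch.clamp(torch.round(w / s), -qmax, qmax)
        out.append(q * s)                 # dequantized weights used at this step
    return out

w = torch.randn(8, 8)
steps = quantize_per_timestep(w, n_bits=2, t_scales=torch.tensor([0.5, 0.8, 1.2]))
print(len(steps), steps[0].unique())
```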
In Search of Goodness: Large Scale Benchmarking of Goodness Functions for the Forward-Forward Algorithm
Positive · Artificial Intelligence
The Forward-Forward (FF) algorithm presents a biologically plausible alternative to traditional backpropagation in neural networks, focusing on local updates through a scalar measure of 'goodness'. Recent benchmarking of 21 distinct goodness functions across four standard image datasets revealed that certain alternatives significantly outperform the conventional sum-of-squares metric, with notable accuracy improvements on datasets like MNIST and FashionMNIST.
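For context, the Forward-Forward algorithm trains each layer locally to push its scalar goodness above a threshold for positive data and below it for negative data, with goodness conventionally the sum of squared activations. The sketch below shows that objective alongside one alternative goodness (an L1 variant) of the kind such a benchmark would compare; which alternatives actually win is reported in the paper, not asserted here.

```python
import torch
import torch.nn.functional as F

def goodness_sum_sq(h: torch.Tensor) -> torch.Tensor:
    """Conventional FF goodness: sum of squared activations per sample."""
    return h.pow(2).sum(dim=1)

def goodness_l1(h: torch.Tensor) -> torch.Tensor:
    """One illustrative alternative goodness of the kind benchmarked."""
    return h.abs().sum(dim=1)

def ff_layer_loss(h_pos, h_neg, goodness, theta=2.0):
    """Local FF objective: push goodness above theta for positive data and
    below theta for negative data (logistic loss on the margin)."""
    g_pos, g_neg = goodness(h_pos), goodness(h_neg)
    return (F.softplus(theta - g_pos) + F.softplus(g_neg - theta)).mean()

h_pos, h_neg = torch.randn(32, 100).relu(), torch.randn(32, 100).relu()
print(ff_layer_loss(h_pos, h_neg, goodness_sum_sq).item())
```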
BD-Net: Has Depth-Wise Convolution Ever Been Applied in Binary Neural Networks?
Positive · Artificial Intelligence
A recent study introduces BD-Net, which successfully applies depth-wise convolution in Binary Neural Networks (BNNs) by proposing a 1.58-bit convolution and a pre-BN residual connection to enhance expressiveness and stabilize training. This innovation marks a significant advancement in model compression techniques, achieving a new state-of-the-art performance on ImageNet with MobileNet V1 and outperforming previous methods across various datasets.
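'1.58-bit' typically refers to ternary weights in {−1, 0, +1} (log₂ 3 ≈ 1.58 bits). The sketch below shows a ternarized depth-wise convolution with a straight-through estimator and a residual branch that is batch-normalized before the addition; the ternarization threshold and module structure are common heuristics assumed here, not necessarily BD-Net's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryDWConv(nn.Module):
    """Sketch of a '1.58-bit' depth-wise convolution: weights ternarized to
    {-1, 0, +1} with a straight-through estimator. The threshold rule is a
    common heuristic, not necessarily BD-Net's."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, 1, k, k) * 0.1)
        self.bn = nn.BatchNorm2d(channels)  # "pre-BN": normalize before the add

    def forward(self, x):
        w = self.weight
        thr = 0.7 * w.abs().mean()          # ternarization threshold (heuristic)
        w_t = torch.where(w.abs() > thr, torch.sign(w), torch.zeros_like(w))
        w_q = w + (w_t - w).detach()        # straight-through estimator
        y = F.conv2d(x, w_q, padding=1, groups=x.shape[1])  # depth-wise conv
        return x + self.bn(y)               # pre-BN residual connection

x = torch.randn(2, 16, 8, 8)
print(TernaryDWConv(16)(x).shape)
```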
Flow Map Distillation Without Data
Positive · Artificial Intelligence
A new approach to flow map distillation has been introduced, which eliminates the need for external datasets traditionally used in the sampling process. This method aims to mitigate the risks associated with Teacher-Data Mismatch by relying solely on the prior distribution, ensuring that the teacher's generative capabilities are accurately represented without data dependency.
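A minimal sketch of the data-free idea, under the assumption that distillation proceeds by sampling only from the prior, rolling out the teacher's velocity field, and regressing a one-step student map onto the teacher's endpoint; the names (`teacher_v`, `student_map`) and the Euler rollout are illustrative, not the paper's procedure.

```python
import torch

def distill_step(teacher_v, student_map, opt, dim=16, batch=64, n_steps=32):
    """Data-free distillation step: the only 'data' is a prior sample.
    teacher_v(x, t) -> velocity at state x, time t (hypothetical signature);
    student_map(z) -> predicted flow endpoint in one shot."""
    z = torch.randn(batch, dim)                 # sample from the prior only
    x = z.clone()
    ts = torch.linspace(0.0, 1.0, n_steps + 1)
    with torch.no_grad():                       # teacher rollout (Euler)
        for t0, t1 in zip(ts[:-1], ts[1:]):
            x = x + (t1 - t0) * teacher_v(x, t0.expand(batch, 1))
    loss = (student_map(z) - x).pow(2).mean()   # regress student onto endpoint
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

teacher_v = lambda x, t: -x                     # stand-in velocity field
student_map = torch.nn.Linear(16, 16)
opt = torch.optim.Adam(student_map.parameters(), lr=1e-3)
print(distill_step(teacher_v, student_map, opt))
```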
DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation
Positive · Artificial Intelligence
The newly proposed DeCo framework introduces a frequency-decoupled pixel diffusion method for end-to-end image generation, addressing the inefficiencies of existing models that combine high and low-frequency signal modeling within a single diffusion transformer. This innovation allows for improved training and inference speeds by separating the generation processes of high-frequency details and low-frequency semantics.
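One generic way to realize such a frequency split, shown below, is to take the low-frequency component via downsample-then-upsample and define the high-frequency component as the residual, so the two branches can be modeled separately; this decomposition is an assumption in the spirit of the decoupling, not DeCo's exact operator.

```python
import torch
import torch.nn.functional as F

def frequency_decouple(img: torch.Tensor, factor: int = 8):
    """Split an image batch (B, C, H, W) into a low-frequency component
    (coarse semantics, via downsample -> upsample) and a high-frequency
    residual (fine details). A generic decomposition, not DeCo's operator."""
    low = F.interpolate(
        F.interpolate(img, scale_factor=1 / factor, mode="bilinear",
                      align_corners=False),
        size=img.shape[-2:], mode="bilinear", align_corners=False)
    high = img - low                 # residual carries the high frequencies
    return low, high

img = torch.randn(2, 3, 64, 64)
low, high = frequency_decouple(img)
assert torch.allclose(low + high, img, atol=1e-6)  # lossless split
```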
HyM-UNet: Synergizing Local Texture and Global Context via Hybrid CNN-Mamba Architecture for Medical Image Segmentation
Positive · Artificial Intelligence
A novel hybrid architecture named HyM-UNet has been proposed to enhance medical image segmentation by combining the local feature extraction strengths of Convolutional Neural Networks (CNNs) with the global modeling capabilities of Mamba. This architecture employs a Hierarchical Encoder and a Mamba-Guided Fusion Skip Connection to effectively bridge local and global features, addressing the limitations of traditional CNNs in capturing complex anatomical structures.
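A structural sketch of the hybrid idea, with a stand-in global mixer where the Mamba selective SSM would go (single-head self-attention here, since real Mamba blocks require the `mamba_ssm` package) and a gated skip fusion standing in for the Mamba-Guided Fusion Skip Connection; every module name and the gating rule are hypothetical.

```python
import torch
import torch.nn as nn

class GlobalMixer(nn.Module):
    """Placeholder for a Mamba block: any global mixer over the flattened
    spatial sequence (self-attention stands in for the selective SSM)."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        out, _ = self.attn(seq, seq, seq)
        return out.transpose(1, 2).reshape(B, C, H, W)

class FusionSkip(nn.Module):
    """Sketch of a 'guided fusion' skip: the global branch gates the local
    CNN features before they rejoin the decoder."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.Sigmoid())

    def forward(self, local_feat, global_feat):
        return local_feat * self.gate(global_feat) + global_feat

local = nn.Conv2d(1, 32, 3, padding=1)          # CNN branch: local texture
x = torch.randn(1, 1, 32, 32)
f_local = local(x)
f_global = GlobalMixer(32)(f_local)             # global context branch
print(FusionSkip(32)(f_local, f_global).shape)
```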