Peregrine: One-Shot Fine-Tuning for FHE Inference of General Deep CNNs

arXiv — cs.CV · Tuesday, November 25, 2025 at 5:00:00 AM
  • The recent paper titled 'Peregrine: One-Shot Fine-Tuning for FHE Inference of General Deep CNNs' addresses key challenges in adapting deep convolutional neural networks (CNNs) for fully homomorphic encryption (FHE) inference. It introduces a single-stage fine-tuning strategy and a generalized interleaved packing scheme to enhance the performance of CNNs while maintaining accuracy and supporting high-resolution image processing.
  • This development is significant as it enables efficient FHE inference across various CNN architectures, potentially transforming how sensitive data is processed in secure environments. By minimizing training overhead and maximizing compatibility, it opens new avenues for deploying deep learning models in privacy-sensitive applications.
  • The advancements in fine-tuning and encryption compatibility reflect a growing trend in AI research towards optimizing models for resource-constrained environments. This aligns with ongoing efforts to enhance model efficiency and robustness, particularly in the context of adversarial training and dataset pruning, highlighting the importance of developing compact yet powerful neural networks for practical applications.
— via World Pulse Now AI Editorial System


Continue Reading
A Highly Efficient Diversity-based Input Selection for DNN Improvement Using VLMs
Positive · Artificial Intelligence
A recent study has introduced Concept-Based Diversity (CBD), a highly efficient metric for image inputs that utilizes Vision-Language Models (VLMs) to enhance the performance of Deep Neural Networks (DNNs) through improved input selection. This approach addresses the computational intensity and scalability issues associated with traditional diversity-based selection methods.
Explaining with trees: interpreting CNNs using hierarchies
Positive · Artificial Intelligence
A new framework called xAiTrees has been introduced to enhance the interpretability of Convolutional Neural Networks (CNNs) by utilizing hierarchical segmentation techniques. This method aims to provide faithful explanations of neural network reasoning, addressing challenges faced by existing explainable AI (xAI) methods like Integrated Gradients and LIME, which often produce noisy or misleading outputs.
NOVAK: Unified adaptive optimizer for deep neural networks
Positive · Artificial Intelligence
The recent introduction of NOVAK, a unified adaptive optimizer for deep neural networks, combines several advanced techniques including adaptive moment estimation and lookahead synchronization, aiming to enhance the performance and efficiency of neural network training.
When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning
Positive · Artificial Intelligence
A recent study titled 'When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning' proposes a universal training-free method for model calibration, cascading, and data cleaning, enhancing models' ability to recognize their limitations. The research highlights that higher confidence correlates with higher accuracy and that models calibrated on validation sets maintain their calibration on test sets.
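The cascading idea described above can be sketched in a few lines: trust a small model's prediction when its confidence clears a threshold, and defer to a larger model otherwise. This is a minimal illustration of confidence-based cascading in general, not the paper's actual method; the model names, the 0.9 threshold, and the use of top softmax probability as the confidence score are assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cascade_predict(logits_small, logits_large, threshold=0.9):
    """Keep the small model's prediction when its top softmax
    probability is at least `threshold`; otherwise defer to the
    large model. (Illustrative sketch; names and threshold are
    assumptions, not taken from the paper.)"""
    probs_small = softmax(logits_small)
    conf = probs_small.max(axis=-1)
    preds_small = probs_small.argmax(axis=-1)
    preds_large = softmax(logits_large).argmax(axis=-1)
    return np.where(conf >= threshold, preds_small, preds_large), conf

# Toy logits for 3 inputs and 2 classes: the middle input is
# low-confidence for the small model, so it gets deferred.
small = np.array([[4.0, 0.0], [0.2, 0.1], [0.0, 5.0]])
large = np.array([[0.0, 3.0], [3.0, 0.0], [0.0, 3.0]])
preds, conf = cascade_predict(small, large)
```

Because confidence correlates with accuracy (the study's central observation), routing only low-confidence inputs to the expensive model trades a small accuracy cost for a large compute saving.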
Hierarchical Online-Scheduling for Energy-Efficient Split Inference with Progressive Transmission
Positive · Artificial Intelligence
A novel framework named ENACHI has been proposed for hierarchical online scheduling in energy-efficient split inference with Deep Neural Networks (DNNs), addressing the inefficiencies in current scheduling methods that fail to optimize both task-level decisions and packet-level dynamics. This framework integrates a two-tier Lyapunov-based approach and progressive transmission techniques to enhance adaptivity and resource utilization.
Sesame Plant Segmentation Dataset: A YOLO Formatted Annotated Dataset
Positive · Artificial Intelligence
A new dataset, the Sesame Plant Segmentation Dataset, has been introduced, featuring 206 training images, 43 validation images, and 43 test images formatted for YOLO segmentation. This dataset focuses on sesame plants at early growth stages, captured under various environmental conditions in Nigeria, and annotated with the Segment Anything Model version 2.
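For context on what "YOLO formatted" segmentation labels look like: each line in a label file holds a class id followed by a normalized polygon, `<class_id> x1 y1 x2 y2 ...` with coordinates in [0, 1]. The parser below is a minimal sketch of that general convention, not code from the dataset's release.

```python
def parse_yolo_seg_line(line):
    """Parse one YOLO-style segmentation label line:
    '<class_id> x1 y1 x2 y2 ...', coordinates normalized to [0, 1].
    Returns the class id and the polygon as (x, y) pairs."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    if len(coords) % 2 != 0:
        raise ValueError("polygon coordinates must come in x/y pairs")
    polygon = list(zip(coords[0::2], coords[1::2]))
    return class_id, polygon

# Example line: class 0 (e.g. a sesame plant) with a 3-point polygon.
cls, poly = parse_yolo_seg_line("0 0.10 0.20 0.50 0.20 0.30 0.80")
```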
The Role of Noisy Data in Improving CNN Robustness for Image Classification
Positive · Artificial Intelligence
A recent study highlights the importance of data quality in enhancing the robustness of convolutional neural networks (CNNs) for image classification, specifically through the introduction of controlled noise during training. Utilizing the CIFAR-10 dataset, the research demonstrates that incorporating just 10% noisy data can significantly reduce test loss and improve accuracy under corrupted conditions without adversely affecting performance on clean data.
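The "10% noisy data" setup above can be sketched as a simple augmentation step: corrupt a fixed fraction of the training images and leave the rest clean. This is an illustrative sketch only; additive Gaussian noise with `sigma=0.1` is an assumption, since the summary does not say which noise model the study used.

```python
import numpy as np

def inject_noise(images, frac=0.10, sigma=0.1, rng=None):
    """Replace a random `frac` of the images with Gaussian-noised
    copies, clipped back to [0, 1]. `sigma` and the Gaussian noise
    model are assumptions, not details from the paper."""
    rng = rng or np.random.default_rng(0)
    out = images.copy()
    n = int(round(frac * len(images)))
    idx = rng.choice(len(images), size=n, replace=False)
    out[idx] = np.clip(out[idx] + rng.normal(0.0, sigma, out[idx].shape), 0.0, 1.0)
    return out, idx

# Toy stand-in for a CIFAR-10 batch: 100 images of 32x32x3 in [0, 1].
batch = np.random.default_rng(1).random((100, 32, 32, 3)).astype(np.float32)
noisy, idx = inject_noise(batch)  # 10 of 100 images are corrupted
```

Training on `noisy` instead of `batch` is the kind of controlled-noise regimen the study evaluates against corrupted test conditions.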
AIMC-Spec: A Benchmark Dataset for Automatic Intrapulse Modulation Classification under Variable Noise Conditions
Neutral · Artificial Intelligence
A new benchmark dataset named AIMC-Spec has been introduced to enhance automatic intrapulse modulation classification (AIMC) in radar signal analysis, particularly under varying noise conditions. This dataset includes 33 modulation types across 13 signal-to-noise ratio levels, addressing a significant gap in standardized datasets for this critical task.
