PRISM: Privacy-preserving Inference System with Homomorphic Encryption and Modular Activation

arXiv — cs.LG · Wednesday, November 12, 2025 at 5:00:00 AM
The PRISM framework represents a significant advancement in the intersection of machine learning and data privacy. As machine learning models become more prevalent in critical infrastructures, concerns about data privacy have escalated, hindering the unrestricted sharing of sensitive information. Homomorphic encryption (HE) offers a potential solution by allowing computations on encrypted data, yet its compatibility with machine learning models, particularly convolutional neural networks (CNNs), has been limited due to the reliance on non-linear activation functions. The proposed PRISM framework addresses this challenge by restructuring the CNN architecture and introducing homomorphically compatible approximations for standard non-linear functions. This innovative approach not only ensures secure computations but also minimizes the computational overhead typically associated with encryption. In experiments conducted on the CIFAR-10 dataset, PRISM achieved an impressive accuracy of 94.4…
— via World Pulse Now AI Editorial System
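
The summary does not spell out PRISM's approximations, but the core idea of an HE-compatible activation can be sketched generically: replace ReLU with a low-degree polynomial fitted on a bounded interval, since HE schemes such as CKKS evaluate only additions and multiplications. The interval, degree, and fitting method below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Least-squares polynomial approximation of ReLU on a bounded interval,
# the kind of low-degree surrogate an HE scheme can evaluate (adds/muls only).
x = np.linspace(-4.0, 4.0, 2001)
relu = np.maximum(x, 0.0)

# Degree-2 fit; low multiplicative depth keeps HE noise growth manageable.
coeffs = np.polyfit(x, relu, deg=2)
poly = np.poly1d(coeffs)

for t in (-2.0, 0.0, 2.0):
    print(f"relu({t:+.1f}) = {max(t, 0.0):.3f}  ~  poly = {poly(t):.3f}")
```
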


Recommended Readings
Enhanced Structured Lasso Pruning with Class-wise Information
Positive · Artificial Intelligence
The paper titled 'Enhanced Structured Lasso Pruning with Class-wise Information' discusses advancements in neural network pruning methods. Traditional pruning techniques often overlook class-wise information, leading to a potential loss of statistical information. This study introduces two new pruning schemes, sparse graph-structured lasso pruning with Information Bottleneck (sGLP-IB) and sparse tree-guided lasso pruning with Information Bottleneck (sTLP-IB), aimed at preserving class-wise statistical information while reducing model complexity.
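
As a rough illustration of the structured-lasso core these schemes build on (the IB-guided graph and tree structures are not detailed in the summary), a group-lasso penalty with one group per convolutional filter looks like this; all names and the penalty weight are illustrative:

```python
import torch
import torch.nn as nn

# Generic structured (group) lasso penalty over conv filters: each output
# filter is one group, so the penalty drives whole filters to zero and the
# pruned network remains dense-hardware friendly.
def group_lasso(conv: nn.Conv2d) -> torch.Tensor:
    # weight shape: (out_channels, in_channels, kH, kW); one group per filter
    return conv.weight.flatten(1).norm(dim=1).sum()

conv = nn.Conv2d(16, 32, kernel_size=3)
loss = 1e-4 * group_lasso(conv)   # added to the task loss during training
loss.backward()
print(loss.item())
```
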
Robust inverse material design with physical guarantees using the Voigt-Reuss Net
Positive · Artificial Intelligence
A new method for mechanical homogenization has been proposed, utilizing a spectrally normalized surrogate that incorporates physical guarantees. This approach leverages the Voigt-Reuss bounds and employs a Cholesky-like operator to create a symmetric positive semi-definite representation. The method has been tested on a dataset of stochastic biphasic microstructures, achieving near-perfect fidelity in isotropic projections with R² values exceeding 0.998. The median relative Frobenius error was approximately 1.7%.
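
The two ingredients named here can be sketched independently: the scalar Voigt/Reuss bounds for a biphasic material, and the Cholesky-like construction that makes any predicted stiffness symmetric positive semi-definite. This is a minimal sketch under those assumptions, not the paper's surrogate:

```python
import numpy as np

# Voigt (parallel) and Reuss (series) bounds for a biphasic composite with
# phase moduli E1, E2 and volume fraction f of phase 1; any admissible
# effective modulus must lie between them.
def voigt(E1, E2, f):  return f * E1 + (1 - f) * E2
def reuss(E1, E2, f):  return 1.0 / (f / E1 + (1 - f) / E2)

E1, E2, f = 10.0, 1.0, 0.4
print(f"Reuss {reuss(E1, E2, f):.3f} <= E_eff <= Voigt {voigt(E1, E2, f):.3f}")

# Cholesky-like trick: any matrix of the form L @ L.T is symmetric positive
# semi-definite, so a network predicting L always emits an admissible tensor.
L = np.tril(np.random.randn(6, 6))        # e.g. a 6x6 stiffness in Voigt form
C = L @ L.T
print(np.all(np.linalg.eigvalsh(C) >= -1e-12))
```
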
Neural Network-Powered Finger-Drawn Biometric Authentication
Positive · Artificial Intelligence
A recent study published on arXiv investigates the use of neural networks for biometric authentication through finger-drawn digits on touchscreen devices. The research involved twenty participants who contributed a total of 2,000 finger-drawn digits. Two CNN architectures were evaluated, achieving approximately 89% authentication accuracy, while autoencoder approaches reached about 75% accuracy. The findings suggest that this method offers a secure and user-friendly biometric solution that can be integrated with existing authentication systems.
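
A minimal sketch of the kind of CNN verifier such a study evaluates; the layer sizes, the 28x28 rasterized input, and the binary genuine/impostor head are assumptions, since the summary does not specify the two architectures:

```python
import torch
import torch.nn as nn

# Illustrative CNN: input is a rasterized finger-drawn digit, output a
# genuine-user-vs-impostor score. Sizes are assumptions, not the paper's.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 64), nn.ReLU(),
    nn.Linear(64, 1),              # logit: genuine user vs. impostor
)
x = torch.randn(8, 1, 28, 28)      # batch of 28x28 rasterized strokes
print(torch.sigmoid(model(x)).shape)   # (8, 1) acceptance probabilities
```
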
AMUN: Adversarial Machine UNlearning
Positive · Artificial Intelligence
The paper titled 'AMUN: Adversarial Machine UNlearning' discusses a novel method for machine unlearning, which lets users request deletion of specific training data to comply with privacy regulations. Exact unlearning methods require significant computational resources, while approximate methods have not achieved satisfactory accuracy. The proposed Adversarial Machine UNlearning (AMUN) technique performs unlearning by fine-tuning the model on adversarial examples of the forget set, effectively reducing model confidence on forgotten samples while maintaining accuracy on test datasets.
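
A hedged sketch of that mechanism, using FGSM as a stand-in adversarial attack; AMUN's actual attack, labeling rule, and objective may differ from what the summary lets us reconstruct:

```python
import torch
import torch.nn.functional as F

# One FGSM step: perturb inputs in the direction that increases the loss.
def fgsm(model, x, y, eps=0.03):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Fine-tune on adversarial versions of forget-set samples with the labels
# the model assigns to them, lowering confidence on the clean originals.
def unlearn_step(model, opt, x_forget, y_forget):
    x_adv = fgsm(model, x_forget, y_forget)
    y_adv = model(x_adv).argmax(dim=1)       # labels the model now prefers
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y_adv).backward()
    opt.step()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
unlearn_step(model, opt, torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,)))
```
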
Orthogonal Soft Pruning for Efficient Class Unlearning
Positive · Artificial Intelligence
The article discusses FedOrtho, a federated unlearning framework designed to enhance data unlearning in federated learning environments. It addresses the challenges of balancing forgetting and retention, particularly in non-IID settings. FedOrtho employs orthogonalized deep convolutional kernels and a one-shot soft pruning mechanism, achieving state-of-the-art performance on datasets like CIFAR-10 and TinyImageNet, with over 98% forgetting quality and 97% retention accuracy.
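
The summary names two mechanisms; the orthogonalization of conv kernels can be sketched as a soft penalty that pushes the filters' Gram matrix toward the identity, which keeps filters decorrelated and makes redundant ones easier to soft-prune. This generic formulation is an assumption, not FedOrtho's exact loss:

```python
import torch
import torch.nn as nn

# Soft orthogonality penalty on flattened conv kernels: drives W @ W.T
# toward the identity so filters stay decorrelated.
def orthogonality_penalty(conv: nn.Conv2d) -> torch.Tensor:
    W = conv.weight.flatten(1)                  # (out_channels, fan_in)
    gram = W @ W.t()
    eye = torch.eye(gram.size(0), device=W.device)
    return (gram - eye).pow(2).sum()

conv = nn.Conv2d(3, 8, kernel_size=3)
print(orthogonality_penalty(conv).item())
```
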
CNN-Enabled Scheduling for Probabilistic Real-Time Guarantees in Industrial URLLC
Positive · Artificial Intelligence
The article discusses an enhancement to the Local Deadline Partition (LDP) algorithm for ultra-reliable, low-latency communications (URLLC) in industrial wireless networks. A Convolutional Neural Network (CNN) is introduced to dynamically predict link priorities, improving interference coordination across multi-cell, multi-channel networks. The proposed method shows significant gains in Signal-to-Interference-plus-Noise Ratio (SINR), achieving up to 113%, 94%, and 49% improvements in different network configurations, thus enhancing resource allocation and network capacity.
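
The reported gains are in SINR, which is straightforward bookkeeping; the toy sketch below shows how muting one interfering link (standing in here for a CNN-predicted priority decision) moves the figure. All power values are illustrative:

```python
import numpy as np

# SINR in dB: signal power over the sum of interference plus noise.
def sinr_db(signal_mw, interferers_mw, noise_mw=1e-9):
    return 10 * np.log10(signal_mw / (sum(interferers_mw) + noise_mw))

before = sinr_db(1e-6, [4e-7, 2e-7])   # two active interferers
after = sinr_db(1e-6, [2e-7])          # coordination mutes one of them
print(f"{before:.1f} dB -> {after:.1f} dB")
```
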
YCB-Ev SD: Synthetic event-vision dataset for 6DoF object pose estimation
Positive · Artificial Intelligence
The YCB-Ev SD dataset has been introduced as a synthetic collection of event-camera data aimed at enhancing 6DoF object pose estimation. Comprising 50,000 event sequences, each lasting 34 ms, the dataset is generated from Physically Based Rendering (PBR) scenes of YCB-Video objects. This initiative addresses the lack of comprehensive resources in event-based vision, employing a methodology aligned with the Benchmark for 6D Object Pose (BOP) to improve pose estimation performance through advanced encoding techniques.
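
Event sequences must be encoded into fixed-size tensors before a CNN-based pose estimator can consume them; the paper's specific encodings are not given in the summary, so a simple two-channel polarity histogram stands in below. The sensor resolution and all names are assumptions:

```python
import numpy as np

# Bin a 34 ms event burst into a two-channel (off/on polarity) histogram,
# a common fixed-size encoding of event-camera data for CNN input.
def events_to_histogram(x, y, polarity, H=260, W=346):
    frame = np.zeros((2, H, W), dtype=np.float32)   # channel 0: off, 1: on
    np.add.at(frame, (polarity.astype(int), y, x), 1.0)
    return frame

n = 10_000                                           # synthetic event burst
x = np.random.randint(0, 346, n)
y = np.random.randint(0, 260, n)
p = np.random.randint(0, 2, n)
print(events_to_histogram(x, y, p).sum())            # == n events binned
```
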
On the Necessity of Output Distribution Reweighting for Effective Class Unlearning
Positive · Artificial Intelligence
The paper titled 'On the Necessity of Output Distribution Reweighting for Effective Class Unlearning' identifies a critical flaw in class unlearning evaluations, specifically the neglect of class geometry, which can lead to privacy breaches. It introduces a membership-inference attack via nearest neighbors (MIA-NN) to identify unlearned samples. The authors propose a new fine-tuning objective that adjusts the model's output distribution to mitigate privacy risks, demonstrating that existing unlearning methods are susceptible to MIA-NN across various datasets.
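
A rough sketch of a nearest-neighbour membership score in the spirit of MIA-NN; the attack's real statistic and feature space are not specified in the summary, so everything here, including the k-NN distance score, is illustrative:

```python
import numpy as np

# Score a candidate by its mean distance to its k nearest reference points
# in feature space; an unusually low score suggests the sample (or a close
# neighbour of it) was seen in training, flagging weak unlearning.
def mia_nn_score(candidate_feat, reference_feats, k=5):
    d = np.linalg.norm(reference_feats - candidate_feat, axis=1)
    return np.sort(d)[:k].mean()      # low score -> likely member

rng = np.random.default_rng(0)
refs = rng.normal(size=(1000, 64))               # retained-set features
member = refs[0] + 0.01 * rng.normal(size=64)    # near a training point
nonmember = rng.normal(size=64)
print(mia_nn_score(member, refs), mia_nn_score(nonmember, refs))
```
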