D2M: A Decentralized, Privacy-Preserving, Incentive-Compatible Data Marketplace for Collaborative Learning

arXiv — cs.LG · Friday, December 12, 2025 at 5:00:00 AM
  • A decentralized data marketplace named D2M has been introduced, aiming to enhance collaborative machine learning by integrating federated learning, blockchain arbitration, and economic incentives into a single framework. This approach addresses the limitations of existing methods, such as the reliance on trusted aggregators in federated learning and the computational challenges faced by blockchain systems.
  • The significance of D2M lies in its potential to facilitate secure and privacy-preserving data sharing, enabling data buyers to engage in bid-based requests through blockchain smart contracts. This innovation could transform how data is utilized in machine learning, promoting a more collaborative and efficient ecosystem.
  • The development of D2M reflects a growing trend towards decentralized solutions in the AI field, particularly as concerns around data privacy and security intensify. This aligns with ongoing research into enhancing model robustness and addressing issues like class uncertainty and noisy labels, indicating a broader shift towards more resilient and privacy-focused machine learning frameworks.
— via World Pulse Now AI Editorial System
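The bid-based request flow described above can be sketched as a plain-Python analogue of an on-chain contract. This is a hypothetical illustration: the class and method names (`DataRequest`, `place_bid`, `settle`) are invented for exposition and are not D2M's actual smart-contract interface.

```python
class DataRequest:
    """Toy analogue of an escrowed, bid-based data request posted
    on-chain by a buyer, as in the D2M framework's description."""

    def __init__(self, buyer, budget):
        self.buyer = buyer
        self.escrow = budget      # funds locked when the request is posted
        self.bids = {}            # seller -> asking price
        self.accepted = None

    def place_bid(self, seller, price):
        self.bids[seller] = price

    def accept_lowest(self):
        # Buyer (or arbitration logic) accepts the cheapest bid.
        self.accepted = min(self.bids, key=self.bids.get)
        return self.accepted

    def settle(self, update_valid):
        # Arbitration: release escrow only if the contributed
        # update passes validation; otherwise refund the buyer.
        if self.accepted is None:
            raise RuntimeError("no accepted bid")
        price = self.bids[self.accepted]
        payout, refund = (price, self.escrow - price) if update_valid else (0, self.escrow)
        self.escrow = 0
        return payout, refund
```

In an actual deployment this logic would live in a smart contract so that settlement is enforced by the chain rather than by any single party.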


Continue Reading
Sample-wise Adaptive Weighting for Transfer Consistency in Adversarial Distillation
Positive · Artificial Intelligence
A new approach called Sample-wise Adaptive Adversarial Distillation (SAAD) has been proposed to enhance adversarial robustness in neural networks by reweighting training examples based on their transferability. This method addresses the issue of robust saturation, where stronger teacher networks do not necessarily lead to more robust student networks, and aims to improve the effectiveness of adversarial training without incurring additional computational costs.
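The per-sample reweighting idea can be illustrated with a toy softmax weighting over a transferability signal. This is a speculative sketch: the gap-based score and temperature `tau` are stand-ins invented here, not SAAD's actual weighting rule.

```python
import math

def adaptive_weights(teacher_margins, student_margins, tau=1.0):
    """Toy sample-wise weighting: examples where the teacher's robust
    margin transfers poorly to the student receive larger weight,
    so training focuses on poorly-transferring samples."""
    gaps = [max(t - s, 0.0) for t, s in zip(teacher_margins, student_margins)]
    exps = [math.exp(g / tau) for g in gaps]
    z = sum(exps)
    return [e / z for e in exps]  # normalized per-sample weights
```

Because the weights come from quantities already computed during distillation, a scheme like this adds essentially no extra forward or backward passes, consistent with the no-added-cost claim above.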
AEBNAS: Strengthening Exit Branches in Early-Exit Networks through Hardware-Aware Neural Architecture Search
Positive · Artificial Intelligence
AEBNAS introduces a hardware-aware Neural Architecture Search (NAS) framework designed to enhance early-exit networks, which optimize energy consumption and latency in deep learning models by allowing for intermediate exit branches based on input complexity. This approach aims to balance efficiency and performance, particularly for resource-constrained devices.
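The early-exit mechanism the search optimizes can be sketched as confidence-thresholded inference: easy inputs leave at a shallow branch, hard ones continue deeper. The function below is a generic illustration of early-exit networks, not AEBNAS's searched architecture.

```python
def early_exit_predict(x, branches, threshold=0.9):
    """Run exit branches shallow-to-deep; stop at the first branch
    whose confidence clears the threshold. Each branch maps an input
    to a (label, confidence) pair."""
    for depth, branch in enumerate(branches):
        label, conf = branch(x)
        if conf >= threshold:
            return label, depth      # early exit: saves compute
    return label, depth              # fall through to the final exit
```

A NAS framework like AEBNAS would then tune the number, placement, and structure of these branches against hardware latency and energy measurements.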
Empirical evaluation of the Frank-Wolfe methods for constructing white-box adversarial attacks
Neutral · Artificial Intelligence
The empirical evaluation of Frank-Wolfe methods for constructing white-box adversarial attacks highlights the need for efficient attack construction in neural networks, with a focus on numerical optimization techniques. The study applies modified Frank-Wolfe methods to build white-box attacks and evaluates them on datasets such as MNIST and CIFAR-10, informing efforts to harden neural networks against adversarial threats.
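The appeal of Frank-Wolfe here is that its linear maximization oracle over an L∞ perturbation ball is closed-form (a corner of the ball), so each step is projection-free. A minimal sketch of the vanilla method, assuming a caller-supplied loss gradient; the paper evaluates modified variants beyond this baseline:

```python
def frank_wolfe_linf_attack(x0, grad_fn, eps, steps=10):
    """Frank-Wolfe ascent of a loss over the L-infinity ball of
    radius eps around x0. grad_fn(x) returns the loss gradient."""
    x = list(x0)
    for t in range(steps):
        g = grad_fn(x)
        # Linear maximization oracle: the ball corner aligned with g.
        s = [xi0 + eps * (1.0 if gi > 0 else -1.0)
             for xi0, gi in zip(x0, g)]
        gamma = 2.0 / (t + 2.0)   # standard open-loop step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x
```

Every iterate is a convex combination of feasible points, so the perturbation never leaves the eps-ball without any projection step.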
LiePrune: Lie Group and Quantum Geometric Dual Representation for One-Shot Structured Pruning of Quantum Neural Networks
Positive · Artificial Intelligence
A new framework named LiePrune has been introduced for one-shot structured pruning of Quantum Neural Networks (QNNs), utilizing Lie group structures and quantum geometric information to enhance scalability and performance. This innovative approach allows for significant parameter reduction while maintaining or improving task performance across various quantum applications, including classification and generative modeling.
Entropy-Informed Weighting Channel Normalizing Flow for Deep Generative Models
Positive · Artificial Intelligence
A new approach called Entropy-Informed Weighting Channel Normalizing Flow (EIW-Flow) has been introduced to enhance Normalizing Flows (NFs) in deep generative models. This method incorporates a regularized, feature-dependent Shuffle operation that adaptively generates channel-wise weights and shuffles latent variables, improving the expressiveness of multi-scale architectures while guiding variable evolution towards increased entropy.
DeDe Protocol: Trustless Settlement Layer for Physical Delivery
Positive · Artificial Intelligence
DeDe Protocol has been introduced as a minimal, Ethereum-based solution designed for trustless physical delivery, enabling the creation of peer-to-peer delivery networks without the need for central control or intermediaries. This protocol focuses on providing a programmable settlement layer that facilitates delivery confirmation and escrow services, addressing gaps in existing systems like Brazil's Pix, which offers instant payments but lacks these features.
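The settlement layer described above amounts to an escrow state machine: funds lock when a delivery is created and release only on the recipient's confirmation. The class below is an off-chain illustration of that pattern under assumed semantics, not DeDe's actual Ethereum contract.

```python
class DeliveryEscrow:
    """Toy analogue of a trustless delivery escrow: payment is locked
    on creation, released to the courier on confirmed delivery, and
    refundable to the sender while still unconfirmed."""

    def __init__(self, payment):
        self.payment = payment
        self.state = "FUNDED"

    def confirm_delivery(self):
        # Recipient's confirmation triggers settlement to the courier.
        if self.state != "FUNDED":
            raise RuntimeError("escrow already closed")
        self.state = "SETTLED"
        return self.payment

    def cancel(self):
        # Unconfirmed deliveries can be refunded to the sender.
        if self.state != "FUNDED":
            raise RuntimeError("escrow already closed")
        self.state = "REFUNDED"
        return self.payment
```

On-chain, the one-shot state transitions would be enforced by the contract itself, which is what removes the need for a trusted intermediary.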
Discovering Influential Factors in Variational Autoencoders
Neutral · Artificial Intelligence
A recent study has focused on the influential factors extracted by variational autoencoders (VAEs), highlighting the challenge of supervising learned representations without manual intervention. The research emphasizes the role of mutual information between inputs and learned factors as a key indicator for identifying influential factors, revealing that some factors may be non-influential and can be disregarded in data reconstruction.
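The mutual-information criterion can be illustrated with a plug-in estimator over discretized codes: a latent factor carrying no information about the inputs scores near zero and can be disregarded. This is a generic histogram-based MI estimate for exposition, not the estimator used in the study.

```python
import math
from collections import Counter

def mutual_information(xs, zs):
    """Plug-in estimate (in nats) of the mutual information between
    two discrete sequences, e.g. binned inputs xs and binned values
    of one learned latent factor zs."""
    n = len(xs)
    px, pz = Counter(xs), Counter(zs)
    pxz = Counter(zip(xs, zs))
    mi = 0.0
    for (x, z), c in pxz.items():
        p = c / n
        # p(x,z) * log( p(x,z) / (p(x) p(z)) )
        mi += p * math.log(p * n * n / (px[x] * pz[z]))
    return mi
```

Ranking latent dimensions by such a score gives an unsupervised way to separate influential factors from ones the decoder can safely ignore.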
Nonlinear Optimization with GPU-Accelerated Neural Network Constraints
Neutral · Artificial Intelligence
A new reduced-space formulation for optimizing trained neural networks has been proposed, which evaluates the network's outputs and derivatives on a GPU. This method treats the neural network as a 'gray box,' leading to faster solves and fewer iterations compared to traditional full-space formulations. The approach has been demonstrated on two optimization problems, including adversarial generation for a classifier trained on MNIST images.
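The reduced-space idea can be shown in miniature: only the decision variables stay in the optimizer, and the trained network is queried as a gray box for outputs and derivatives (on a GPU in the paper's setting). The scalar example below, with an invented `net`/`net_grad` interface, is a sketch of the formulation rather than the paper's solver.

```python
def reduced_space_minimize(x0, net, net_grad, target, lr=0.02, steps=500):
    """Minimize (net(x) - target)**2 over x alone. The network's
    internal activations never enter the optimization variables;
    net and net_grad are evaluated externally as a gray box."""
    x = x0
    for _ in range(steps):
        r = net(x) - target          # gray-box forward evaluation
        x -= lr * 2.0 * r * net_grad(x)  # chain rule via gray-box derivative
    return x
```

Compared with a full-space formulation that exposes every layer's equations to the solver, this keeps the problem small, which is what drives the reported speedups and iteration savings.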
