From Black-Box Tuning to Guided Optimization via Hyperparameters Interaction Analysis

arXiv — cs.LG · Tuesday, December 23, 2025
  • A new method called MetaSHAP has been introduced to enhance hyperparameter tuning in machine learning models, using meta-learning and Shapley value analysis to provide insights into hyperparameter interactions and their importance. This semi-automated approach is built on a benchmark of more than 9 million evaluated machine learning pipelines, offering actionable insights for model optimization.
  • The development of MetaSHAP is significant as it addresses the computational challenges associated with hyperparameter tuning, enabling more efficient model development and potentially improving the performance of machine learning applications across various domains.
  • This advancement in hyperparameter optimization aligns with ongoing efforts to refine Bayesian optimization techniques, which have been challenged by high-dimensional spaces and the need for more interpretable AI methods. The integration of Shapley values into this context highlights a growing emphasis on explainability in AI, as researchers seek to understand not only the performance of models but also the underlying factors that contribute to their success.
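To make the Shapley-value idea concrete, the sketch below computes exact Shapley values for a small toy "game" in which the players are hyperparameters and the payoff of a coalition is the performance gained by tuning only those hyperparameters. This is an illustration of the general concept, not MetaSHAP's actual pipeline; the player names and the additive payoff function are assumptions for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a small player set.

    `value` maps a frozenset of players to a payoff, e.g. the validation
    gain from tuning exactly those hyperparameters (illustrative only).
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Shapley kernel weight: |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value(s | {p}) - value(s))
    return phi

# Toy additive game (assumed numbers): each hyperparameter contributes
# a fixed performance gain, so Shapley values recover the gains exactly.
base = {"lr": 0.05, "depth": 0.03, "l2": 0.01}
v = lambda s: sum(base[p] for p in s)
phi = shapley_values(["lr", "depth", "l2"], v)
```

For an additive game like this one, each player's Shapley value equals its individual contribution; interactions between hyperparameters would show up as deviations from that baseline.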
— via World Pulse Now AI Editorial System


Continue Reading
GADPN: Graph Adaptive Denoising and Perturbation Networks via Singular Value Decomposition
Positive · Artificial Intelligence
A new framework named GADPN has been proposed to enhance Graph Neural Networks (GNNs) by refining graph topology through low-rank denoising and generalized structural perturbation, addressing issues of noise and missing links in graph-structured data.
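The "low-rank denoising" step mentioned above is, in its generic form, a truncated SVD of the (noisy) adjacency or feature matrix. The sketch below shows that generic operation on a synthetic matrix; it is an assumption-laden illustration and does not reproduce GADPN's full perturbation scheme.

```python
import numpy as np

def low_rank_denoise(A, rank):
    """Best rank-`rank` approximation of A via truncated SVD
    (the generic low-rank denoising step; illustrative only)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the top-`rank` singular triplets.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Synthetic example: a rank-1 "clean" structure plus small noise.
rng = np.random.default_rng(0)
A = np.ones((6, 6)) + 0.05 * rng.standard_normal((6, 6))
A1 = low_rank_denoise(A, rank=1)
```

By the Eckart–Young theorem, the truncated SVD is the closest rank-k matrix in Frobenius norm, which is why it is a natural denoiser when the underlying graph structure is approximately low-rank.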
ROSS: RObust decentralized Stochastic learning based on Shapley values
Positive · Artificial Intelligence
A new decentralized learning algorithm named ROSS has been proposed, which utilizes Shapley values to enhance the robustness of stochastic learning among agents. This approach addresses challenges posed by heterogeneous data distributions, allowing agents to collaboratively learn a global model without a central server. Each agent updates its model by aggregating cross-gradient information from neighboring agents, weighted by their contributions.
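The update described above, stripped of the Shapley-value machinery, is a contribution-weighted mix of neighbours' gradients combined with the agent's own. The sketch below shows that general shape; the contribution scores and the 50/50 self-vs-neighbour mixing are assumptions for illustration, not ROSS's actual weighting rule.

```python
import numpy as np

def aggregate_cross_gradients(own_grad, neighbor_grads, contributions):
    """Mix neighbours' gradients by (normalised) contribution scores,
    then blend with the agent's own gradient. The scores stand in for
    the Shapley-value contributions ROSS computes (assumed here)."""
    w = np.asarray(contributions, dtype=float)
    w = w / w.sum()  # normalise scores into mixing weights
    mixed = sum(wi * g for wi, g in zip(w, neighbor_grads))
    # Equal trust in self vs. neighbourhood -- an assumption, not the paper's choice.
    return 0.5 * own_grad + 0.5 * mixed

own = np.array([1.0, 0.0])
neighbors = [np.array([0.0, 2.0]), np.array([0.0, 4.0])]
update = aggregate_cross_gradients(own, neighbors, contributions=[1.0, 3.0])
```

Weighting by contribution rather than uniformly is what gives such schemes robustness: an agent whose gradients add little (or are adversarial) is down-weighted in the mix.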
Regression-adjusted Monte Carlo Estimators for Shapley Values and Probabilistic Values
Positive · Artificial Intelligence
A new study introduces regression-adjusted Monte Carlo estimators for calculating Shapley values and probabilistic values, enhancing the efficiency of these computations in explainable AI. This method integrates Monte Carlo sampling with linear regression, allowing for the use of various function families, including tree-based models like XGBoost, to produce unbiased estimates.
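The sketch below illustrates the control-variate idea behind such regression adjustment: run the same permutation-sampling Monte Carlo on the true value function and on a cheap additive surrogate whose exact Shapley values are known, then correct the estimate by the surrogate's error. The surrogate here is hand-specified rather than fitted by regression, and the toy value function is an assumption; the paper's estimators and function families (e.g. XGBoost surrogates) are more general.

```python
import random

def mc_shapley(players, value, n_perm=500, seed=0):
    """Plain permutation-sampling Shapley estimator (the baseline the
    regression adjustment improves on)."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perm):
        perm = list(players)
        rng.shuffle(perm)
        s, prev = set(), value(frozenset())
        for p in perm:
            s.add(p)
            cur = value(frozenset(s))
            phi[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: total / n_perm for p, total in phi.items()}

def adjusted_shapley(players, value, weights, **kw):
    """Control-variate correction: estimate on v and on an additive
    surrogate g (exact Shapley values = its per-player weights) with the
    same permutations, then add back g's exact values."""
    g = lambda s: sum(weights[p] for p in s)
    est_v = mc_shapley(players, value, **kw)
    est_g = mc_shapley(players, g, **kw)  # same seed -> same permutations
    return {p: est_v[p] - est_g[p] + weights[p] for p in players}

# Toy game (assumed numbers): additive gains plus one pairwise interaction.
weights = {"lr": 0.05, "depth": 0.03, "l2": 0.01}
def v(s):
    return sum(weights[p] for p in s) + (0.1 if {"lr", "depth"} <= s else 0.0)

phi = adjusted_shapley(list(weights), v, weights, n_perm=500, seed=0)
```

Because the surrogate absorbs the additive part of the game exactly, only the interaction term contributes sampling noise, which is the variance-reduction effect the regression adjustment targets.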
The Kernel Manifold: A Geometric Approach to Gaussian Process Model Selection
Neutral · Artificial Intelligence
A new framework for Gaussian Process (GP) model selection, titled 'The Kernel Manifold', has been introduced, emphasizing a geometric approach to optimize the choice of covariance kernels. This method utilizes a Bayesian optimization framework based on kernel-of-kernels geometry, allowing for efficient exploration of kernel space through expected divergence-based distances.
