High-Throughput Unsupervised Profiling of the Morphology of 316L Powder Particles for Use in Additive Manufacturing

arXiv · cs.CV · Wednesday, December 10, 2025 at 5:00:00 AM
  • A new automated machine learning framework profiles the morphology of 316L powder particles used in Selective Laser Melting (SLM) for additive manufacturing. The approach combines high-throughput imaging, shape extraction, and clustering to analyze approximately 126,000 powder images, substantially accelerating characterization compared with traditional manual methods (a rough sketch of the shape-and-cluster step follows this summary).
  • This advancement is crucial for improving the quality of parts produced through SLM, as the morphology of the feedstock directly impacts the final product's performance. The framework's efficiency allows for rapid analysis, which is essential for industrial-scale applications.
  • The integration of machine learning into materials characterization reflects a broader cross-industry shift toward data-driven methods for improving product quality and operational efficiency, underscoring the growing role of advanced analytics in manufacturing.
— via World Pulse Now AI Editorial System
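
The paper's pipeline is not reproduced in this digest, but the shape-extraction-plus-clustering step it describes can be sketched. The snippet below is a minimal illustration, assuming particles image brighter than the background and inventing a small descriptor set (area, circularity, aspect ratio, solidity); `image_paths` is a hypothetical placeholder list of micrograph files, not the paper's dataset.

```python
# Minimal sketch: extract shape descriptors per particle, then cluster.
# Descriptor choices are illustrative, not the paper's exact features.
import numpy as np
from skimage import filters, io, measure
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def particle_descriptors(path):
    img = io.imread(path, as_gray=True)
    # Assumes particles are brighter than background; flip if not.
    mask = img > filters.threshold_otsu(img)
    feats = []
    for region in measure.regionprops(measure.label(mask)):
        if region.area < 20:            # drop noise specks
            continue
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        aspect = region.minor_axis_length / (region.major_axis_length + 1e-9)
        feats.append([region.area, circularity, aspect, region.solidity])
    return feats

# `image_paths` is a placeholder for a list of micrograph file paths.
X = np.array([f for p in image_paths for f in particle_descriptors(p)])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
```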

Continue Reading
Harnessing AI to solve major roadblock in solid-state battery technology
Positive · Artificial Intelligence
Researchers at Edith Cowan University are leveraging artificial intelligence (AI) and machine learning to enhance the reliability of solid-state batteries, addressing a significant challenge in battery technology. This initiative aims to improve performance and safety in energy storage solutions.
Unsupervised Learning of Density Estimates with Topological Optimization
Neutral · Artificial Intelligence
A new paper has been published on arXiv detailing an unsupervised learning approach for density estimation using a topology-based loss function. This method aims to automate the selection of the optimal kernel bandwidth, a critical hyperparameter that influences the bias-variance trade-off in density estimation, particularly in high-dimensional data where visualization is challenging.
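
The paper's topology-based loss is not detailed in this blurb; for context, the conventional way to pick the same hyperparameter is cross-validated log-likelihood, sketched here with scikit-learn.

```python
# Baseline bandwidth selection for kernel density estimation via
# cross-validated log-likelihood (not the paper's topological loss).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                 # toy 2-D sample

search = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.logspace(-1, 0.5, 20)},  # candidate bandwidths
    cv=5,                                     # score = held-out log-likelihood
)
search.fit(X)
print("selected bandwidth:", search.best_params_["bandwidth"])
```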
Predicting California Bearing Ratio with Ensemble and Neural Network Models: A Case Study from Türkiye
Positive · Artificial Intelligence
A study has introduced a machine learning framework for predicting the California Bearing Ratio (CBR) using a dataset of 382 soil samples from various geoclimatic regions of Türkiye. This approach aims to improve the accuracy and efficiency of CBR determination, which is crucial for assessing the load-bearing capacity of subgrade soils in infrastructure projects.
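
The study's exact features and models are not listed here; a generic ensemble baseline for this kind of tabular regression might look like the sketch below, where the file name and soil-property columns are hypothetical stand-ins.

```python
# Illustrative ensemble baseline for CBR regression on tabular soil data.
# File name and column names are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("soil_samples.csv")
X = df[["fines_pct", "sand_pct", "liquid_limit", "plastic_limit",
        "max_dry_density"]]
y = df["cbr"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```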
Reading the immune clock: a machine learning model predicts mouse immune age from cellular patterns
Neutral · Artificial Intelligence
A recent study published in Nature — Machine Learning presents a machine learning model capable of predicting the immune age of mice based on cellular patterns. This innovative approach leverages complex data analysis to enhance understanding of immune system aging, potentially leading to advancements in immunology and age-related research.
IFFair: Influence Function-driven Sample Reweighting for Fair Classification
Positive · Artificial Intelligence
A new method called IFFair has been proposed to address biases in machine learning, which can lead to discriminatory outcomes against unprivileged groups. This pre-processing technique utilizes influence functions to dynamically adjust sample weights during training, aiming to enhance fairness without altering the underlying model structure or data features.
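
The summary does not spell out IFFair's computation, but the general influence-function reweighting idea can be sketched for logistic regression: estimate each training sample's effect on a demographic-parity gap via the classic influence formula, downweight gap-increasing samples, and retrain. This is an illustration of the idea on synthetic data, not the authors' algorithm.

```python
# Sketch: influence-driven sample reweighting for fairness (not IFFair's
# exact method). Influence of sample i on a statistic S(theta) is
# approximately -grad(S)^T H^{-1} grad(loss_i) for the trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
g = (rng.random(n) < 0.3).astype(int)        # protected group indicator
y = (X[:, 0] + 0.8 * g + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression(C=1.0, fit_intercept=False).fit(X, y)
theta = clf.coef_.ravel()
p = sigmoid(X @ theta)

# Hessian of the average log-loss, with a small ridge for stability
# (the sketch ignores sklearn's exact regularization scaling).
H = (X.T * (p * (1 - p))) @ X / n + np.eye(d) / n

# Gradient of the demographic-parity gap (mean score difference by group).
w_sig = p * (1 - p)
grad_gap = (X[g == 1].T @ w_sig[g == 1] / max(g.sum(), 1)
            - X[g == 0].T @ w_sig[g == 0] / max((1 - g).sum(), 1))

grad_loss = X * (p - y)[:, None]             # per-sample loss gradients
infl = -grad_loss @ np.linalg.solve(H, grad_gap)

# Downweight samples whose presence increases the gap, then retrain.
w = np.clip(1.0 - infl / (np.abs(infl).max() + 1e-12), 0.1, 2.0)
clf_fair = LogisticRegression(C=1.0, fit_intercept=False).fit(
    X, y, sample_weight=w)
```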
Efficient Low-Tubal-Rank Tensor Estimation via Alternating Preconditioned Gradient Descent
Neutral · Artificial Intelligence
The recent publication introduces an Alternating Preconditioned Gradient Descent (APGD) algorithm aimed at enhancing low-tubal-rank tensor estimation, a crucial task in high-dimensional signal processing and machine learning. Traditional methods, reliant on tensor singular value decomposition, are computationally intensive and impractical for large tensors, prompting the need for more efficient solutions.
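
The APGD algorithm itself is not reproduced here; the sketch below illustrates the underlying mechanics under simplifying assumptions (a fully observed tensor of exact tubal rank): the t-product computed by FFT along the third mode, with gradient steps on two factors, each preconditioned by the other factor's regularized Gram inverse.

```python
# Sketch: alternating preconditioned gradient descent for low-tubal-rank
# factorization X ~ A * B (t-product), assuming X is fully observed.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, r = 30, 25, 8, 3

def tprod(A, B):
    # t-product: FFT along mode 3, slice-wise matrix multiply, inverse FFT.
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ijk,jlk->ilk', Af, Bf), axis=2))

# Target with exact tubal rank r (a simplifying assumption).
X = tprod(rng.standard_normal((n1, r, n3)), rng.standard_normal((r, n2, n3)))

A = 0.1 * rng.standard_normal((n1, r, n3))
B = 0.1 * rng.standard_normal((r, n2, n3))
eta, eps = 0.5, 1e-6
for _ in range(100):
    Af, Bf, Xf = (np.fft.fft(T, axis=2) for T in (A, B, X))
    Rf = np.einsum('ijk,jlk->ilk', Af, Bf) - Xf       # residual per frequency
    for k in range(n3):
        Ak, Bk = Af[:, :, k].copy(), Bf[:, :, k].copy()
        # Precondition each gradient by the other factor's regularized
        # Gram inverse, which equalizes step sizes across directions.
        PA = np.linalg.inv(Bk @ Bk.conj().T + eps * np.eye(r))
        PB = np.linalg.inv(Ak.conj().T @ Ak + eps * np.eye(r))
        Af[:, :, k] = Ak - eta * Rf[:, :, k] @ Bk.conj().T @ PA
        Bf[:, :, k] = Bk - eta * PB @ (Ak.conj().T @ Rf[:, :, k])
    A = np.real(np.fft.ifft(Af, axis=2))
    B = np.real(np.fft.ifft(Bf, axis=2))

print("relative error:", np.linalg.norm(tprod(A, B) - X) / np.linalg.norm(X))
```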
GPU-GLMB: Assessing the Scalability of GPU-Accelerated Multi-Hypothesis Tracking
Neutral · Artificial Intelligence
Recent research has focused on the scalability of GPU-accelerated multi-hypothesis tracking, particularly through the Generalized Labeled Multi-Bernoulli (GLMB) filter, which allows for multiple detections per object. This method addresses the computational challenges associated with maintaining multiple hypotheses in multi-target tracking systems, especially in distributed networks of machine learning-based virtual sensors.
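
No implementation details appear in this digest; much of the cost in GLMB-style trackers comes from ranking and truncating association hypotheses, and that truncation kernel vectorizes naturally for a GPU. A minimal NumPy sketch follows (with CuPy, the same code runs on a GPU by swapping the array module).

```python
# Hypothesis truncation, a cost center of GLMB-style multi-hypothesis
# tracking: keep the K highest-weight hypotheses and renormalize.
import numpy as np

def truncate_hypotheses(log_weights, K):
    """Return indices and renormalized weights of the top-K hypotheses."""
    top = np.argpartition(log_weights, -K)[-K:]  # O(n) selection, no full sort
    lw = log_weights[top]
    lw -= lw.max()                               # stabilize before exponentiating
    w = np.exp(lw)
    return top, w / w.sum()

rng = np.random.default_rng(0)
idx, w = truncate_hypotheses(np.log(rng.random(100_000)), K=1000)
```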
Interpretive Efficiency: Information-Geometric Foundations of Data Usefulness
Neutral · Artificial Intelligence
A new concept called Interpretive Efficiency has been introduced, which quantifies how effectively data supports interpretive representations in machine learning. This measure is grounded in five axioms and relates to mutual information, providing a framework for assessing the usefulness of data in interpretive tasks.
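
The measure's five axioms are not reproduced here, but since it is stated to relate to mutual information, the following sketch makes that underlying quantity concrete with a plug-in estimate for discrete variables.

```python
# Plug-in estimate of mutual information I(X;Y) for discrete variables
# (the quantity the Interpretive Efficiency measure is said to relate to).
import numpy as np

def mutual_information(x, y):
    joint = np.histogram2d(x, y, bins=(np.unique(x).size, np.unique(y).size))[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=10_000)
y = (x + rng.integers(0, 2, size=10_000)) % 4   # noisy copy of x
print(f"I(X;Y) ~ {mutual_information(x, y):.3f} nats")
```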