AutoBrep: Autoregressive B-Rep Generation with Unified Topology and Geometry

arXiv — cs.CV — Wednesday, December 3, 2025 at 5:00:00 AM
  • A novel Transformer model named AutoBrep has been introduced to generate boundary representations (B-Reps) in Computer-Aided Design (CAD) with high quality and valid topology. The model addresses the challenge of end-to-end B-Rep generation with a unified tokenization scheme that encodes geometric and topological characteristics as discrete tokens, enabling a breadth-first traversal of the B-Rep face adjacency graph during inference.
  • The development of AutoBrep is significant for advancing CAD technologies, as it enhances the precision and efficiency of solid model generation. This innovation could streamline design and manufacturing workflows, improving productivity and reducing errors in CAD applications.
  • The introduction of AutoBrep aligns with ongoing advancements in AI-driven design tools, reflecting a broader trend towards integrating machine learning techniques in CAD. Similar frameworks, such as MamTiff-CAD and Image2Gcode, also leverage Transformer models to enhance design processes, indicating a growing reliance on AI to tackle complex challenges in parametric design and additive manufacturing.
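The breadth-first traversal mentioned above can be illustrated with a minimal sketch. This is not AutoBrep's actual tokenizer; it only shows, under the assumption that each face is a node in an adjacency graph, the BFS order in which an autoregressive model would emit per-face tokens. The `bfs_face_order` helper and the toy `box` graph are hypothetical.

```python
from collections import deque

def bfs_face_order(adjacency, start=0):
    """Breadth-first traversal of a B-Rep face adjacency graph.

    adjacency: dict mapping a face id to the ids of its neighboring faces.
    Returns face ids in BFS order — the order in which per-face tokens
    would be generated in a traversal-based scheme like the one described.
    """
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        face = queue.popleft()
        order.append(face)
        for nbr in adjacency[face]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

# Toy solid: a box has 6 faces, each adjacent to 4 others.
box = {
    0: [1, 2, 3, 4],
    1: [0, 2, 4, 5],
    2: [0, 1, 3, 5],
    3: [0, 2, 4, 5],
    4: [0, 1, 3, 5],
    5: [1, 2, 3, 4],
}
print(bfs_face_order(box))  # → [0, 1, 2, 3, 4, 5]
```

In a full system, each face id in this order would expand into a run of discrete tokens describing the face's surface geometry and its edges, but that encoding is specific to the paper and is not reproduced here.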
— via World Pulse Now AI Editorial System

Continue Reading
Toward Content-based Indexing and Retrieval of Head and Neck CT with Abscess Segmentation
Positive · Artificial Intelligence
A new study has introduced AbscessHeNe, a dataset of 4,926 contrast-enhanced CT slices specifically focused on head and neck abscesses, which are critical for timely diagnosis and treatment. This dataset aims to enhance the development of semantic segmentation models that can accurately identify abscess boundaries and assess deep neck space involvement.
Multimodal LLMs See Sentiment
Positive · Artificial Intelligence
A new framework named MLLMsent has been proposed to enhance the sentiment reasoning capabilities of Multimodal Large Language Models (MLLMs). This framework explores sentiment classification directly from images, sentiment analysis on generated image descriptions, and fine-tuning LLMs on sentiment-labeled descriptions, achieving state-of-the-art results in recent benchmarks.
MoH: Multi-Head Attention as Mixture-of-Head Attention
Positive · Artificial Intelligence
The recent introduction of Mixture-of-Head attention (MoH) enhances the multi-head attention mechanism central to Transformer models, aiming to improve efficiency while maintaining or exceeding previous accuracy levels. This new architecture allows tokens to select relevant attention heads, thereby optimizing inference without increasing parameters.
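The head-selection idea described above can be sketched as follows. This is a hedged illustration, not the MoH paper's implementation: it assumes per-head attention outputs and per-token router logits are already computed, and shows only how each token could keep its top-k heads and mix them with softmax weights. The `moh_combine` helper is hypothetical.

```python
import numpy as np

def moh_combine(head_outputs, router_logits, k=2):
    """Illustrative Mixture-of-Head combination.

    head_outputs: (num_heads, seq_len, dim) per-head attention outputs.
    router_logits: (seq_len, num_heads) per-token routing scores.
    Each token keeps only its top-k heads and mixes them with softmax
    weights, so the head count used per token stays fixed at k.
    """
    num_heads, seq_len, dim = head_outputs.shape
    out = np.zeros((seq_len, dim))
    for t in range(seq_len):
        top = np.argsort(router_logits[t])[-k:]   # indices of the k best heads
        w = np.exp(router_logits[t, top])
        w /= w.sum()                              # softmax over the kept heads
        out[t] = sum(wi * head_outputs[h, t] for wi, h in zip(w, top))
    return out
```

A real implementation would vectorize the routing and make the router logits learnable; the loop form here just makes the per-token selection explicit.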
Capturing Context-Aware Route Choice Semantics for Trajectory Representation Learning
Positive · Artificial Intelligence
A new framework named CORE has been proposed to enhance trajectory representation learning (TRL) by integrating context-aware route choice semantics into trajectory embeddings. This approach addresses the limitations of existing TRL methods that treat trajectories as static sequences, thereby enriching the semantic representation of urban mobility patterns.