Transolver is a Linear Transformer: Revisiting Physics-Attention through the Lens of Linear Attention
Positive · Artificial Intelligence
Transolver represents a pivotal development in data-driven solvers for Partial Differential Equations (PDEs), addressing the quadratic complexity of Transformer-based Neural Operators by introducing Physics-Attention to improve efficiency in training and inference. However, recent observations suggest that the effectiveness of Physics-Attention stems more from its slice and deslice operations than from the interactions between slices. This insight motivated the Linear Attention Neural Operator (LinearNO), which reformulates Physics-Attention as a canonical linear attention. LinearNO not only achieves state-of-the-art performance on six standard PDE benchmarks but also reduces the number of parameters by an average of 40% and computational cost by 36.2%. It further demonstrates superior performance on two challenging industrial-level datasets, reinforcing its potential impact on the field of AI and…
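To make the slice/deslice view concrete, the following is a minimal, illustrative sketch (in PyTorch) of how a Physics-Attention-style slice and deslice pair can be read as a linear attention with cost that grows linearly in the number of mesh points. The module name, layer names, and default `num_slices` are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch (illustrative, not the authors' code): Physics-Attention's
# slice/deslice operations viewed as a kernelized linear attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceDesliceAttention(nn.Module):
    def __init__(self, dim, num_slices=32):
        super().__init__()
        self.to_slice_logits = nn.Linear(dim, num_slices)  # per-point slice scores
        self.value = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, N points, dim)
        w = F.softmax(self.to_slice_logits(x), dim=-1)   # (B, N, M) soft slice weights
        v = self.value(x)                                # (B, N, D)

        # "Slice": aggregate point features into M slice tokens, O(N * M * D).
        slice_tokens = torch.einsum('bnm,bnd->bmd', w, v)
        slice_tokens = slice_tokens / (w.sum(dim=1).unsqueeze(-1) + 1e-6)

        # Transolver applies attention among the M slice tokens here; the
        # LinearNO observation is that the slice/deslice pair itself already
        # behaves like a linear attention, so this sketch omits that step.

        # "Deslice": scatter slice tokens back to the points, again O(N * M * D).
        out = torch.einsum('bnm,bmd->bnd', w, slice_tokens)
        return out
```

For a fixed number of slices M much smaller than the number of points N, the overall cost is linear in N, the same complexity class as canonical linear attention, which computes phi(Q) (phi(K)^T V) instead of softmax(Q K^T) V.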
— via World Pulse Now AI Editorial System