Invisible Triggers, Visible Threats! Road-Style Adversarial Creation Attack for Visual 3D Detection in Autonomous Driving

arXiv — cs.CV · Monday, November 17, 2025 at 5:00:00 AM
  • Recent research has highlighted the vulnerabilities of modern autonomous driving systems, particularly their susceptibility to adversarial examples in 3D object detection using RGB cameras. The introduction of AdvRoad aims to address these issues by generating diverse and realistic road-style adversarial examples (see the sketch after this summary).
  • This development is significant because it underscores ongoing safety concerns in autonomous driving technology, emphasizing the need for stronger defenses against adversarial attacks to ensure the reliability of these systems in real-world conditions.
  • Although no directly related articles are listed, the focus on adversarial attacks and their safety implications for autonomous driving reflects a broader trend in the field and highlights the need for continued research into protective measures against such vulnerabilities.
— via World Pulse Now AI Editorial System
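
The summary above does not detail AdvRoad's optimization procedure, so the following is only a rough Python sketch of how a road-style adversarial texture could be optimized against a camera-based detector: the toy detector, scene tensor, and paste() compositing helper are hypothetical stand-ins, and the loss simply pushes the detector toward hallucinating an object.

```python
# Hedged sketch: optimizing a "road-style" adversarial texture against a toy
# camera-based detector. The detector, scene image, and compositing step are
# hypothetical stand-ins, not the AdvRoad method itself.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in detector: maps an RGB image to a fake "objectness" score.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

scene = torch.rand(1, 3, 128, 128)                      # placeholder road scene
poster = torch.rand(1, 3, 32, 32, requires_grad=True)   # adversarial road poster

def paste(scene, poster, y=80, x=48):
    """Composite the poster onto the road region of the scene (no warping)."""
    out = scene.clone()
    out[:, :, y:y + poster.shape[2], x:x + poster.shape[3]] = poster
    return out

opt = torch.optim.Adam([poster], lr=0.05)
for step in range(100):
    adv_scene = paste(scene, poster.clamp(0, 1))
    score = detector(adv_scene)
    # "Creation attack" direction: push the detector toward reporting an
    # object where none exists, i.e. maximize the objectness score.
    loss = -score.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final objectness score:", detector(paste(scene, poster.clamp(0, 1))).item())
```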

Recommended Readings
MMEdge: Accelerating On-device Multimodal Inference via Pipelined Sensing and Encoding
Positive · Artificial Intelligence
MMEdge is a new framework designed for real-time multimodal inference on resource-constrained edge devices, crucial for applications like autonomous driving and mobile health. The framework addresses the challenges of sensing dynamics and model execution by decomposing the inference process into fine-grained sensing and encoding units. This allows for incremental computation as data arrives, while a lightweight temporal aggregation module captures rich temporal dynamics to maintain accuracy.
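
As a loose illustration of the pipelining described above, the sketch below encodes each sensor chunk as it arrives and fuses the partial encodings with a small recurrent aggregator; the module names, shapes, and the GRU aggregator are assumptions for illustration, not MMEdge's actual components.

```python
# Hedged sketch of pipelined sensing/encoding: each sensor chunk is encoded as
# soon as it arrives, and a lightweight temporal aggregator fuses the partial
# encodings. Names and shapes are illustrative, not MMEdge's actual API.
import torch
import torch.nn as nn

chunk_encoder = nn.Linear(64, 32)                  # encodes one fine-grained sensing unit
temporal_agg = nn.GRU(32, 32, batch_first=True)    # lightweight temporal aggregation
classifier = nn.Linear(32, 10)

def stream_chunks(num_chunks=8):
    """Simulate sensor data arriving chunk by chunk."""
    for _ in range(num_chunks):
        yield torch.randn(1, 64)

encoded = []
for chunk in stream_chunks():
    # Incremental computation: encode immediately instead of waiting for the
    # full input window, overlapping sensing and encoding.
    encoded.append(chunk_encoder(chunk))

seq = torch.stack(encoded, dim=1)                  # (batch, time, feat)
_, h = temporal_agg(seq)
logits = classifier(h[-1])
print(logits.shape)  # torch.Size([1, 10])
```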
Understanding World or Predicting Future? A Comprehensive Survey of World Models
Neutral · Artificial Intelligence
The article discusses the growing interest in world models, particularly in light of advancements in multimodal large language models like GPT-4 and video generation models such as Sora. World models serve two main purposes: understanding current world mechanisms and predicting future dynamics. The review categorizes these models and highlights their applications in various fields, including generative games, autonomous driving, robotics, and social simulacra, emphasizing their role in decision-making processes.
Learning with Preserving for Continual Multitask Learning
Positive · Artificial Intelligence
The article discusses a novel framework called Learning with Preserving (LwP) designed for Continual Multitask Learning (CMTL) in artificial intelligence systems. CMTL involves models that learn new tasks sequentially without forgetting previously acquired skills, which is crucial in fields like autonomous driving and medical imaging. Traditional methods often struggle due to task-specific feature fragmentation. LwP focuses on maintaining the geometric structure of shared representation spaces, enhancing the model's ability to learn continuously.
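
One common way to "maintain the geometric structure" of a representation space is to penalize changes in pairwise distances between features; the sketch below shows that generic idea under those assumptions and should not be read as LwP's actual objective.

```python
# Hedged sketch: one way to penalize drift in the geometry of a shared
# representation space, via pairwise-distance preservation. This is an
# illustration of the general idea, not LwP's exact objective.
import torch

def pairwise_dists(feats):
    """Euclidean distance matrix between all feature vectors in a batch."""
    return torch.cdist(feats, feats)

def geometry_preserving_loss(feats_new, feats_old):
    """Penalize changes in the relational structure of the representations."""
    return torch.mean((pairwise_dists(feats_new) - pairwise_dists(feats_old)) ** 2)

old = torch.randn(16, 128)                 # features from the frozen previous model
new = old + 0.1 * torch.randn(16, 128)     # features after learning a new task
print(geometry_preserving_loss(new, old))  # small value: geometry mostly preserved
```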
Adaptive LiDAR Scanning: Harnessing Temporal Cues for Efficient 3D Object Detection via Multi-Modal Fusion
Positive · Artificial Intelligence
The article discusses a novel adaptive LiDAR scanning framework that enhances 3D object detection by utilizing temporal cues from past observations. Traditional LiDAR sensors often perform redundant scans, leading to inefficiencies in data acquisition and power consumption. The proposed method employs a lightweight predictor network to identify regions of interest, significantly reducing unnecessary data collection and improving overall efficiency.
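
A minimal sketch of the region-selection idea follows, assuming a small scoring network over per-region features from past frames and a fixed threshold; both the feature layout and the threshold are illustrative choices rather than the paper's design.

```python
# Hedged sketch of adaptive region selection: a lightweight predictor scores
# candidate scan regions from past-frame features, and only high-scoring
# regions are scanned densely. Shapes and the threshold are illustrative.
import torch
import torch.nn as nn

num_regions = 32
predictor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

past_features = torch.randn(num_regions, 16)    # temporal cues per candidate region
scores = predictor(past_features).squeeze(-1).sigmoid()

scan_mask = scores > 0.5                        # regions worth a dense scan this frame
print(f"scanning {int(scan_mask.sum())} of {num_regions} regions")
```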
One-to-N Backdoor Attack in 3D Point Cloud via Spherical Trigger
Positive · Artificial Intelligence
Backdoor attacks pose a significant risk to deep learning systems, especially in critical 3D applications like autonomous driving and robotics. This study introduces a novel one-to-N backdoor framework for 3D vision, utilizing a configurable spherical trigger. The research demonstrates that a single trigger can effectively encode multiple target classes, achieving high attack success rates of up to 100% while preserving accuracy on clean data.
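
To make the trigger concrete, the sketch below appends a small spherical cluster of points to a clean point cloud; the idea that a trigger parameter such as the radius selects among the N target classes is a hypothetical reading of "configurable", not the paper's exact encoding.

```python
# Hedged sketch: injecting a spherical cluster of points into a point cloud as
# a backdoor trigger, with a trigger parameter (here, the radius) standing in
# for the "configurable" part that selects among N target classes.
import numpy as np

rng = np.random.default_rng(0)

def add_spherical_trigger(points, center, radius, n_trigger=64):
    """Append points sampled uniformly on a sphere to the point cloud."""
    dirs = rng.normal(size=(n_trigger, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    sphere = center + radius * dirs
    return np.concatenate([points, sphere], axis=0)

clean = rng.uniform(-1, 1, size=(1024, 3))
# e.g. radius 0.05 -> target class 0, radius 0.10 -> target class 1, ...
poisoned = add_spherical_trigger(clean, center=np.array([0.5, 0.5, 0.5]), radius=0.10)
print(clean.shape, "->", poisoned.shape)  # (1024, 3) -> (1088, 3)
```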
CATS-V2V: A Real-World Vehicle-to-Vehicle Cooperative Perception Dataset with Complex Adverse Traffic Scenarios
Positive · Artificial Intelligence
The CATS-V2V dataset introduces a pioneering real-world collection for Vehicle-to-Vehicle (V2V) cooperative perception, aimed at enhancing autonomous driving in complex adverse traffic scenarios. Collected using two time-synchronized vehicles, the dataset encompasses 100 clips featuring 60,000 frames of LiDAR point clouds and 1.26 million multi-view camera images across various weather and lighting conditions. This dataset is expected to significantly benefit the autonomous driving community by providing high-quality data for improved perception capabilities.
FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection
Positive · Artificial Intelligence
The paper titled 'FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection' addresses the challenges of deploying PETR models in autonomous driving due to their high computational costs and memory requirements. It introduces FQ-PETR, a fully quantized framework that aims to enhance efficiency without sacrificing accuracy. Key innovations include a Quantization-Friendly LiDAR-ray Position Embedding and techniques to mitigate accuracy degradation typically associated with quantization methods.
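
The paper's quantization-friendly LiDAR-ray embedding is not reproduced here; as background for what "fully quantized" means in practice, the sketch below applies generic symmetric fake quantization to a position-embedding tensor, with the tensor shape chosen only for illustration.

```python
# Hedged sketch of symmetric fake quantization applied to a position-embedding
# tensor, illustrating the generic quantize/dequantize step; FQ-PETR's own
# quantization-friendly LiDAR-ray embedding is not reproduced here.
import torch

def fake_quantize(x, num_bits=8):
    """Quantize to signed integers and dequantize, keeping float storage."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale

pos_embed = torch.randn(900, 256)            # e.g. a per-query position embedding
pos_embed_q = fake_quantize(pos_embed)
print("max abs error:", (pos_embed - pos_embed_q).abs().max().item())
```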