SLAM&Render: A Benchmark for the Intersection Between Neural Rendering, Gaussian Splatting and SLAM
On November 13, 2025, the SLAM&Render dataset was introduced to bridge the gap between Simultaneous Localization and Mapping (SLAM) and neural rendering techniques such as Neural Radiance Fields and Gaussian Splatting. Existing datasets often overlook the specific challenges of these fields, including the sequential operation and multi-modality characteristic of SLAM. SLAM&Render comprises 40 sequences with time-synchronized RGB-D images, IMU readings, and robot kinematic data, recorded using a robot manipulator. This combination lets researchers benchmark methods from both communities on the same data, addressing the limitations of previous datasets. The sequences cover five setups with consumer and industrial objects under controlled lighting conditions, providing diverse testing scenarios. By supplying ground-truth pose streams and enabling the evaluation of novel SLAM strategies, SLAM&Render is positioned to advance the integration of these technologies in robotics and computer vision.
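Because the summary highlights time-synchronized sensor streams and ground-truth poses, a common first step with such a dataset is associating records from different streams by timestamp. The sketch below shows this in Python under assumed conditions: it presumes TUM-style plain-text index files (e.g. `rgb.txt`, `groundtruth.txt`) whose lines start with a timestamp, which is an assumption about the layout rather than the dataset's documented format.

```python
from bisect import bisect_left

def load_timestamp_index(path):
    """Parse a 'timestamp value ...' text file (assumed TUM-style layout)
    into a list of (timestamp, payload) tuples sorted by time."""
    entries = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip blanks and comments
                continue
            fields = line.split()
            entries.append((float(fields[0]), fields[1:]))
    entries.sort(key=lambda e: e[0])
    return entries

def associate(reference, candidates, max_dt=0.02):
    """Match each reference record to the nearest candidate in time,
    keeping only pairs closer than max_dt seconds."""
    cand_ts = [t for t, _ in candidates]
    matches = []
    for t_ref, payload_ref in reference:
        i = bisect_left(cand_ts, t_ref)
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(cand_ts)),
            key=lambda j: abs(cand_ts[j] - t_ref),
            default=None,
        )
        if best is not None and abs(cand_ts[best] - t_ref) <= max_dt:
            matches.append((t_ref, payload_ref, candidates[best]))
    return matches

# Hypothetical usage: pair RGB frames with ground-truth poses for one sequence.
# The directory and file names are illustrative, not taken from the dataset docs.
rgb = load_timestamp_index("sequence_01/rgb.txt")
gt = load_timestamp_index("sequence_01/groundtruth.txt")
pairs = associate(rgb, gt, max_dt=0.02)
print(f"{len(pairs)} RGB frames have an associated ground-truth pose")
```

The same association routine can be reused for the IMU and robot kinematic streams, typically with a tighter `max_dt` for high-rate sensors.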
— via World Pulse Now AI Editorial System
