Fusion of LiDAR and HDR Imaging in Autonomous Vehicles: A Multi-Modal Deep Learning Approach for Safer Navigation
Abstract
The growing interest in driverless vehicles calls for improved safety and navigation, especially under challenging, low-visibility conditions. In this paper, we propose a new multi-modal deep learning model that combines LiDAR and HDR imagery for enhanced perception: LiDAR offers accurate 3D spatial data, while HDR imaging provides high-quality visual information under difficult lighting. The proposed method adopts a two-stream neural network architecture, combining a CNN for spatial feature extraction with an RNN for temporal sequence analysis, to fuse LiDAR point clouds and HDR images at the feature level. The model is trained and validated on a custom dataset covering varied real-world driving conditions, including different lighting and weather.
Quantitative results show that the proposed fusion model achieves 91% object detection accuracy and reduces the mean absolute error (MAE) of distance estimation to 0.33 meters, outperforming the LiDAR-only model by 9% and the HDR-only model by 16% in detection accuracy. The method also improves obstacle detection and avoidance in low light and fog. These results confirm the benefit of sensor fusion for perception robustness and navigation safety, and lay a solid foundation for further development of autonomous driving techniques.
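To make the two-stream, feature-level fusion described in the abstract concrete, the following is a minimal sketch (PyTorch), not the authors' implementation: a CNN stream encodes each HDR frame, a PointNet-style stream encodes each LiDAR point cloud, the per-frame features are concatenated, and an RNN (here a GRU) models the temporal sequence before classification and distance regression. All module names, layer sizes, and input shapes are illustrative assumptions.

```python
# Illustrative sketch of feature-level LiDAR + HDR fusion (hypothetical sizes/names).
import torch
import torch.nn as nn

class HDRImageStream(nn.Module):
    """CNN stream: spatial features from one HDR image frame."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, out_dim)

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.fc(self.conv(img).flatten(1))

class LiDARStream(nn.Module):
    """PointNet-style stream: encodes a LiDAR point cloud (x, y, z per point)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # symmetric max-pool over points

class FusionNet(nn.Module):
    """Concatenates both streams per frame, then an RNN models the sequence."""
    def __init__(self, feat_dim=256, hidden=256, num_classes=10):
        super().__init__()
        self.img_stream = HDRImageStream(feat_dim)
        self.lidar_stream = LiDARStream(feat_dim)
        self.rnn = nn.GRU(2 * feat_dim, hidden, batch_first=True)
        self.cls_head = nn.Linear(hidden, num_classes)  # object class logits
        self.dist_head = nn.Linear(hidden, 1)           # distance estimate (m)

    def forward(self, imgs, clouds):             # imgs: (B, T, 3, H, W), clouds: (B, T, N, 3)
        B, T = imgs.shape[:2]
        fused = []
        for t in range(T):                       # per-frame feature-level fusion
            f_img = self.img_stream(imgs[:, t])
            f_pts = self.lidar_stream(clouds[:, t])
            fused.append(torch.cat([f_img, f_pts], dim=-1))
        seq, _ = self.rnn(torch.stack(fused, dim=1))
        last = seq[:, -1]
        return self.cls_head(last), self.dist_head(last)

# Example: batch of 2 sequences, 4 time steps, 1024 LiDAR points per frame.
model = FusionNet()
logits, dist = model(torch.randn(2, 4, 3, 128, 128), torch.randn(2, 4, 1024, 3))
print(logits.shape, dist.shape)                  # (2, 10) and (2, 1)
```

In this sketch, fusion happens by concatenating the two feature vectors before the recurrent layer, which is one common way to realize "feature-level" fusion; the paper's exact fusion operator, backbones, and heads may differ.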