A crucial part of the safe navigation of autonomous vehicles is the robust detection of surrounding objects. While there are numerous approaches to object detection in images and LiDAR point clouds, this paper addresses the problem of object detection in radar data. For this purpose,
the fully convolutional network YOLOv3 is adapted to operate on sparse radar point clouds. In order to apply convolutions, the point cloud is transformed into a grid-like structure. The impact of this representation transformation is shown by comparison with a network based on Frustum PointNets, which directly processes point cloud data. The presented networks are trained and evaluated on the public nuScenes dataset.
While experiments show that the point cloud-based network outperforms the grid-based approach in detection accuracy, the latter has a significantly faster inference time, which is crucial for applications like autonomous driving.
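
To illustrate the representation transformation mentioned above, the following is a minimal sketch of how a sparse radar point cloud could be scattered into a bird's-eye-view grid so that 2D convolutions become applicable. It is not the paper's exact preprocessing: the grid extent, cell size, and choice of per-point feature channels (e.g. RCS and radial velocity) are assumptions for illustration.

```python
import numpy as np

def radar_points_to_grid(points, x_range=(0.0, 100.0), y_range=(-50.0, 50.0),
                         cell_size=0.5, num_features=2):
    """Scatter a sparse radar point cloud into a dense BEV grid.

    points: (N, 2 + num_features) array with columns
            [x, y, feature_1, ..., feature_k], e.g. RCS and radial velocity.
    Returns a (H, W, num_features) grid; cells without points stay zero.
    """
    height = int((y_range[1] - y_range[0]) / cell_size)
    width = int((x_range[1] - x_range[0]) / cell_size)
    grid = np.zeros((height, width, num_features), dtype=np.float32)

    # Keep only points that fall inside the grid extent.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]

    # Map metric coordinates to integer cell indices.
    col = ((pts[:, 0] - x_range[0]) / cell_size).astype(int)
    row = ((pts[:, 1] - y_range[0]) / cell_size).astype(int)

    # Write the point features into their cells
    # (if several points share a cell, later ones overwrite earlier ones).
    grid[row, col] = pts[:, 2:2 + num_features]
    return grid
```

The resulting dense tensor can then be fed to a fully convolutional detector such as the adapted YOLOv3, at the cost of discretization artifacts and mostly empty cells due to the sparsity of radar returns.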