Object detection is a relatively mature problem in autonomous driving, and pedestrian detection was one of the first algorithms to be deployed; it has been studied comprehensively in the literature. However, near-field perception using surround-view fisheye cameras has been explored far less. Because of the large radial distortion, the standard bounding box representation does not fit objects well in fisheye images. To address this, we explore extending the bounding box to ellipse and general polygon designs in polar/angular representations, and we define an instance segmentation mIoU metric to compare these representations. The proposed model, FisheyeDetNet, with a polygon representation outperforms the other variants and achieves 49.5% mAP on the Valeo fisheye surround-view dataset for autonomous driving. This is the first detailed study of object detection algorithms on fisheye cameras for autonomous driving.
Article link: https://arxiv.org/pdf/2404.13443.pdf
Our network architecture is built on YOLOv3, with heads for the various output representations: bounding boxes, rotated bounding boxes, ellipses, and polygons. To allow the network to be ported to low-power automotive hardware, we use ResNet18 as the encoder; compared with the standard Darknet-53 encoder, this reduces the parameter count by more than 60%. The proposed network architecture is shown in the figure below.
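To make the design concrete, here is a minimal PyTorch sketch of such an architecture: a ResNet18 encoder feeding YOLOv3-style prediction heads at three scales. The layer choices, channel sizes, and per-anchor output size are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FisheyeDetNetSketch(nn.Module):
    def __init__(self, num_classes=7, num_anchors=3, box_params=4):
        super().__init__()
        backbone = resnet18(weights=None)
        # Reuse the ResNet18 stem and residual stages as the encoder.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool,
                                  backbone.layer1)
        self.stage2 = backbone.layer2   # stride 8
        self.stage3 = backbone.layer3   # stride 16
        self.stage4 = backbone.layer4   # stride 32
        # One 1x1 prediction head per scale: objectness + box params + classes.
        out_ch = num_anchors * (1 + box_params + num_classes)
        self.heads = nn.ModuleList([
            nn.Conv2d(c, out_ch, kernel_size=1) for c in (128, 256, 512)
        ])

    def forward(self, x):
        x = self.stem(x)
        f8 = self.stage2(x)
        f16 = self.stage3(f8)
        f32 = self.stage4(f16)
        # Dense predictions at three strides, as in YOLOv3.
        return [head(f) for head, f in zip(self.heads, (f8, f16, f32))]

# Example: a 544x640 fisheye frame produces three prediction grids.
preds = FisheyeDetNetSketch()(torch.randn(1, 3, 544, 640))
print([p.shape for p in preds])
```

For the rotated box, ellipse, and polygon variants, only the per-anchor output size (box_params) would change; the encoder and multi-scale structure stay the same.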
Our bounding box model is the same as YOLOv3, except that the Darknet-53 encoder is replaced with a ResNet18 encoder. As in YOLOv3, object detection is performed at multiple scales. For each grid cell at each scale, the network predicts the object width (w), height (h), center coordinates (x, y), and object class. Finally, non-maximum suppression is used to filter redundant detections.
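The following sketch shows the standard YOLOv3-style decoding of these per-cell predictions followed by non-maximum suppression. The sigmoid/exponential decoding and anchor handling follow the usual YOLOv3 formulation; the tensor layout is an assumption.

```python
import torch
from torchvision.ops import nms

def decode_and_nms(raw, anchors, stride, conf_thresh=0.5, iou_thresh=0.5):
    """raw: (A, 5+C, H, W) predictions for one scale; anchors: (A, 2) in pixels."""
    A, _, H, W = raw.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    # Center offsets are sigmoided relative to the grid cell, sizes are
    # exponentials applied to the anchor priors (standard YOLOv3 decoding).
    cx = (torch.sigmoid(raw[:, 0]) + xs) * stride
    cy = (torch.sigmoid(raw[:, 1]) + ys) * stride
    w = torch.exp(raw[:, 2]) * anchors[:, 0, None, None]
    h = torch.exp(raw[:, 3]) * anchors[:, 1, None, None]
    obj = torch.sigmoid(raw[:, 4])

    boxes = torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)
    boxes, obj = boxes.reshape(-1, 4), obj.reshape(-1)
    keep_conf = obj > conf_thresh
    boxes, obj = boxes[keep_conf], obj[keep_conf]
    # Filter redundant detections with NMS, as described above.
    keep = nms(boxes, obj, iou_thresh)
    return boxes[keep], obj[keep]
```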
In this model, the orientation of the box is regressed along with the regular box parameters (x, y, w, h). The orientation ground truth, which ranges from -180° to +180°, is normalized to lie between -1 and 1.
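A small sketch of this target normalization: angles in degrees are scaled into the (-1, 1] range for regression, and the inverse mapping recovers degrees from the network output. The function names are illustrative.

```python
def encode_angle(theta_deg: float) -> float:
    """Map an orientation in degrees (-180, 180] to the range (-1, 1]."""
    return theta_deg / 180.0

def decode_angle(theta_norm: float) -> float:
    """Map a normalized regression output back to degrees."""
    return theta_norm * 180.0

assert decode_angle(encode_angle(90.0)) == 90.0
```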
Ellipse regression is identical to oriented box regression; the only difference is the output representation, so the loss function is also the same as the oriented box loss.
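Illustratively, the ellipse output can reuse the five oriented-box values, with width and height reinterpreted as the ellipse axes. The class names below are assumptions for readability, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class OrientedBox:
    cx: float; cy: float; w: float; h: float; theta_deg: float

@dataclass
class Ellipse:
    cx: float; cy: float; axis_x: float; axis_y: float; theta_deg: float

def box_to_ellipse(b: OrientedBox) -> Ellipse:
    # Same five regressed values, different geometric interpretation.
    return Ellipse(b.cx, b.cy, b.w, b.h, b.theta_deg)
```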
Our proposed polygon-based instance segmentation approach is closely related to PolarMask and PolyYOLO. Unlike PolyYOLO, which uses sparse polygon points and single-scale prediction, we use dense polygon annotations and multi-scale prediction.
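In a polar/angular polygon representation of this kind, the object contour is sampled at a fixed set of angular bins around the object centre and the network regresses one radial distance per bin. The sketch below decodes such predictions into polygon vertices; the number of bins and decoding details are assumptions for illustration.

```python
import numpy as np

def decode_polar_polygon(cx, cy, radii, num_bins=36):
    """Convert per-bin radial distances into polygon vertices (x, y)."""
    angles = np.linspace(-np.pi, np.pi, num_bins, endpoint=False)
    xs = cx + radii * np.cos(angles)
    ys = cy + radii * np.sin(angles)
    return np.stack([xs, ys], axis=1)

# Example: a roughly circular object of radius ~40 px centred at (100, 120).
verts = decode_polar_polygon(100.0, 120.0, np.full(36, 40.0))
print(verts.shape)  # (36, 2)
```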
We evaluate on the Valeo fisheye surround-view dataset, which contains 60K images captured by 4 surround-view cameras in Europe, North America, and Asia.
Each model is compared using mean average precision (mAP) at an IoU threshold of 50%. The results are shown in the table below. Each representation is evaluated on two criteria: detection performance in its own representation and instance segmentation performance.
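For the instance segmentation criterion, each prediction and ground truth can be treated as a polygon and scored by the ratio of intersection to union areas, averaged over matched pairs. The sketch below uses shapely for the geometry, which is an assumption; the paper's exact mIoU protocol may differ.

```python
from shapely.geometry import Polygon

def polygon_iou(pred_pts, gt_pts) -> float:
    """pred_pts, gt_pts: iterables of (x, y) vertices."""
    pred, gt = Polygon(pred_pts), Polygon(gt_pts)
    inter = pred.intersection(gt).area
    union = pred.union(gt).area
    return inter / union if union > 0 else 0.0

def mean_iou(pairs) -> float:
    """Average IoU over matched (prediction, ground-truth) polygon pairs."""
    ious = [polygon_iou(p, g) for p, g in pairs]
    return sum(ious) / len(ious) if ious else 0.0
```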