Cameras have long been the primary sensor for surround view applications, but with the advent of automotive radar, especially at 77 GHz, radar is increasingly being used for surround view as well. Using radar for surround view not only improves pedestrian detection accuracy but also helps in classifying other road objects. This blog sheds light on some of the challenges involved in developing a radar-based surround view application on 77 GHz automotive radar. FMCW is the commonly used waveform for automotive radar, owing to its high-resolution range and velocity measurement.
The main challenges in using radar for surround view are the signal processing for all of the sensors used and the correction needed in overlapping regions. The diagram explains the top-level steps involved in addressing these challenges:
Radar signal processing
The radar signal processing algorithm applies a 2D FFT for range and Doppler estimation, followed by Order-Statistic Constant False Alarm Rate (CFAR-OS) detection for adaptive peak detection. Depending on its distance, each object can produce multiple detections, so clustering is performed with DBSCAN, which uses the position density of the detections and other related information. The radar signal processing also includes JPDAF data association and linear Kalman filter based tracking. The output of this stage contains range, angle, velocity, width, and other related information for each object.
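As a rough illustration of the first two stages, here is a minimal NumPy sketch of a range-Doppler 2D FFT followed by a 1D OS-CFAR detector. The function names, guard/training window sizes, order statistic `k`, and threshold scale are all illustrative assumptions, not values from the actual implementation; the clustering and tracking stages are omitted.

```python
import numpy as np

def range_doppler_map(adc_cube):
    """2D FFT across fast time (range) and slow time (Doppler).
    adc_cube: samples-per-chirp x chirps-per-frame array."""
    rfft = np.fft.fft(adc_cube, axis=0)                        # range FFT per chirp
    dfft = np.fft.fftshift(np.fft.fft(rfft, axis=1), axes=1)   # Doppler FFT per range bin
    return np.abs(dfft) ** 2                                   # power spectrum

def os_cfar_1d(power, guard=2, train=8, k=6, scale=4.0):
    """Order-statistic CFAR along one axis: each cell under test is
    compared against the k-th smallest of its training cells, which is
    robust to interfering targets inside the training window."""
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        left = power[i - train - guard:i - guard]
        right = power[i + guard + 1:i + guard + 1 + train]
        noise = np.sort(np.concatenate([left, right]))[k]      # order statistic
        detections[i] = power[i] > scale * noise
    return detections
```

In practice the CFAR pass would run over both dimensions of the range-Doppler map (or as a 2D window), and the resulting detections would be fed to DBSCAN for clustering and to the JPDAF/Kalman tracker.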
Point Cloud Pre-processing
Surround view requires a minimum of four radar sensors to get a complete understanding of the surroundings. Pre-processing is done in two stages. The first is collecting point cloud data from all the sensors: the point cloud from each sensor is received by a separate thread, and a common database is updated with a timestamp and information about the sensor. The second step is to transform each sensor's data according to that sensor's mounting position and facing angle.
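The per-sensor transformation in the second step can be sketched as a 2D rotation by the sensor's facing angle followed by a translation to its mounting position. This is a minimal sketch; the function name and the assumption of a flat 2D vehicle frame are hypothetical simplifications.

```python
import math

def sensor_to_vehicle(points, mount_x, mount_y, yaw_rad):
    """Transform (x, y) detections from a sensor's local frame into the
    common vehicle frame: rotate by the sensor's facing angle (yaw),
    then translate by its mounting position."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [(mount_x + c * x - s * y,
             mount_y + s * x + c * y) for x, y in points]
```

Each collector thread would apply this transform with its own sensor's calibration before writing into the common, timestamped database, so that the post-processing stage sees all points in one coordinate frame.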
Point Cloud Post-processing
The major point cloud post-processing step is grouping detections of the same object reported by multiple sensors. If an object lies in the overlapping region of two sensors, the final point cloud will contain two points representing the same object. Although this redundancy provides a level of safety, the duplicate detections can lead to confusion and wrong interpretation. To avoid this while preserving the extra information, a Bayesian grouping approach is employed to identify detections that have a high probability of belonging to the same object. The approach considers the probability that two detections from two different sensors correspond to the same object; this probability is calculated from the inverse of the Euclidean distance between the two valid detections. The velocity information from the sensors cannot be used for grouping here, because the radial Doppler velocity measured by each radar depends on that sensor's position and therefore differs from sensor to sensor.
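A minimal sketch of this idea, assuming 2D points in the common vehicle frame: score each cross-sensor pair by the inverse of its Euclidean distance, and greedily merge pairs whose score exceeds a threshold. The greedy pairing strategy, the midpoint merge, and the `threshold` value are illustrative assumptions, not the actual Bayesian formulation.

```python
import math

def association_probability(p, q, eps=1e-6):
    """Score inversely proportional to Euclidean distance: closer
    detections from different sensors are more likely the same object."""
    d = math.hypot(p[0] - q[0], p[1] - q[1])
    return 1.0 / (d + eps)

def group_detections(sensor_a, sensor_b, threshold=2.0):
    """Greedily merge cross-sensor pairs whose association score exceeds
    `threshold`; merged pairs are replaced by their midpoint, and
    unmatched points from both sensors pass through unchanged."""
    used_b = set()
    merged = []
    for p in sensor_a:
        best_j, best_score = None, threshold
        for j, q in enumerate(sensor_b):
            if j in used_b:
                continue
            score = association_probability(p, q)
            if score > best_score:
                best_j, best_score = j, score
        if best_j is None:
            merged.append(p)            # no counterpart in the other sensor
        else:
            used_b.add(best_j)
            q = sensor_b[best_j]
            merged.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
    merged.extend(q for j, q in enumerate(sensor_b) if j not in used_b)
    return merged
```

Note that, as the text explains, only position enters the score: the radial Doppler velocities are deliberately left out because they are measured relative to each sensor's own position.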
The use of radar for object detection in automotive use cases is on the rise. Along with better radar signal processing algorithms, certain use cases require post-processing of the radar point cloud data. Using radar for surround view, together with a quadratic SVM classifier, improves the detection accuracy for pedestrians and other road objects.
Click on the link for an article by Team PathPartner to learn more about
“Bayesian Grouping of Multi Sensor Radar Fusion for effective Pedestrian Classification in Automotive Surround View”