Sensor Fusion for L3 and Beyond

Date: May 22, 2020

Author: Santhana Raj

It is well established that no single type of sensor used in vehicles today can independently enable L3 to L5 automation. Fusing multiple sensors across different use cases is the next step toward a reliable, low-cost solution for a fully autonomous driving experience. This blog gives an overview of the strengths and shortcomings of the various sensors and how sensor fusion lets them complement each other to enable L3-L5 autonomous driving capabilities.

Automotive sensors and their comparison

Even after accounting for future technological advancements in the sensor domain, there is a considerable gap between the capabilities of any individual sensor and the technical requirements of Level 3 - Level 5 autonomous vehicles. Camera, radar, lidar and ultrasonic sensors contribute the major part of this environment perception. First, let's look at some features of these sensors and how they stack up against each other:

Features | Camera | Radar | LiDAR | Ultrasonic
Object detection range | High | Very High | High | Low
Depth resolution | Low to Nil | High | High | Medium
Angular positional resolution | High | Medium | High | Low
Field of view | High | Very High | Very High | Low

Top Sensor Fusion Applications

Currently, most ADAS applications are solved with single-sensor solutions; in general, L1-L2 applications do not require heterogeneous sensor fusion. But for autonomous cars, it is almost impossible to meet the automotive-grade requirements with a single-sensor solution.

Standalone sensors cannot provide a complete understanding of the surroundings. For an autonomous car to comprehend its environment and make the necessary safety-critical decisions, it is imperative to fuse data from multiple sensors; some representative applications are listed below:

Application | Description | Optimal Sensor Fusion Case
Adaptive Cruise Control (ACC) | Very long range with a small FoV | Camera + Lidar + Radar
Blind Spot Detection (BSD) | Medium range of ~40 m with velocity estimation | Camera + Radar
Lane Change Assist (LCA) | Medium range of ~40 m with velocity estimation | Camera + Radar
Automatic Emergency Braking (AEB) | High 3D positional accuracy required | Camera + Radar
Automated Parking | Surround view with precise 3D object detection | Camera + Radar

Figure 1: Various sensors & their applications

Sensor Fusion Approaches

Sensor data fusion is required to achieve a reliable and consistent perception of the environment, and it is important to stress that heterogeneous sensor data needs to be exploited both for its multimodality and for its redundancy. The market for sensor fusion has consolidated around three different approaches:

  • High-level abstracted fusion: This is the simplest form of sensor fusion, wherein each sensor runs its own algorithm pipeline and provides object-level output, which is then merged to form the final picture (a minimal object-level sketch follows this list).
  • Detection points fusion: The detection points from the different sensors are merged to provide a deeper fusion of the multimodal sensor data. Multiple features can be extracted from the same object as seen by different sensors.
  • Raw data fusion: This approach works at the deepest level of abstraction, aiming to utilize all the available sensor data. Since the volume of data to be processed is huge, high-end deep learning networks are required to extract information useful for L5 autonomous decisions.
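To make the high-level (object-level) approach concrete, here is a minimal Python sketch of merging camera and radar object lists after each sensor's own pipeline has run. The `Detection` structure, the `associate_and_merge` function and the 2 m association gate are illustrative assumptions, not a description of any particular product:

```python
from dataclasses import dataclass

# Hypothetical per-sensor detections, already produced by each sensor's own
# pipeline (the input to high-level / object-level fusion).
@dataclass
class Detection:
    x: float           # longitudinal position in metres (vehicle frame)
    y: float           # lateral position in metres (vehicle frame)
    confidence: float  # detection confidence in [0, 1]
    sensor: str        # "camera", "radar" or "fused"

def associate_and_merge(camera_dets, radar_dets, gate_m=2.0):
    """Greedy nearest-neighbour association of camera and radar objects,
    followed by a confidence-weighted average of the matched positions."""
    fused = []
    unmatched_radar = list(radar_dets)
    for cam in camera_dets:
        # Find the closest radar detection inside the association gate.
        best, best_d = None, gate_m
        for rad in unmatched_radar:
            d = ((cam.x - rad.x) ** 2 + (cam.y - rad.y) ** 2) ** 0.5
            if d < best_d:
                best, best_d = rad, d
        if best is None:
            fused.append(cam)            # camera-only object
            continue
        unmatched_radar.remove(best)
        w = cam.confidence + best.confidence
        fused.append(Detection(
            x=(cam.confidence * cam.x + best.confidence * best.x) / w,
            y=(cam.confidence * cam.y + best.confidence * best.y) / w,
            confidence=min(1.0, w),
            sensor="fused"))
    fused.extend(unmatched_radar)        # radar-only objects pass through
    return fused
```

Detection points fusion and raw data fusion push the same idea further down the pipeline, trading simplicity for access to richer multimodal information.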

Read more about sensor fusion in automotive ADAS and AD, covering aspects such as:

  • SAE Levels of autonomy
  • Various sensors and comparison on more than 15 features
  • Sensor fusion in Level-3 and above
  • Practical challenges in Level-3 and above

Challenges in Sensor Fusion

Even though heterogeneous sensors can complement each other's limitations and together provide the complete environment sensing required for autonomous vehicles, the technology to deliver the final application faces many challenges and has yet to make major strides in performance.

The major technical issues faced in heterogeneous sensor fusion, at all levels of fusion, can be grouped into three categories. Every application will face some or all of these issues and must address them:

  • Spatial & Temporal alignment
  • Data uncertainty handling
  • Sensor resolution and parameter differences

Sensor positioning and the corresponding live calibration are a challenge; live calibration is a complex problem even in a homogeneous sensor environment. Given the different units of measurement and fields of view, a heterogeneous sensor setup depends critically on the spatial alignment of the sensors. Sensor frame rates also vary widely, so the choice of the temporal point at which data is fused is always a tradeoff between faster response and implementation complexity.
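As a minimal illustration of spatial and temporal alignment, the sketch below maps sensor-frame points into a common vehicle frame using a fixed extrinsic calibration and linearly interpolates between two radar frames to a camera timestamp. The function names, the 20 Hz/30 Hz rates and the example values are assumptions made for the example:

```python
import numpy as np

def to_vehicle_frame(points_sensor, R, t):
    """Apply a fixed extrinsic calibration (rotation R, translation t) to map
    Nx3 sensor-frame points into the common vehicle frame."""
    return points_sensor @ R.T + t

def interpolate_state(t_query, t0, state0, t1, state1):
    """Linearly interpolate an object state (e.g. position) between two sensor
    frames, so measurements taken at different rates can be compared at a
    common timestamp t_query."""
    alpha = (t_query - t0) / (t1 - t0)
    return (1.0 - alpha) * np.asarray(state0) + alpha * np.asarray(state1)

# Example: a 20 Hz radar track interpolated to a 30 Hz camera timestamp.
radar_pos_a = [12.0, 0.4, 0.0]   # radar estimate at t = 0.00 s
radar_pos_b = [11.4, 0.4, 0.0]   # radar estimate at t = 0.05 s
pos_at_camera_time = interpolate_state(0.033, 0.00, radar_pos_a, 0.05, radar_pos_b)
```

In practice, live calibration continuously refines R and t, and interpolation (or prediction through a motion model) replaces the naive linear blend, but the tradeoff between latency and accuracy remains the same.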

The dependability of each sensor differs per use case, and this dependability or confidence factor also changes with the environment: camera data cannot be trusted equally in bright sunlight and at night, and the same goes for radar in open, clear surroundings versus in tunnels. Along with dependability, the availability of sensor data also has to be handled. A fused decision-making process cannot assume that data from every sensor will be available in every frame, or that it will carry a high confidence level in each of those frames. The algorithms should be able to handle differences in the reliability of sensor data, noisy sensor data, missing and inconsistent data, accuracy and precision losses, and so on.
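One common way to express this is to weight each sensor's estimate by its uncertainty and to let missing sensors drop out of the combination. The sketch below is a simple inverse-variance weighting of range estimates; the function name and the numeric variances are assumptions chosen to illustrate the idea, not a production algorithm:

```python
def fuse_range_estimates(measurements):
    """Inverse-variance weighting of range estimates from several sensors.
    Each entry is a (range_m, variance) tuple, or None when that sensor
    produced no usable data this frame; missing sensors simply drop out."""
    valid = [m for m in measurements if m is not None and m[1] > 0]
    if not valid:
        return None                      # no sensor delivered data this frame
    weights = [1.0 / var for _, var in valid]
    fused = sum(w * r for (r, _), w in zip(valid, weights)) / sum(weights)
    return fused

# At night the camera range is noisy (large variance) while the radar is not;
# the fused estimate automatically leans on the more trustworthy sensor.
print(fuse_range_estimates([(25.0, 9.0), (23.8, 0.5), None]))  # lidar missing
```

A full system would use a Bayesian filter with per-sensor noise models rather than a per-frame average, but the principle of confidence-driven weighting and graceful handling of missing data is the same.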

Apart from this, heterogeneous sensor data differs in many ways, such as depth resolution, angular resolution, positional accuracy, data format, mounting (extrinsic) alignment and field of view; the point cloud densities are also very different. This requires the perception algorithm to handle each sensor's data differently: a classifier that handles high-resolution camera data will not perform the same on high-resolution radar data, because the underlying representations of the sensor data differ. Even though this multimodality is the major advantage of fusion for efficient environment perception, bringing the multimodal data together is a challenge in itself.
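A typical way to bridge such resolution and representation gaps is to bring the sparse sensor into the dense sensor's coordinate space. The sketch below projects sparse 3D radar detections into the camera image under an assumed pinhole model, so their depth can be attached to camera pixels; the function name, intrinsics K and extrinsics R, t are illustrative assumptions:

```python
import numpy as np

def project_radar_to_image(radar_xyz, K, R, t, image_shape):
    """Project sparse 3D radar detections into the camera image plane using an
    assumed pinhole model (intrinsics K, extrinsics R, t), so depth from a
    low-resolution sensor can be attached to high-resolution camera pixels."""
    h, w = image_shape
    cam = radar_xyz @ R.T + t                 # radar frame -> camera frame
    cam = cam[cam[:, 2] > 0.5]                # keep points in front of the camera
    uv = cam @ K.T                            # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    depth = cam[:, 2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside].astype(int), depth[inside]  # pixel coordinates and depths
```

Even with such geometric alignment, the downstream classifier still has to be trained to cope with the very different densities and noise characteristics of each modality.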

Conclusion

It is established that multi-sensor fusion is mandatory to meet the stringent requirements of the use cases involved in Level 3 and above autonomous cars. The limitations of current non-fusion approaches in meeting Level 3 and above requirements have been discussed, along with the architectural approaches to fusion and their associated challenges. It is now a race against time to build practical sensor fusion solutions that rapidly enable L3-L5 autonomy and realize the long-promised future of autonomous driving.
