Vehicles are equipped with numerous camera sensors both inside and outside the cabin. These cameras assist the driver and serve either human vision or machine vision. With a fixed number of cameras already in place, there is a need to use them for both purposes, which calls for dedicated imaging algorithms that process the raw video feed from the camera and make it suitable for the target application.
Cameras serve applications across the automotive market: at the lower end we have surround view, and at the top end newer applications such as the e-mirror. With safety being a top priority for new car buyers, adoption of cameras for driver-assist systems is increasing rapidly. Regulations are not far behind and now mandate many safety features for drivers as well as pedestrians, with many of these features using the camera as the primary sensor.
First, let us look at the types of cameras used in automotive applications, specifically ADAS.
In general, depending on the application, cameras are classified as monocular or stereo; the same classification applies to automotive use cases.
Monocular cameras for ADAS: These are everyday camera modules whose primary function is monitoring. In automotive ADAS they are used in applications such as blind-spot detection (BSD), surround view, and park assist. Monocular cameras are cost-effective and easy to install and calibrate. Their major disadvantage is that object detection is not as effective as required for L2+ applications, since a single camera cannot directly measure depth.
Stereo cameras for ADAS: Stereo cameras capture depth information about the surroundings, which helps build a complete map of the environment for L3+ applications where the car must make object-avoidance decisions. Stereo cameras work much like the human eyes: by comparing two views, they can differentiate objects at different distances. Although they have an edge over monocular cameras for AD applications, the challenge with stereo cameras is that they are comparatively difficult to calibrate and need high-end automotive ECUs to be used to their full potential.
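To illustrate the principle, depth can be recovered from the disparity between the two views once the baseline and focal length are known. The following is a minimal sketch of that relationship; the function name and the example numbers are illustrative assumptions, not taken from any specific stereo system.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a stereo disparity (in pixels) to depth (in metres).

    depth = focal_length * baseline / disparity
    Pixels with zero disparity are mapped to infinity.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0,
                        focal_length_px * baseline_m / disparity_px,
                        np.inf)

# Example: a camera pair with a 12 cm baseline and ~1000 px focal length.
depth = disparity_to_depth(disparity_px=32.0, focal_length_px=1000.0, baseline_m=0.12)
print(depth)  # ~3.75 m
```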
Now that we understand the camera modules used in ADAS, let us look at the camera imaging algorithms these modules feed data to. Some of the algorithms used in ADAS applications are listed below; they are broadly grouped into two categories based on the placement of the camera sensor.
Exterior camera imaging algorithms
Cameras mounted outside the car are subjected to extreme conditions: weather such as direct sunlight and rain, and abrupt lighting changes caused by the environment (for example, coming out of an indoor parking lot on a hot sunny day). Some of the imaging algorithms used to handle such conditions are listed below (a minimal pipeline sketch follows the list):
- Black level correction
- Shading correction
- Denoise
- Tone mapping
- White balance
- Colour correction
- Sharpening
- Demosaic
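To show how a few of these stages fit together, here is a minimal sketch of a simplified raw-processing chain covering black level correction, white balance gains, and gamma-based tone mapping. The function names, the 10-bit raw assumption, and the gain values are illustrative assumptions, not from any particular ISP.

```python
import numpy as np

def black_level_correction(raw, black_level=64, white_level=1023):
    """Subtract the sensor black level and normalise to [0, 1] (10-bit raw assumed)."""
    return np.clip((raw.astype(np.float64) - black_level) / (white_level - black_level), 0.0, 1.0)

def apply_white_balance(rgb, gains=(2.0, 1.0, 1.8)):
    """Scale the R, G, B channels by per-channel gains (illustrative daylight gains)."""
    return np.clip(rgb * np.asarray(gains), 0.0, 1.0)

def tone_map(rgb, gamma=2.2):
    """Simple gamma-based tone mapping from linear light to display space."""
    return rgb ** (1.0 / gamma)

# Example: process a dummy raw frame (already demosaiced to RGB for brevity).
raw = np.random.randint(64, 1024, size=(4, 4, 3)).astype(np.uint16)
linear = black_level_correction(raw)
balanced = apply_white_balance(linear)
out = tone_map(balanced)
print(out.shape, out.min(), out.max())
```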
There are also a few auto functions; some of them are listed below.
Auto Exposure
This algorithm automatically adjusts image brightness based on the amount of light reaching the sensor, typically by controlling exposure time and gain.
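A common way to implement this is a feedback loop that nudges the exposure until the average frame luminance reaches a target. The following is a minimal sketch under that assumption; the target value, smoothing factor, and exposure limits are illustrative.

```python
import numpy as np

def auto_exposure_update(frame, exposure_ms, target_luma=0.18, smoothing=0.5,
                         min_exposure_ms=0.03, max_exposure_ms=33.0):
    """Return an updated exposure time that moves mean luminance toward the target.

    frame: linear-light image normalised to [0, 1].
    """
    mean_luma = max(float(np.mean(frame)), 1e-6)       # avoid division by zero
    ideal = exposure_ms * (target_luma / mean_luma)     # exposure that would hit the target
    new_exposure = (1 - smoothing) * exposure_ms + smoothing * ideal
    return float(np.clip(new_exposure, min_exposure_ms, max_exposure_ms))

# Example: a dark frame causes the exposure to increase for the next capture.
dark_frame = np.full((480, 640), 0.05)
print(auto_exposure_update(dark_frame, exposure_ms=8.0))  # > 8.0 ms
```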
Auto White Balance
This is the process of removing unrealistic colour casts so that objects that appear white in the real world are rendered white in the captured image or video.
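One widely used heuristic for this is the grey-world assumption, where per-channel gains are chosen so that the averages of the three colour channels become equal. This is a minimal sketch of that approach, not necessarily the algorithm used in any particular ISP.

```python
import numpy as np

def grey_world_white_balance(rgb):
    """Estimate per-channel gains with the grey-world assumption and apply them.

    rgb: linear-light image of shape (H, W, 3), values in [0, 1].
    """
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(rgb * gains, 0.0, 1.0), gains

# Example: an image with a warm (reddish) cast gets its red channel attenuated.
warm = np.random.rand(120, 160, 3) * np.array([1.0, 0.7, 0.5])
balanced, gains = grey_world_white_balance(warm)
print(gains)  # red gain < green gain < blue gain
```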
Interior camera imaging algorithms
Cameras on the exterior serve ADAS and AD applications. Until we reach full autonomy, there is also a need to monitor the human inside the cabin for fatigue and other behaviours. Driver monitoring is one such application, where a camera observes the driver's state; for AD this can be further extended to user authentication. One of the major camera imaging algorithms used here is RGB-IR processing.
RGB-IR
Applications such as driver monitoring systems use IR cameras to estimate driver fatigue and alertness. However, dedicating a camera to one specific application may not be cost-effective in the long run. This has led to RGB-IR sensors inside the car, which allow the same module to output both IR and RGB streams. Having both streams gives the same camera multiple uses: for example, the IR stream can be used for driver monitoring, while the RGB stream can be used for video calling or cabin monitoring.
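On an RGB-IR sensor, the colour filter array interleaves IR pixels with the R, G, and B pixels, and the imaging pipeline separates the raw frame into an IR stream and an RGB stream. The sketch below assumes a simple 4x4 CFA layout purely for illustration; real sensors use vendor-specific patterns and interpolate the missing positions.

```python
import numpy as np

# Hypothetical 4x4 RGB-IR colour filter array layout (illustrative only).
CFA_4X4 = np.array([
    ["B", "G",  "R", "G"],
    ["G", "IR", "G", "IR"],
    ["R", "G",  "B", "G"],
    ["G", "IR", "G", "IR"],
])

def split_rgb_ir(raw):
    """Split a raw RGB-IR frame into per-channel planes (R, G, B, IR).

    raw: 2D array whose height and width are multiples of 4.
    Missing positions are left as zeros; a real pipeline would interpolate them.
    """
    h, w = raw.shape
    cfa = np.tile(CFA_4X4, (h // 4, w // 4))
    planes = {}
    for name in ("R", "G", "B", "IR"):
        plane = np.zeros(raw.shape, dtype=np.float64)
        mask = cfa == name
        plane[mask] = raw[mask]
        planes[name] = plane
    return planes

# Example: split a dummy 8x8 raw frame and count the valid pixels per plane.
planes = split_rgb_ir(np.random.randint(1, 1024, size=(8, 8)))
print({k: int(v.astype(bool).sum()) for k, v in planes.items()})
```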
Image Signal Processing
Camera modules provide raw data to the automotive ECU; however, this cannot be used directly to make useful decisions. As explained above, this is addressed by an ISP, which converts the raw image into a high-quality image. ISPs perform many operations for this conversion, such as noise reduction, lens shading correction, gamma correction, and more.
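Conceptually, an ISP can be thought of as a chain of such stages applied in sequence to every raw frame. This minimal sketch composes a few hypothetical stage functions; the stage names echo the list above, but the ordering and implementations here are illustrative assumptions rather than the structure of any real ISP.

```python
import numpy as np

def run_isp_pipeline(raw, stages):
    """Apply a sequence of named ISP stages to a raw frame, in order."""
    image = raw.astype(np.float64)
    for name, stage in stages:
        image = stage(image)
    return image

# Hypothetical stages; real ISPs implement these in dedicated hardware blocks.
stages = [
    ("black_level", lambda img: np.clip((img - 64.0) / (1023.0 - 64.0), 0.0, 1.0)),
    ("denoise",     lambda img: img),                 # placeholder: no-op here
    ("gamma",       lambda img: img ** (1.0 / 2.2)),  # simple gamma correction
]

raw_frame = np.random.randint(64, 1024, size=(480, 640)).astype(np.uint16)
processed = run_isp_pipeline(raw_frame, stages)
print(processed.shape, round(float(processed.min()), 3), round(float(processed.max()), 3))
```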
