Autonomous vehicles are set to share the road with human drivers. As roads get denser, it becomes difficult for drivers as well as autonomous cars to identify every traffic sign along the way, and the chance of missing a crucial sign that could lead to a fatal accident cannot be neglected. To assist drivers and autonomous cars with this problem, camera-based traffic sign recognition systems are used as part of the advanced driver assistance system (ADAS). Some of the reasons a traffic sign may be missed:
- Poor illumination
- Traffic density
- Lack of concentration
Poor illumination and traffic density, two of the major reasons for missing a traffic sign, can be addressed by a robust traffic sign recognition system. Lack of concentration can be tackled in two ways: the first is a driver monitoring solution that watches the driver for inattention and raises an alert; the second is the same as for the other two causes, where a traffic sign recognition system catches the sign the driver missed.
A traffic sign recognition system consists of a front-facing camera with a wide field of view covering the entire road, together with an embedded platform running algorithms that can recognize any sign commissioned by traffic regulatory bodies. In general practice, these algorithms take a CNN-based approach rather than classic image processing techniques, which helps the system identify and classify a diverse set of signs.
Traditional machine learning for traffic sign recognition
Traditional machine learning involves a time-consuming feature extraction step, followed by classifiers such as support vector machines (SVMs). The algorithms typically used in a traditional pipeline for traffic sign recognition are:
- Image pre-processing: Spatial-temporal noise removal
- Feature extraction: Haar, HOG, and keypoint detectors
- Object detection: cascade classifiers, decision trees, and support vector machines (SVMs)
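To make the feature extraction step concrete, below is a minimal NumPy sketch of the HOG idea: per-cell histograms of gradient orientation, weighted by gradient magnitude. It is a simplified illustration only (no block normalization); a production pipeline would use an optimized implementation such as OpenCV's `HOGDescriptor` and feed the descriptor into an SVM.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG-style descriptor: per-cell orientation histograms
    weighted by gradient magnitude (block normalization omitted)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Toy 32x32 patch (sign crops are often rescaled to around this size)
patch = np.zeros((32, 32))
patch[:, 16:] = 1.0                 # a single vertical edge
desc = hog_features(patch)
print(desc.shape)                   # 4x4 cells * 9 bins = (144,)
```

The resulting fixed-length vector is what a classifier such as an SVM is trained on; the hand-designed nature of this step is exactly what CNNs remove.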
A CNN-based approach is preferred for traffic sign recognition, as the application demands real-time performance on edge devices.
CNN over Traditional Machine Learning algorithms for Traffic Sign Recognition
One might argue: if traditional machine learning works as intended for an application like traffic sign recognition on the edge, what is the need for CNNs? CNNs offer numerous advantages over traditional machine learning; some are listed below.
- CNN models offer better performance in terms of recall and accuracy
- The accuracy of the algorithm increases with increasing data
- Requires no manual feature extraction
CNN for traffic sign recognition
The deep learning approach uses neural networks to handle the compute-intensive task of traffic sign recognition. The stages mirror the traditional pipeline listed in the previous section:
- Image pre-processing: low light enhancement
- Feature extraction: Neural network encoders
- Object classification: classification networks such as AlexNet
- Object detection: SSD, YOLO, and R-CNN
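The core building blocks these networks share, convolution, a non-linearity, and pooling, can be sketched in a few lines. The following NumPy toy forward pass (random weights, 43 output classes as in GTSRB) is purely illustrative of the data flow; real systems use trained networks in a framework such as PyTorch or TensorFlow.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Naive 'valid' 2-D convolution (cross-correlation, as in CNN layers)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

x = rng.random((32, 32))                       # toy input "sign crop"
kernels = rng.normal(size=(4, 3, 3))           # 4 learned filters (random here)
maps = [max_pool(np.maximum(conv2d(x, k), 0))  # conv -> ReLU -> 2x2 max-pool
        for k in kernels]
feat = np.concatenate([m.ravel() for m in maps])   # 4 * 15 * 15 = 900 features
W = rng.normal(scale=0.01, size=(43, feat.size))   # classifier head, 43 classes
scores = W @ feat
probs = np.exp(scores - scores.max())
probs /= probs.sum()                               # softmax over the 43 classes
print(probs.shape)
```

Note that the "features" here are produced by the convolutional layers themselves, which is precisely the learned feature extraction that replaces Haar/HOG.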
Developing a CNN for traffic sign recognition
A typical traffic sign recognition system consists of algorithms for localization and classification in the same order. Localization refers to identifying the position of the traffic sign on the frame and classification refers to matching the sign to a pre-trained set of classes of traffic signs.
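The two-stage flow described above can be sketched as follows. Both stages are stubs: the windowed "detector score" and the toy classifier are placeholders for trained localization and classification models, kept only to show how the stages hand off crops to each other.

```python
import numpy as np

rng = np.random.default_rng(0)

def localize(frame, win=32, stride=16, thresh=0.1):
    """Stage 1 (stub): slide a window over the frame and keep regions whose
    score passes a threshold. A real system uses a trained detector here."""
    boxes = []
    for y in range(0, frame.shape[0] - win + 1, stride):
        for x in range(0, frame.shape[1] - win + 1, stride):
            crop = frame[y:y + win, x:x + win]
            if crop.std() > thresh:        # stand-in for a detector score
                boxes.append((x, y, win, win))
    return boxes

def classify(crop, classes=("stop", "yield", "speed_30")):
    """Stage 2 (stub): map a localized crop to one of the trained classes."""
    return classes[int(crop.mean() * len(classes)) % len(classes)]

# Synthetic frame: flat background with one textured "sign" region
frame = np.zeros((96, 128))
frame[32:64, 48:80] = rng.random((32, 32))

boxes = localize(frame)
labels = [classify(frame[y:y + h, x:x + w]) for (x, y, w, h) in boxes]
print(len(boxes))
```

In practice the detector emits bounding boxes at multiple scales and the classifier runs only on those crops, which keeps per-frame compute low.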
An optional first stage is to train a CNN model on a diverse dataset. For traffic sign recognition, there are readily available datasets: GTSDB (German Traffic Sign Detection Benchmark) for detection and GTSRB (German Traffic Sign Recognition Benchmark) for recognition. GTSRB is a single-image, multi-class classification dataset with more than 40 classes and over 50,000 sign images. The model is trained on a GPU, such as the one on an NVIDIA AGX Xavier, or on Google's Tensor Processing Unit (TPU). However, the TPU is a cloud-based deep learning acceleration solution, and one might not be comfortable uploading one's own dataset to a public cloud. Hence GPUs, which accelerate learning for any deep learning model, are generally preferred for traffic sign recognition as well.
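Whatever the hardware, the training loop has the same shape: forward pass, loss, gradient, update. Below is a minimal NumPy stand-in with synthetic features in place of GTSRB images and a softmax classifier in place of a CNN; only the class count (43, matching GTSRB) is taken from the real task.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_feat, n_samples = 43, 144, 430   # 43 GTSRB classes; toy data sizes

# Synthetic stand-in for extracted features and labels (real training would
# load the ~50,000 labelled GTSRB sign images instead).
X = rng.normal(size=(n_samples, n_feat))
y = rng.integers(0, n_classes, size=n_samples)

W = np.zeros((n_feat, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

losses = []
for epoch in range(50):
    p = softmax(X @ W + b)                                  # forward pass
    loss = -np.log(p[np.arange(n_samples), y] + 1e-12).mean()
    losses.append(loss)
    grad = p.copy()                                         # dLoss/dLogits
    grad[np.arange(n_samples), y] -= 1
    grad /= n_samples
    W -= 0.5 * (X.T @ grad)                                 # gradient step
    b -= 0.5 * grad.sum(axis=0)

print(round(losses[0], 3), round(losses[-1], 3))
```

The first loss equals ln(43) ≈ 3.761 (uniform predictions from zero weights) and falls as training proceeds; a GPU or TPU changes only how fast these same steps run.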
In general practice, the localization and recognition models are trained separately. Developers also often prefer a custom dataset, which allows them to showcase the robustness of the algorithm.
Embedded platform porting
Until now, from training to CNN model development, most of the work was done on a GPU or a cloud platform. In a real-world application, however, putting a GPU or another powerful compute system in a car would significantly increase its cost, driving buyers away. Hence the algorithm needs to run on a low-power embedded platform while maintaining high efficiency and a high recall rate.
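A common part of porting a CNN to such a platform is weight quantization: storing float32 weights as 8-bit integers to cut memory and enable integer arithmetic. The snippet below sketches symmetric per-tensor int8 quantization in NumPy; real toolchains (e.g. TensorFlow Lite or vendor SDKs) implement this, along with calibration, for you.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map float32 weights onto
    [-127, 127] with a single scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)  # toy weight matrix
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, err <= 0.5 * s)   # int8 storage; error bounded by half a step
```

The worst-case reconstruction error is half a quantization step, which is why well-quantized CNNs typically lose little recall while shrinking to a quarter of their float32 size.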
Camera-based traffic sign recognition is a must for any car that aims to achieve SAE Level 3 or higher autonomous driving capability. Beyond the image processing challenges of traffic sign recognition itself, the major hurdle remains porting the algorithms to an automotive-grade embedded platform with high efficiency and a high recall rate. Read more in "Real-Time Traffic Sign Recognition using Deep Network for Embedded Platforms" by the PathPartner ADAS team.