3 Things to Watch Out for in the Development of Autonomous Driving in 2019

Date: March 29, 2019

Autonomous cars are no longer a distant dream: leading autonomous driving start-ups are in the final stages of testing and preparing for first deployment. Initially catering only to early adopters, autonomous cars will soon reach the mass market. A range of technologies is driving this progress, and 2019 has a lot of new innovation to offer for autonomous driving and other automotive ADAS applications. Here are the top three things that will accelerate the development of autonomous cars for the mass market.

Edge Computing

Autonomous cars are equipped with Lidar, Radar, and camera sensors for mapping their surroundings and detecting objects; these sensors let them navigate without human intervention. In a future where every vehicle on the street is autonomous, cars will need to communicate with other cars (V2V), with roadside infrastructure (V2I), and more broadly with any other system (V2X). Cloud computing can handle this communication, but even though the cloud is well suited to processing such enormous volumes of data, network latency and limited bandwidth inevitably introduce lag. Edge computing, which means processing certain data at or near the sensor rather than in cloud infrastructure, reduces the load on the cloud and cuts transmission delay. In an autonomous car, where every millisecond can be a matter of life or death, that reduction matters.
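To make the idea concrete, here is a minimal sketch of edge-side filtering. The function, threshold, and event format are all hypothetical illustrations, not part of any real vehicle stack: the edge node inspects each Lidar range sweep locally and uploads only a compact event when an obstacle is close, instead of streaming the full point cloud to the cloud.

```python
import numpy as np

# Illustrative threshold: raise an event if anything is within 5 metres.
CLOSE_OBSTACLE_M = 5.0

def process_frame_at_edge(ranges_m):
    """Process one Lidar sweep at the edge.

    ranges_m: 1-D array of range readings in metres.
    Returns a small event dict to upload to the cloud, or None
    if the sweep contains nothing noteworthy.
    """
    nearest = float(np.asarray(ranges_m).min())
    if nearest < CLOSE_OBSTACLE_M:
        # Only a few bytes leave the vehicle instead of the raw sweep.
        return {"event": "close_obstacle", "nearest_m": round(nearest, 2)}
    return None

# Example: one sweep with a close return at 3.2 m
sweep = np.array([12.0, 9.5, 3.2, 15.0, 20.0])
print(process_frame_at_edge(sweep))  # {'event': 'close_obstacle', 'nearest_m': 3.2}
```

The design choice is the essence of edge computing here: the decision of what is worth transmitting is made next to the sensor, so the cloud link carries events rather than raw data.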

Figure 1: Edge computing in overall automated driving

Lidar/ Radar and Beyond

Autonomous cars, as explained earlier, carry numerous sensors for object detection and surround mapping. Lidar and Radar are commonly used for this purpose, but each has its own advantages and disadvantages. This leads to sensor fusion, in which Lidar or Radar is coupled with a camera for a clearer understanding of the scene. In practice, Lidar/Radar is paired with an RGB camera: because Lidar and Radar are monochromatic and cannot distinguish coloured objects, the RGB camera fills that gap, while an RGB camera alone is not sufficient for reliable object detection. Lidar/Radar systems use object detection algorithms to build a 3D point cloud of the surroundings; this alone supports ADAS applications such as blind spot detection (BSD), park assist, lane departure warning, and collision avoidance. Standalone camera sensors are used for traffic sign recognition, pedestrian detection, semantic segmentation, and road lane marking, typically with advanced CNN models. Coupling the camera with Lidar/Radar provides the complete understanding of the surroundings that autonomous navigation requires.
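The core geometric step in Lidar/camera fusion is projecting a 3D Lidar point into the camera image so that a point-cloud detection can be matched with the corresponding pixels. The sketch below assumes a simple pinhole camera model; the intrinsic matrix `K` and the identity extrinsics are made-up example values, not real calibration data.

```python
import numpy as np

# Example pinhole intrinsics (illustrative values, not a real calibration):
# fx = fy = 700 px, principal point at (640, 360) for a 1280x720 image.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def lidar_point_to_pixel(p_lidar, R=np.eye(3), t=np.zeros(3)):
    """Project a 3-D Lidar point (metres) to (u, v) pixel coordinates.

    R, t: rotation and translation from the Lidar frame to the camera
    frame (identity here for simplicity).
    """
    p_cam = R @ np.asarray(p_lidar, dtype=float) + t  # Lidar -> camera frame
    if p_cam[2] <= 0:
        return None                                   # behind the camera
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]           # perspective divide

u, v = lidar_point_to_pixel([1.0, 0.5, 10.0])
print(round(u, 1), round(v, 1))  # 710.0 395.0
```

Once a Lidar return maps to a pixel, the colour and texture from the RGB camera can be attached to that 3D point, which is exactly the complementarity the fusion exploits.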

Figure 2: Sensor fusion of camera with Lidar/Radar

Computer Vision

According to research by the Insurance Institute for Highway Safety (IIHS), there were roughly 34,000 fatal crashes in the USA during 2017, the majority of which could have been avoided with driver assistance systems. Computer vision enables the classification and identification of objects in the vehicle's path with high accuracy, and it underpins driver assistance (ADAS) features such as lane departure warning (LDW), blind spot detection (BSD), forward collision warning, and more. These systems use advanced CNN models and machine vision algorithms to identify and classify road objects, including vehicles, traffic signs, and pedestrians. Computer vision is not limited to driver assistance systems and autonomous cars; it also streamlines production bottlenecks in automotive manufacturing and is deployed to automate traffic management.
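As a small taste of the machine vision algorithms mentioned above, the sketch below finds the edges of a painted lane marking in a synthetic grayscale road strip using a simple horizontal-gradient kernel, the kind of low-level operation that sits underneath lane departure warning. The image values and threshold are invented for illustration; a production system would use calibrated imagery and far more robust detection.

```python
import numpy as np

def horizontal_gradient(img):
    """Convolve each row with the kernel [-1, 0, 1] (valid region only)."""
    return img[:, 2:] - img[:, :-2]

# Synthetic 4x8 road strip: dark asphalt (0.1) with a bright lane line
# painted in column 4 (0.9). Purely illustrative data.
strip = np.full((4, 8), 0.1)
strip[:, 4] = 0.9

grad = np.abs(horizontal_gradient(strip))
# Columns where the gradient is strong, offset by 1 for the 'valid' crop.
edge_cols = np.where(grad.max(axis=0) > 0.5)[0] + 1
print(edge_cols)  # the two columns flanking the lane line
```

Classical gradient filters like this handle well-lit, high-contrast markings; the CNN models the article refers to take over in the harder cases of faded paint, shadows, and clutter.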


Autonomous cars are no longer a thing of the future. With ground-breaking innovation in the semiconductor space producing high-performance automotive-grade processors, autonomous cars and various ADAS applications are seeing the light of day. As outlined above, this year we will witness a number of innovations in autonomous driving thanks to technologies like edge computing, sensor fusion, and computer vision, the breakthroughs needed to bring autonomous cars to the mass market.
