Our employees have earned various patents and honors, distinguishing both themselves and our organization.
Convolutional neural networks (CNNs) have made tremendous strides in object detection and recognition in recent years. However, extending the CNN approach to the understanding of video or volumetric data poses tough challenges, including trade-offs between representation quality and computational complexity, which is of particular concern on embedded platforms with tight computational budgets. This presentation explores the use of CNNs for video understanding.
Praveen Nayak, Tech Lead at PathPartner Technology, presents the “Using Deep Learning for Video Event Detection on a Compute Budget” tutorial at the May 2019 Embedded Vision Summit.
Face recognition systems have made great progress thanks to the availability of data, deep learning algorithms, and better image sensors. Face recognition systems should be tolerant of variations in illumination, pose, and occlusions, and should be scalable to large numbers of users with minimal need for capturing images during registration. Classical machine learning approaches are limited in their scalability. Existing deep learning approaches make use of either “too-deep” networks with increased computational complexity or customized layers that require large model files.
Praveen G.B., Technical Lead at PathPartner Technology, presents the “Creating a Computationally Efficient Embedded CNN Face Recognizer” tutorial at the May 2018 Embedded Vision Summit.
Face landmark detection is of profound interest in computer vision because it enables tasks ranging from facial expression recognition to understanding human behavior. Face landmark detection and tracking can be quite challenging, though, due to a wide range of face appearance variations caused by different head poses, lighting conditions, occlusions, and other factors.
Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the “Understanding and Implementing Face Landmark Detection and Tracking” tutorial at the May 2018 Embedded Vision Summit.
Since many road accidents are caused by driver inattention, assessing driver attention is important to prevent accidents. Distraction caused by other activities and sleepiness due to fatigue are the main causes of driver inattention. Vision-based assessment of driver distraction and fatigue must estimate face pose, sleepiness, expression, etc. Estimating these aspects under real driving conditions, including day-to-night transition, drivers wearing sunglasses, etc., is a challenging task.
Jayachandra Dakala, Technical Architect at PathPartner Technology, presents the “Approaches for Vision-based Driver Monitoring” tutorial at the May 2017 Embedded Vision Summit.
Face Pose Estimation From Rigid Face Landmarks For Driver Monitoring Systems
Source: Electronic Imaging, Autonomous Vehicles and Machines 2017, pp. 83-88(6)
Publisher: Society for Imaging Science and Technology
Face Recognition: Challenges and Issues in Smart City/Environments
Source: 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 2020, pp. 791-793
Sensor Fusion for In-Cabin Sensing
Technology’s ability to understand the mental and physical state of its users has come a long way in the last few years. While many technologies play a role in automotive development, sensors are critical to improving road safety. Among the available sensors, radar and cameras offer the highest accuracy and flexibility for in-cabin sensing features.
Regulatory initiatives such as the Euro NCAP ’25 Roadmap, the USA HOT CARS Bill, and FMVSS 208 are making many in-cabin safety features mandatory, such as:
- Life presence & occupancy detection
- Seat-belt status reminder
- Airbag suppression or low-risk deployment
- Detection of child/pet left behind
- Driver state monitoring
As a result, neither a radar-only nor a camera-only solution can make the cut. This free hour-long webinar explores how the individual yet complementary strengths of these two sensing modalities can be combined to meet all of the stated requirements with the desired accuracy.
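To make the fusion idea concrete, here is a minimal sketch of score-level fusion for a per-seat occupancy decision. All function names, weights, and the threshold are illustrative assumptions for this sketch, not details of any actual product implementation.

```python
# Minimal sketch of score-level sensor fusion for in-cabin occupancy
# detection. The weights and threshold below are illustrative
# assumptions, not values from a production system.

def fuse_occupancy(radar_conf: float, camera_conf: float,
                   w_radar: float = 0.5, w_camera: float = 0.5,
                   threshold: float = 0.6) -> bool:
    """Combine per-seat occupancy confidences from two sensors.

    radar_conf / camera_conf: confidences in [0, 1] from each detector.
    Returns True if the weighted fused score crosses the threshold.
    """
    fused = w_radar * radar_conf + w_camera * camera_conf
    return fused >= threshold

# Example: the camera view is partly occluded (low confidence), but
# radar detects micro-motion typical of a living presence, so the
# fused decision still reports the seat as occupied.
print(fuse_occupancy(radar_conf=0.9, camera_conf=0.4))  # fused = 0.65 -> True
```

Here a weak camera score is rescued by a strong radar score, which is exactly the complementarity the webinar describes: each sensor covers failure modes of the other.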