Embedded Vision Applications – Design approach for real-time classifiers

Overview of classification technique

Object detection/classification is a supervised learning process in machine vision used to recognize patterns or objects in data or images. It is a major component in Advanced Driver Assistance Systems (ADAS), where it is commonly used to detect pedestrians, vehicles, traffic signs, etc.
The offline classifier training process fetches sets of selected data/images containing objects of interest, extracts features from this input and maps them to corresponding labelled classes to generate a classification model. In the online process, real-time inputs are categorized against this pre-trained classification model, which finally decides whether the object is present or not.
Feature extraction uses techniques such as Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features extracted using integral images. The classifier uses a sliding-window, raster-scanning or dense-scanning approach to operate on the extracted feature image. Multiple image pyramids are used (as shown in part A of Figure 1) to detect objects of different sizes.

Computational complexity of classification for real-time applications

Dense pyramid image structures with 30-40 scales are used to obtain high detection accuracy, where each pyramid scale can have single or multiple levels of features depending upon the feature extraction method used. Classifiers like AdaBoost fetch data randomly from various data points located within a pyramid image scale. Double- or single-precision floating-point arithmetic is used where accuracy requirements are higher, making the operations computationally intensive. A significant amount of control code is also required at various levels of the classification process. These computational complexities make the classifier a difficult module to design efficiently for real-time performance in critical embedded systems, such as those used in ADAS applications.
Consider a typical classification technique such as AdaBoost (the adaptive boosting algorithm), which does not use all the features extracted from a sliding window. That makes it computationally less expensive than a classifier like SVM, which uses all the features extracted from the sliding windows in a pyramid image scale. Features are of fixed length in most feature extraction techniques, such as HOG, gradient images and LBP. In the case of HOG, the features contain many levels of orientation bins, so each pyramid image scale can have multiple levels of orientation bins, and these levels can be computed in any order as shown in parts B and C of Figure 1.
Figure 1: Pyramid image scales with multiple orientation bin levels
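To make the scattered data-access pattern described above concrete, here is a minimal C sketch (not taken from any actual implementation; the stump layout and feature indexing are illustrative assumptions) of how a boosted classifier might evaluate one sliding-window position:

#include <stdint.h>

typedef struct {
    int32_t feat_offset;  /* offset of the feature this stump examines */
    float   threshold;    /* learned split threshold                   */
    float   weight_pos;   /* vote when feature >= threshold            */
    float   weight_neg;   /* vote when feature <  threshold            */
} Stump;

static float eval_window(const float *feat,   /* window base in feature plane */
                         const Stump *stumps, int num_stumps)
{
    float score = 0.0f;
    for (int i = 0; i < num_stumps; i++) {
        /* Scattered, data-dependent read: unlike an SVM, the boosted
         * classifier touches only the features selected during offline
         * training, which is what makes its access pattern irregular. */
        float f = feat[stumps[i].feat_offset];
        score += (f >= stumps[i].threshold) ? stumps[i].weight_pos
                                            : stumps[i].weight_neg;
    }
    return score;  /* compared against a detection threshold by the caller */
}

The irregular indexing through feat_offset is exactly the kind of random fetching that makes such classifiers hard to map onto wide SIMD units.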

Design constraints for porting classifier to ADAS SoC

Object classification is generally categorized as a high-level vision processing use case, as it operates on extracted features generated by low- and mid-level vision processing. By nature it requires more control code, since it involves comparisons at various levels, and as mentioned earlier it involves double/float precision. These computational characteristics make classification look like a problem for a DSP rather than for a vector processor, which offers more parallel data processing power through SIMD operations.
Typical ADAS processors, such as Texas Instruments' TDA2x/TDA3x SoCs, incorporate multiple engines/processors targeted at high-, mid- and low-level vision processing. The TMS320C66x DSP in the TDA2x SoC supports fixed- and floating-point operation, with a maximum 8-way VLIW issuing up to 8 new operations every cycle, SIMD operations for fixed point and fully pipelined instructions. It supports up to 32 8-bit or 16-bit multiplies per cycle, and up to eight 32-bit multiplies per cycle. The EVE processor of the TDA2x has a 512-bit Vector Coprocessor (VCOP) with built-in mechanisms and vision-specialized instructions for concurrent, low-overhead processing. It has three parallel flat memory interfaces, each with 256-bit load/store bandwidth, providing a combined 768-bit-wide memory bandwidth. Efficient management of load/store bandwidth, internal memory, the software pipeline and integer precision are the major design constraints in achieving maximum throughput from these processors.
The classifier framework can be redesigned/modified to fit vector-processing requirements, thereby processing more data per instruction and achieving more SIMD operations.

Addressing major design constraints in porting classifier to ADAS SoC

Load/Store bandwidth management

Each pyramid scale can be rearranged to fit within the limited internal memory. Functional modules and processing regions can be selected and arranged appropriately so that the DDR load/store bandwidth stays within the required budget.

Efficient utilization of limited internal memory and cache

Images should be processed in optimally sized blocks, and the memory requirements of each block should fit into the hardware buffers, so that memory and computation resources are used efficiently.
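A common way to realize this is ping-pong (double) buffering, where the next block is fetched by DMA while the current one is processed. The sketch below is a generic illustration; dma_start, dma_wait and process_block are hypothetical placeholders for the platform's actual EDMA and processing routines, and the block size is an arbitrary choice:

#include <stddef.h>

#define BLK_BYTES (64 * 64)
static unsigned char ping[BLK_BYTES], pong[BLK_BYTES];   /* internal RAM */

extern void dma_start(void *dst, const void *src, size_t bytes); /* placeholder */
extern void dma_wait(void);                                      /* placeholder */
extern void process_block(unsigned char *blk);                   /* placeholder */

void process_image(const unsigned char *ddr_img, size_t num_blocks)
{
    unsigned char *cur = ping, *next = pong;
    dma_start(cur, ddr_img, BLK_BYTES);            /* prefetch block 0 */
    for (size_t b = 0; b < num_blocks; b++) {
        dma_wait();                                /* block b has landed */
        if (b + 1 < num_blocks)                    /* overlap: fetch b+1 */
            dma_start(next, ddr_img + (b + 1) * BLK_BYTES, BLK_BYTES);
        process_block(cur);                        /* compute on block b */
        unsigned char *t = cur; cur = next; next = t;   /* swap buffers */
    }
}

Because the DMA transfer of block b+1 overlaps the computation on block b, the DSP works almost entirely out of fast internal memory while the DDR traffic stays sequential and predictable.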

Software pipeline design for achieving maximum throughput

Some of the techniques that can be used to achieve maximum throughput from software pipelining are mentioned below.
  • The loop structure and its nesting levels should fit the hardware loop buffer requirements. For example, the C66x DSP in the TDA3x places restrictions on its SPLOOP buffer, such as an upper bound on the loop's initiation interval.
  • Unaligned memory loads/stores should be avoided: they are computationally expensive, typically costing twice the cycles of aligned loads/stores.
  • Data can be arranged, and compiler directives and options set, to obtain maximum SIMD load, store and compute operations.
  • Double-precision operations can be converted to single-precision floating-point or fixed-point representations, but the offline classifier must be retrained after such precision changes.
  • The innermost loop should be kept simple, without much control code, to avoid register pressure and register-spilling issues.
  • Division operations can be replaced with table-lookup-based multiplication or multiplication by the inverse (see the sketch following this list).
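As a small illustration of the last point, an inner-loop division can be replaced by a multiply with a reciprocal from a lookup table. The table size and Q15 fixed-point format below are illustrative assumptions, not taken from any specific implementation:

#include <stdint.h>

#define LUT_SIZE 256
static uint16_t recip_q15[LUT_SIZE];    /* recip_q15[d] ~= 32768 / d (Q15) */

void init_recip_lut(void)
{
    recip_q15[0] = 0;                   /* guard entry: never divide by 0 */
    for (int d = 1; d < LUT_SIZE; d++)
        recip_q15[d] = (uint16_t)(32768u / d);
}

/* out[i] = num[i] / den[i] becomes a multiply plus a shift: branch-free
 * and easy for the compiler to software-pipeline and vectorize. */
void normalize(const uint16_t *num, const uint8_t *den,
               uint16_t *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = (uint16_t)(((uint32_t)num[i] * recip_q15[den[i]]) >> 15);
}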

Conclusions

Classification for real-time embedded vision applications is a difficult computational problem due to its dense data processing, floating-point precision, multilevel control code and data fetching requirements. These complexities significantly limit how much of a platform's vector processing power a classifier can exploit. But the classifier framework can be redesigned/modified on a target platform to leverage the architecture's vector processing capability, by efficiently applying techniques such as load/store bandwidth management, internal memory and cache management, and software pipeline design.

References:

J. Sankaran and Z. Nikolic, "TDA2X, a SoC optimized for advanced driver assistance systems," Texas Instruments Incorporated, in Proc. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Sudheesh TV
Technical Lead
Anshuman S Gauriar
Technical Lead

Unconventional way of speeding up Linux boot time

Fast boot is a requirement on most embedded devices. For a good user experience, it is important that a hand-held camera or a phone becomes usable seconds after switching it on. Apart from user experience, fast boot time also becomes a non-negotiable hard requirement for some applications. For example, a network camera or a car rear-view camera must be accessible as soon as possible after power-up.
A brief introduction to Linux boot process:
The boot time for a device consists of:
  • Time taken by bootloader
  • Time taken by Linux kernel initialization up to the launch of first user space process
  • Time required for user space initialization, including launching all required daemons, running all Linux init scripts
Boot loader and Linux kernel init time can be brought down to less than 1 second on modern embedded platforms. Some standard techniques for this have been documented here.
In most boot time optimization problems, more than 80% of the boot time is taken by user space. Optimization of user space, on the other hand, cannot be standardized, because the user space varies from application to application.
The first user space process that runs on Linux startup is called ‘init’. The init process should take care of all user space initialization required to boot up the system. This includes:
  • Mounting all filesystems
  • Creating device nodes
  • Starting all daemons
  • Performing other functions specific to the use case, like loading of firmware, starting of userspace programs
Reducing user space boot time therefore means optimizing this initialization process. The standard init process for Linux systems is Sysvinit (or busybox init on embedded systems).

Sysvinit:

Both Sysvinit and busybox init read /etc/inittab to find out what needs to be done at boot up. The first line of the inittab file defines the default 'run level' of the system. The default process executed by sysvinit is /etc/init.d/rcS, which in turn calls /etc/init.d/rc. The 'rc' script then executes all the scripts in the /etc/rcN.d folder in order (where N is the run level).
Figure 1: sysvinit init scripts
 
Figure 2: inittab file
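For illustration, a minimal sysvinit-style inittab might look like this (the run level and paths are typical defaults, not taken from the figure above):

id:3:initdefault:
si::sysinit:/etc/init.d/rcS
l3:3:wait:/etc/init.d/rc 3

The first line sets the default run level to 3; sysvinit runs rcS once at boot and then hands run level 3 to the 'rc' script, which walks the S* scripts for that run level in order.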
 
The shortcomings of Linux Sysvinit are therefore apparent:
  • The order and dependencies of init scripts must be carefully assigned. As a rule, S01 will be executed before S02 and so on. Listing out the services and enforcing dependency has to be done manually, making the process tedious and error prone.
  • The init.d scripts are run in serial order. This leaves CPU resources underutilized and increases the system boot time.

Systemd:

Linux Systemd is an alternative to Sysvinit. Systemd replaces Sysvinit's concept of runlevels with 'targets'. A target is a set of Systemd 'units', and units can be roughly understood as 'services'. Each service is a file consisting of statements that define its behavior: the process it starts, its dependencies and many other parameters. Dependencies between services are enforced by means of the 'Wants' and 'Requires' directives: everything a service depends on should be listed under 'Wants' (good to have) or 'Requires' (necessary to have).
Figure 3: example Systemd service
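As a minimal illustration, a hypothetical service file might look like this (the unit name, binary path and dependencies below are made up for the sketch):

# /etc/systemd/system/camera-app.service (hypothetical)
[Unit]
Description=Camera application
Requires=network.target
After=network.target

[Service]
ExecStart=/usr/bin/camera-app --daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target

The [Install] section decides which target's .wants directory the unit gets linked into when the service is enabled.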
 
Systemd has a set of pre-defined targets that get started at system boot. The services that are needed at boot up should be linked under the directory '/etc/systemd/system/default.target.wants' (this linking is what 'systemctl enable' automates, based on the unit's [Install] section).
Systemd starts from the default target and launches all services required for that target, following the dependency chain. All services required by a particular service are started before that service itself is started, and services that do not depend on each other are started in parallel. Thus, Systemd achieves aggressive parallelization during boot up.
Figure 4: Adding custom Systemd service to boot target
 
Figure 5: Complete list of systemd services and dependencies on a system
 
Systemd reduces boot time in two ways:
  • With correct configuration of dependencies and targets, it is possible to perform only the initialization required for boot, deferring other activities until after boot. For example, instead of mounting all filesystems, we can make the boot target depend only on the root filesystem. The boot target will then be reached without waiting for the other filesystem mounts, and since filesystem mounting usually takes a long time, this speeds up the boot. It also becomes very easy to control what gets started at boot up and to track dependencies in a centralized way.
  • Systemd launches services in parallel unless a serialized dependency is enforced. This speeds up the boot process significantly and leads to better CPU utilization during boot.
Thus Systemd optimizes boot time by allowing fine-grained, easy control over dependencies and by starting services in parallel. It brings the additional advantages of clean handling of service failures (restart vs. fail), logging and system watchdog support.
Systemd also comes with the systemd-analyze tool, whose 'blame' and 'plot' subcommands show where boot time was spent and give a pictorial representation of how services were started. These tools make it quite convenient to see what is happening and optimize accordingly.
It is easy to bring user space boot time down to less than 4 or 5 seconds on an embedded device by using Systemd. Further optimizations can be squeezed out on a case-by-case basis; we have achieved a total boot time of 3 seconds on an ARM running at 1 GHz.
Systemd is available for download here : https://www.freedesktop.org/software/systemd/ and can be cross-compiled for embedded platforms.
PathPartner applies these methodologies for improving boot time in its solutions, covering complete boot time optimization (bootloader, kernel parameters and user space) on all embedded platforms.
Komal Jaysukh Padia
Technical Lead

How to build the Angstrom Linux Distribution for Altera SoC FPGA with OpenCV & Camera Driver Support

If your real-time image processing application, such as a driver monitoring system on an SoC FPGA, depends on OpenCV, you have to set up an OpenCV build environment for the target board. This blog will guide you through the steps to build a Linux OS with OpenCV and camera driver support for the Altera SoC FPGA.
In order to start building your first Linux distribution for the Altera platform, you must first install the necessary libraries and packages. Follow the initialization steps below for setting up the host PC.
The required packages to be installed for Ubuntu 12.04 are:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install sed wget cvs subversion git-core coreutils unzip texi2html texinfo libsdl1.2-dev docbook-utils gawk python-pysqlite2 diffstat help2man make gcc build-essential g++ desktop-file-utils chrpath libgl1-mesa-dev libglu1-mesa-dev mercurial autoconf automake groff libtool xterm *

IMPORTANT:

Please note that * indicates that the command is one continuous line of text. Make sure that the command is in one line when you are pasting it.
If the host machine runs a 64-bit version of the OS, then you need to install the following additional package:
$ sudo apt-get install ia32-libs
On Ubuntu 12.04 you will also need to make /bin/sh point to bash instead of dash. You can accomplish this by running the following command and selecting 'No':
$ sudo dpkg-reconfigure dash
Alternatively you can run:
$ sudo ln -sf bash /bin/sh
However, this is not recommended, as it may get undone by Ubuntu software updates.

Angstrom Buildsystem for Altera SOC: (Linux OS)

Download the scripts needed to start building the Linux OS. You can download the scripts for the angstrom build system from https://github.com/altera-opensource/angstrom-socfpga/tree/angstrom-v2014.12-socfpga
Unzip the files to the angstrom-socfpga folder:
$ unzip angstrom-socfpga-angstrom-v2014.12-socfpga.zip -d angstrom-socfpga
$ cd angstrom-socfpga
These are the setup scripts for the Angstrom buildsystem. If you want to (re)build packages or images for Angstrom, this is the thing to use.
The Angstrom buildsystem uses various components from the Yocto Project, most importantly the OpenEmbedded buildsystem, the bitbake task executor and various application/BSP layers.
Navigate to the sources folder, and comment out the meta-kde4 line in the layers.txt file:
$ cd sources
$ gedit layers.txt &
(comment out the line: meta-kde4,https://github.com/Angstrom-distribution/meta-kde4.git,master,f45abfd4dd87b0132a2565499392d49f465d847 *)
$ cd .. (navigate back to the head folder)
To configure the scripts and download the build metadata, run:
$ MACHINE=socfpga_cyclone5 ./oebb.sh config socfpga_cyclone5
After downloading the build metadata, download meta-kde4 from the link below and place it in the sources folder, since it was disabled in the layers.txt file earlier: http://layers.openembedded.org/layerindex/branch/master/layer/meta-kde4/
Source the environment file and use the commands below to start a build of the kernel/bootloader/rootfs:
$ . ./environment-angstrom
$ MACHINE=cyclone5 bitbake virtual/kernel virtual/bootloader console-image
Depending on the type of machine used, this will take a few hours to build. After the build is completed, the images can be found in:
Angstrom-socfpga/deploy/cyclone5/
After the build is completed, you will find the u-boot, dtb, rootfs and kernel image files in the above folder.

Adding OpenCV to the rootfs:

To add OpenCV to the console image (rootfs), we need to modify the local.conf file in the conf folder:
$ cd ~/angstrom-socfpga/conf
$ gedit local.conf &
Navigate to the bottom of local.conf, add the following line and save the file:
IMAGE_INSTALL += " opencv opencv-samples opencv-dev opencv-apps opencv-samples-dev opencv-static-dev "
Then build the console image again using the following commands:
$ cd ..
$ MACHINE=cyclone5 bitbake console-image
After the image is built, the rootfs will contain all the OpenCV libraries necessary for developing and running OpenCV-based applications.

Enabling Camera Drivers in the Kernel:

The Linux kernel v3.10 has a built-in UVC (USB Video Class) camera driver which supports a large number of USB cameras. In order to enable it, you need to configure the kernel using the menuconfig option:
$ MACHINE=cyclone5 bitbake virtual/kernel -c menuconfig
The above command opens a configuration menu window. From the menuconfig window, enable the following to enable UVC:
Device Drivers --->
    Multimedia support --->
        [*] Media USB Adapters
        <*> USB Video Class (UVC)
        [*]   UVC input events device support
Save and exit the config menu, then execute the following command:
$ MACHINE=cyclone5 bitbake virtual/kernel
The new kernel will be built with the UVC camera driver enabled and will be available in the /deploy/cyclone5 folder.
For the camera to work, the coherent pool must be set to 4M. This can be done via the U-Boot environment, as follows:

U-Boot Environment Variables

Boot the board, pressing any key to stop at the U-Boot console. The messages displayed on the console will look similar to the following listing:
U-Boot SPL 2013.01.01 (Jan 31 2014 - 13:18:04)
BOARD: Altera SOCFPGA Cyclone V Board
SDRAM: Initializing MMR registers
SDRAM: Calibrating PHY
SEQ.C: Preparing to start memory calibration
SEQ.C: CALIBRATION PASSED
ALTERA DWMMC: 0

U-Boot 2013.01.01 (Nov 04 2013 - 23:53:26)

CPU:   Altera SOCFPGA Platform
BOARD: Altera SOCFPGA Cyclone V Board
DRAM:  1 GiB
MMC:   ALTERA DWMMC: 0
In:    serial
Out:   serial
Err:   serial
Net:   mii0
Warning: failed to set MAC address

Hit any key to stop autoboot: 0
SOCFPGA_CYCLONE5 #

Configuration of U-Boot Environment Variables

SOCFPGA_CYCLONE5 # setenv bootargs console=ttyS0,115200 vmalloc=16M coherent_pool=4M root=${mmcroot} rw rootwait;bootz ${loadaddr} - ${fdtaddr} *

Save of U-Boot Environment Variables

SOCFPGA_CYCLONE5 #saveenv

Boot Kernel

SOCFPGA_CYCLONE5 #boot
Following all the above guidelines, you should be able to build the Angstrom Linux distribution for Altera SoC FPGA with OpenCV and camera driver support. This build was successfully implemented on the Altera Cyclone V SoC.
Idris Iqbal Tarwala
Sr. VLSI Design Engineer

HEVC without 4K

The consumer electronics industry wants to increase the number of pixels on every device by four times. Without HEVC this will quadruple the storage requirements for video content and clog already limited network bandwidth. To bring 4K or UHD to the consumer, the HEVC standard is essential, as it can cut these requirements at least by half; there is no doubt about it. But what the consumer electronics industry wants might not be what the consumer wants. Case in point: 3D TVs, which failed to take off due to lack of interest from consumers and original content creators. The same thing might happen to 4K. If 4K/UHD devices fail to take off, where will HEVC be?
Here, we make a case for HEVC.
According to data released by Cisco, video will be the biggest consumer of network bandwidth in the years to come, and it has the highest growth rate across all segments. HEVC-based solutions can give these over-clogged networks a breather.
Let’s look at how deploying HEVC solutions is advantageous to the major segments of the video industry.

1. Broadcast:

The broadcast industry is a juggernaut which moves very slowly. Broadcasters would gain the most in cost savings and consumer satisfaction by adopting HEVC, but they also have to make the highest initial investment. There are claims that UHD alone does not add much to the visual experience at a reasonable viewing distance on a 40-50 inch TV, so the industry is pushing for additional video enhancement tools such as higher dynamic range, more color information and better color representation (BT.2020). UHD support in broadcast may not be feasible without upgrading the infrastructure, and given the advantage HEVC brings to UHD video, including it in the infrastructure upgrade is the optimal choice. But if UHD fails to attract the consumer, what will happen to HEVC? Without UHD broadcast becoming a reality, the introduction of HEVC into broadcast infrastructure could be delayed heavily. On the other hand, contribution encoding can benefit heavily from HEVC with a reasonable change in infrastructure; whether broadcast companies adopt HEVC just for contribution encoding, without the pull of UHD, depends purely on the cost of adoption versus the savings it brings.

2. Video surveillance:

Surveillance deployments are increasing each day, and there is now the added overhead of backups on the cloud. The advantage in video surveillance applications is that they do not need backward compatibility. Hence HEVC is an ideal solution for the industry, either to cut the storage costs of current systems or to keep costs level for a new generation of systems that store more surveillance data at higher resolutions. ASIC developers are already developing HEVC encoders and decoders, and it is just a matter of time before video surveillance systems based on HEVC hit the market. Upgrading current video surveillance systems to HEVC, however, may not be feasible given the hard struggle of making legacy hardware support it.

3. Video Conference:

Video conferencing is a tricky case, as it needs to be highly interoperable and backward compatible with existing systems. Professional video conferencing systems might have to support both HEVC and earlier codecs to work with systems already in the field. General video conferencing solutions such as gtalk or Skype, on the other hand, would have the problem of licensing HEVC, as none of the current-day browsers or operating systems (except Windows 10, which most probably has just the decoder) has announced support for it. But the advantage HEVC can bring to video conferencing could be tremendous. Despite advances in bandwidth availability and the introduction of high-speed 3G and 4G services, the quality of the video conferencing experience has remained poor. This can be massively improved with HEVC, which has the potential to enable HD video calling on a 3G network. With or without the help of UHD, at least the professional video conferencing systems will adopt HEVC, unless another codec (the likes of VP9 or Daala) promises better advantages.

4. Storage, streaming and archiving:

The advantage of upgrading an archived database to HEVC needs no explanation: imagine the number of petabytes of storage that can be saved if all archived videos are converted to HEVC. OTT players like Netflix are already in the process of upgrading to HEVC, as it will help them reduce the burden on ISPs and cut storage and transmission costs in video streaming. Converting such a huge database from one format to another will not be easy. OTT and video streaming applications need scalable HEVC, where each video is encoded at different resolutions and different data rates; this requires multi-instance encoders running continuously to encode these videos at very high quality. However, the cost savings in these applications from adopting HEVC are very large, and upgrading to HEVC in the storage space will become inevitable.

5. End consumer:

The end consumer gets exposed to HEVC at different levels:
  • Decoders in the latest gadgets, viz. TVs, mobile phones, tablets, gaming consoles and web browsers.
  • Encoders built in wherever there is a camera, be it video calling on a mobile phone, tablet or laptop, or video recording on a mobile phone or standalone camera.
It is difficult to convince the less tech-savvy end consumer of the advantages of HEVC, but they are a huge market. In the US, data costs around $1 for 10 megabytes. HEVC can give consumers higher video quality for the same cost, or the current video quality at half the cost. Since HEVC matters to consumers, almost all chip manufacturers support HEVC or have it on their road map, and there are already consumer electronics products in the market with HEVC encoders built in. HEVC support will definitely be a differentiating factor for the end consumer looking for a new gadget.
Deploying HEVC-based solutions will yield gains in the long term, once the profits from reduced bandwidth overtake the initial investment. This is true for each of the segments discussed above. With 4K or UHD, this initial investment can be folded into higher-quality offerings and the costs offset. But even without any change in resolution or other features, the returns on HEVC investments are high, and when the entire video industry adopts the newer and better codec standard, the gains will multiply.
Prashanth NS
Technical Lead
Shashikantha Srinivas
Technical Lead

Future trends in IVI solutions

In the immediate future we shall witness the next big phase of technology convergence between the automotive industry and information technology, around the epicenter of 'connectivity'. There are several reasons for this revolutionary fusion. The first is the exponential rise in consumer demand: connectivity on the road is not a luxury anymore for our internet-savvy generation, who will make up 40% of new automobile users, and they expect more than just in-vehicle GPS navigation. Some developed countries are mandating safety features like DSRC over Wi-Fi, 360-degree NLOS awareness and V2V information exchange to reduce the number of accidents and casualties. Inclusion of these safety features on the dashboard is another reason why IVI systems and solutions are currently the hottest sell in the automotive industry. With these triggers, automobile manufacturers are now bridging the gap between non-IVI technologies and existing dashboard solutions.
The current dashboard solutions are provided by software giants Apple and Google, who are well ahead in providing SDKs for IVI solutions. A few other companies also demonstrated dashboards with IVI solutions at CES 2015, but CarPlay and Android Auto have the world's attention with their adaptability, portability and expandability across car market segments. Market experts predict the key IVI solutions will be Android Auto and CarPlay, and these current solutions are only the beginning of a fast-evolving technology.
Experts forecast that the evolution of IVI features will track the maturity of the architecture. In the primary phase, smartphone-attached dashboard models using connectivity such as USB and Bluetooth will continue to be widely used; by 2017, around 60% of vehicles are expected to be connected to smartphones. The focus will be on reusing smartphone computation power and mobile-integrated technologies like telephony, multimedia, GPS, storage, power management and voice-based systems. The next progression is toward connecting beyond the car: the conjunction with IoT and cloud technologies will draw in ecosystems like OEM services, healthcare, education and smart energy systems. Further into the future, external control and integration of the car ecosystem will become possible, including On-Board Diagnostics (OBD) systems, use of SoCs for smarter, more reliable and better-performing dashboard features, augmented reality and many more options.
Current IVI system architectures are designed to be flexible in adapting to future upgrades (such as changing your mobile phone model or upgrading its OS). An IVI solution core can be broadly divided into three categories: hardware, the underlying operating system and user-experience-enhancing applications. The architecture mainly consists of horizontal and vertical layers.
The vertical layer is made of:
  • Proprietary protocols
  • Multimedia rendering solutions
  • Core stack
  • Security and UI
Whereas the Horizontal layer includes:
  • Hardware
  • OS
  • Security Module
Proprietary portable protocols are the rudiments of IVI connectivity modules, and the UI of IVI head units can be built with core engines such as Java, Qt and Gtk. Wi-Fi, LTE, advanced HMI, and OBD tools and protocols, some of which are still under development, help deliver user connectivity and a better user experience.
From a car manufacturer's point of view, a centralized, upgradable dashboard with all IVI solutions and distinct non-IVI technologies integrated will be a prime factor in a buyer's choice of car. Here the dashboard middleware needs to play a crucial role, supporting and integrating multiple IVI solutions to provide a device-independent experience. The flexibility to accommodate upcoming IVI technologies, such as data interfaces for application developers, is also a deciding factor.
Although IVI core solutions are fundamental differentiating factors for the end user, car OEMs need a single-roof vendor with expertise in the various IVI and non-IVI technologies, including system architecture, OS development, protocols, hardware and applications. PathPartner, a product engineering service provider for automotive embedded systems, fits these requirements well. PathPartner has already crossed the threshold into the automotive infotainment industry with a strong business model, comprising development activities such as porting infotainment systems based on Android, Linux and QNX to multiple platforms such as Freescale and Texas Instruments, and customized middleware for Android Auto, MirrorLink and CarPlay.
We at PathPartner, with a dedicated team of engineers with niche expertise in embedded products, have successfully delivered certified applications on platforms such as iOS and Windows, and continue to work on other IVI solutions for renowned OEMs.
Kaustubh D Joshi

HEVC Compression Gain Analysis

HEVC, the future video codec, needs no introduction as it slowly makes its way into the multimedia market. Despite its very high execution complexity compared to earlier video standards, a few software implementations have proven to be real time on multi-core architectures. While OTT players like Netflix are already in the process of migrating to HEVC, 4K TV sales are increasing rapidly as HEVC promises to make UHD resolution a reality. ASIC HEVC encoders and decoders are being developed by major SoC vendors, which will enable HEVC in hand-held, battery-operated devices in the near future. All these developments are motivated by one major claim:

‘HEVC achieves 50% better coding efficiency compared to its predecessor AVC’.

Initial experimental results justify this claim, showing on average a 50% improvement with desktop encoders and approximately 35% with real-time encoders, compared to AVC. This improvement is mainly achieved by the set of new tools and modifications introduced in HEVC, including but not limited to larger block sizes, larger transform sizes, extended intra modes and SAO filtering. Note that the 50% improvement is an average over a set of input contents; it does not guarantee half the bit rate at the same quality for every input. Now we will tear down the 'average' part of the claim and discuss the situations in which HEVC is more efficient, considering four main factors.
  1. Resolution
  2. Bit rate
  3. Content type
  4. Delay configuration

1. Resolution:

HEVC is expected to enable higher-resolution video, and results substantiate this with higher coding efficiency gains at higher resolutions. At 4K, compression gains can be more than 50%, which makes it possible to encode 4K content at 30 frames per second at bit rates of 10 to 15 Mbps with decent quality. The reason behind this behavior is the feature that contributes more than half of HEVC's coding efficiency gains: larger coding block sizes. At high resolutions, larger block sizes lead to better compression because neighboring pixels are more strongly correlated. We have observed that 1080p sequences show on average 3-4% better compression gains than their 720p counterparts.

2. Bitrate:

Encoder results indicate better compression gains at low and mid-range bit rates than at very high bit rates. At low QPs (very high bit rates), transform coefficients contribute more than 80% of the bits. Larger coding units save the bits spent on MVD and other block-header coding, and larger transform blocks compress better as they gather more coefficients for energy compaction. But at high bit rates, header bits take up only a small percentage of the bitstream, which suppresses the gains from larger coding blocks; and at high QPs, the smaller number of non-zero coefficients limits the gains possible from large transform blocks.
We have observed that, for the ParkJoy sequence, BD-rate gains were 8% better in the QP range 32-40 than in the QP range 24-32. Similar behavior was found in most sequences.

3. Content type:

Video frames with uniform content have shown better compression gains, as uniformity favors larger block sizes. Encoders tend to select smaller block sizes for frames with very high activity, which makes the block division similar to AVC and reduces HEVC's advantage; lower correlation between pixels likewise limits the gains from larger transform blocks. Streams such as BlueSky, with significant uniform content per frame, produced 10% better gains than high-activity streams like ParkJoy.
On the other hand, videos with stationary content or low motion produced better gains, as larger block sizes are chosen in inter frames for such content.

4. Delay configuration:

Video encoders can be configured to use different GOP structures based on application needs. Here we analyze the compression gains of different delay configurations w.r.t. AVC. First, all-intra configurations (used in low-delay, high-quality, error-robust broadcast applications) produce roughly a 30-35% gain, deviating from HEVC's 50% claim. Larger block sizes are not very effective in intra frames, and the gains from the other new tools, such as additional intra modes and SAO, limit the average gain to about 35%; hence HEVC-Intra will not be 50% better than AVC-Intra. It is also observed that the low-delay IPP GOP configuration produces slightly better gains (approximately 3-4% on average) than the random-access GOP configuration with B frames. This could be purely due to implementation differences in the HM reference software.
Thus, 50% gains cannot be achieved for all video contents, but many encoder implementations have demonstrated nearly 50% or better average gains over AVC. Such compression efficiency can have a major impact on broadcast and OTT applications, where bandwidth consumption, and thus cost, can be cut in half. Though other emerging video compression standards such as VPx and Daala claim similar gains, they are yet to be proven. It would be interesting to see how VPx and Daala change the future of HEVC, but right now one thing is sure: HEVC is the future video codec, and it is here!
Prashanth NS
Technical Lead

Introducing ppinng!HDR for iOS

Today, PathPartner has released its camera app for iOS: ppinng!HDR, an app to capture high-quality HDR images.
The auto-exposure mode in your phone camera assesses scene brightness to set an appropriate exposure value for image capture. But in high-contrast scenes, detail is either crushed into complete blackness or blown out due to overexposure.
When you click an image in the bright outdoors with an interesting subject in the shadows, auto-exposure decreases the exposure, dimming your subject further. When you click an image in a dark room with a small window and a bird perched outside, the exposure is increased to clarify the indoors, saturating the bird out.
Have a look and check how our ppinng!HDR camera app will help you…
[Image: HDR on vs. HDR off]
ppinng!HDR empowers your mobile camera with high dynamic range through an intelligent technique. A single click in ppinng!HDR captures multiple images at varying exposures; our high-dynamic-range algorithm picks the pixels that matter from each of these images and composes them, providing you with the impression of a true-to-life HDR image.
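To give a flavor of how such composition can work in general (this is a generic exposure-fusion sketch, not PathPartner's actual algorithm; the weighting constants are illustrative), each pixel can be weighted by how well exposed it is and then averaged across the captures:

#include <math.h>

/* Weight peaks for mid-gray pixels and falls off near black and white. */
static float well_exposed(float v /* 0..1 */)
{
    const float mu = 0.5f, sigma = 0.2f;
    float d = v - mu;
    return expf(-(d * d) / (2.0f * sigma * sigma));
}

/* Fuse n exposures of a single pixel (values normalized to 0..1). */
float fuse_pixel(const float *exposures, int n)
{
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < n; i++) {
        float w = well_exposed(exposures[i]);
        num += w * exposures[i];   /* weighted contribution */
        den += w;                  /* normalization factor  */
    }
    return den > 0.0f ? num / den : exposures[0];
}

Pixels that are crushed or blown out in one exposure get a near-zero weight there, so the detail is taken from the capture where it actually survived.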
The HDR feature in your stock camera app might demand that you hold the phone absolutely steady during HDR capture, and when scene subjects move between captures, the image might contain unpleasant 'ghosted' regions.
ppinng!HDR’s alignment technique intelligently adjusts images to compensate for shaky and wobbly hands. The ‘deghosting’ algorithm employed by ppinng!HDR smartly ensures that movement of scene subjects does not result in unpleasant ‘ghosting’ artifacts.
[Image: deghosting example]
Apart from making the app faster and more powerful, we've made it more beautiful and easier to use, and we now let you choose the quality of the image.

The images below show HDR ON and HDR OFF

[Image: HDR ON vs. HDR OFF comparison]

Screen Shots

Image 1
Image 2
Image 3

Lots more on the way!

Our mobile apps team is working to provide you with more updates! Stay tuned with us!

Get our ppinng!HDR Camera App NOW!

Download our app from the Apple App Store

Our ppinng!HDR is also available in

the Google Play Store and the Windows Store
Soujanya Rao
Executive Marketing Communications
Narasimha Kaushik Narahari
Senior Software Engineer

Great news for the users of our Android app – ppinng!HDR

ppinng!HDR empowers your mobile camera with high dynamic range through intelligent techniques.
We have launched our ppinng!HDR App with a new Zoom Feature.
The user interface is simple: you can zoom in or out either with a "pinch to zoom" gesture or by sliding the zoom control visible on the right-hand side of the app.

Before

[Screenshot: before zoom]

After

[Screenshot: after zoom]
Our mobile team is working on providing you with more updates! Stay tuned.

Download our ppinng!HDR at

[Google Play badge]
Soujanya Rao
Executive Marketing Communications

ppinng!HDR finally arrives on Windows Phone

Welcome news for all Windows Phone users: PathPartner Technology has announced ppinng!HDR on the Windows platform. The app boasts all the latest features present in the Android version and has been extensively optimized for both high-end and low-end Windows phones to provide a seamless user experience.

The app now allows photos to be taken in 16:9 aspect ratio, as opposed to 4:3 when the app was first launched.

As announced in my previous blog, ppinng!HDR will get an auto-HDR feature in the next update.
Kudos to the ppinng!HDR team for an excellent effort; keep up the good work.

Download our ppinng!HDR at

[Windows Store badge]
Idris Iqbal Tarwala
Sr. VLSI Design Engineer, Blog Editor

Exciting news for ppinng!HDR users!

ppinng!HDR gets its most anticipated update yet! All those who found it tedious to wait for the counter to reach zero between clicks will be ecstatic to welcome the new feature that lets you take the next shot immediately, without the 3-second wait.
The HDR team has introduced extensive NEON optimization for the ARM architecture to drastically reduce image processing time and enhance the overall user experience. In this update, all the image processing is done for you in the background, so you never have to wait before taking another shot. The exposure has also been calibrated across all devices, resulting in vivid images.
The app will soon get another cool update, adding an Auto HDR mode that tells the user, via an on-screen real-time indicator, which scenes are suitable for HDR capture.
That’s all from us in this update. Watch out this space for more amazing updates on this app. Stay tuned for the “HDR Eye” in our next update!!

Download our ppinng!HDR at

[Google Play badge]
Idris Iqbal Tarwala
Sr. VLSI Design Engineer, Blog Editor