Video Quality Detection Using HeatMap

Video compression algorithms exploit the fact that high-frequency components are not effectively perceived by the human eye. By allocating bits differentially to various spatial-frequency components, video compression reduces the image size with barely noticeable visual artifacts. This is achieved by dividing a picture into small blocks and coding their transform coefficients both differentially and efficiently. However, if the video encoder compresses the input aggressively, these blocks can introduce artifacts that are visible to the human eye.
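As a toy sketch of this idea (illustrative Python, not the coding scheme of any particular codec), a block's transform coefficients can be quantized more coarsely at high frequencies while still reconstructing the block closely:

```python
import math

def dct(block):
    """Orthonormal 1D DCT-II of a block of samples."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (i + 0.5) * k / n) for i, x in enumerate(block))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse of the orthonormal DCT-II above (DCT-III)."""
    n = len(coeffs)
    out = []
    for i in range(n):
        s = 0.0
        for k, c in enumerate(coeffs):
            scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += scale * c * math.cos(math.pi * (i + 0.5) * k / n)
        out.append(s)
    return out

def quantize(coeffs, base_step=2.0, hf_step=16.0):
    """Quantize low-frequency coefficients finely and high-frequency ones
    coarsely, mimicking how codecs allocate fewer bits to high frequencies."""
    n = len(coeffs)
    denom = max(n - 1, 1)
    steps = [base_step + (hf_step - base_step) * k / denom for k in range(n)]
    return [round(c / s) * s for c, s in zip(coeffs, steps)]
```

For a smooth 8-sample block, the high-frequency coefficients are small and mostly quantize to zero, so the reconstruction stays close to the original; an aggressive `hf_step` on a busy block is where visible blocking starts.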
PSNR is one of the tools used to check the fidelity of the encoded video against the original input. The video encoders we focus on here are lossy, and computing PSNR alone does not highlight visually relevant mismatches. Despite these shortcomings, PSNR is still a great filtering tool to highlight areas in the reconstructed output for further investigation. At PathPartner, we have developed a "PSNR HeatMap" tool to visualize block-based PSNR.

How PSNR HeatMap works:

The PSNR HeatMap tool takes the reconstructed stream and the original stream as input and computes block-wise PSNR (the block size is configurable). This block-wise PSNR is mapped to a color table with a progressive color gradient from green to red. The mapped color value is assigned to chroma Cr, while chroma Cb is forced to zero for every output pixel. Forcing both Cb and Cr components to zero gives a green tinge to the rendered picture; as the Cr value changes from 0 to 0xFF, the overlay tinge ranges from green to red. Keeping the Cb component at zero ensures no other colors get overlaid on the rendered picture. The PSNR-to-color mapping is such that a PSNR value greater than 52 dB gives a green tinge, while a PSNR below 25 dB gives a red tinge.
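A minimal sketch of the block-wise PSNR computation and Cr color mapping described above (pure Python on grayscale data; the actual tool's block traversal and color table may differ):

```python
import math

def block_psnr(orig, recon, block_size=16, max_val=255):
    """Compute block-wise PSNR between two equally sized grayscale
    images (lists of rows). Returns a 2D list of PSNR values in dB."""
    h, w = len(orig), len(orig[0])
    psnr_map = []
    for by in range(0, h, block_size):
        row = []
        for bx in range(0, w, block_size):
            sse, n = 0, 0
            for y in range(by, min(by + block_size, h)):
                for x in range(bx, min(bx + block_size, w)):
                    d = orig[y][x] - recon[y][x]
                    sse += d * d
                    n += 1
            mse = sse / n
            # Identical blocks get a capped "infinite" PSNR value.
            psnr = 100.0 if mse == 0 else 10 * math.log10(max_val * max_val / mse)
            row.append(psnr)
        psnr_map.append(row)
    return psnr_map

def psnr_to_cr(psnr, lo=25.0, hi=52.0):
    """Map PSNR to a Cr overlay value: 0 (green tinge) at >= hi dB,
    0xFF (red tinge) at <= lo dB, linear in between."""
    t = (hi - min(max(psnr, lo), hi)) / (hi - lo)
    return int(round(t * 0xFF))
```

Each `psnr_to_cr` value is then written into the Cr plane of the corresponding block, with Cb held at zero, to produce the rendered heat map.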
Below is the color scale that we use (PSNR in dB).
The code is available at https://github.com/pphevc/freetools/tree/master/psnrheatmap under the Apache license.

Embedded Vision Application – Design approach for real time classifiers

Overview of classification technique

Object detection/classification is a supervised learning process in machine vision to recognize patterns or objects from data or images. It is a major component of Advanced Driver Assistance Systems (ADAS), where it is commonly used to detect pedestrians, vehicles, traffic signs, etc.
The offline classifier training process fetches sets of selected data/images containing objects of interest, extracts features from this input and maps them to corresponding labelled classes to generate a classification model. Real-time inputs are then categorized against the pre-trained classification model in an online process, which finally decides whether the object is present or not.
Feature extraction uses techniques such as Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) features extracted using an integral image. The classifier uses a sliding-window, raster-scanning or dense-scanning approach to operate on the extracted feature image. Multiple image pyramids are used (as shown in part A of Figure 1) to detect objects of different sizes.
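As a rough illustration of the pyramid structure (an assumed example, not the exact scale schedule of any particular detector), the per-scale image sizes for a fixed downscaling ratio can be computed as:

```python
def pyramid_sizes(width, height, scale_ratio=0.91, min_size=64):
    """Return the (width, height) of each pyramid scale, shrinking by
    `scale_ratio` per level until either dimension falls below `min_size`."""
    sizes = []
    w, h = float(width), float(height)
    while w >= min_size and h >= min_size:
        sizes.append((int(w), int(h)))
        w *= scale_ratio
        h *= scale_ratio
    return sizes
```

With a 1280x720 input and a ratio of 0.91, this yields 26 scales; the dense pyramids with 30-40 scales mentioned below simply use a ratio closer to 1, trading more computation for finer size coverage.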

Computational complexity of classification for real-time applications

Dense pyramid image structures with 30-40 scales are used to obtain high detection accuracy, where each pyramid scale can have single or multiple levels of features depending on the feature-extraction method used. Classifiers like AdaBoost use random data fetches from various data points located in a pyramid image scale. Double- or single-precision floating point is used for higher accuracy requirements, at the cost of computationally intensive operations. A significantly higher amount of control code is also used as part of the classification process at various levels. These computational complexities make the classifier a difficult module to design efficiently for real-time performance in critical embedded systems, such as those used in ADAS applications.
Consider a typical classification technique such as AdaBoost (the adaptive boosting algorithm), which does not use all the features extracted from a sliding window. That makes it computationally less expensive than a classifier like an SVM, which uses all the features extracted from sliding windows in a pyramid image scale. Features are of fixed length in most feature-extraction techniques such as HOG, gradient images and LBP. In the case of HOG, features contain many levels of orientation bins, so each pyramid image scale can have multiple levels of orientation bins, and these levels can be computed in any order as shown in parts B and C of Figure 1.
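To make the contrast concrete, here is a minimal, hypothetical sketch of boosted decision-stump evaluation: each weak learner reads only a single feature index, so the classifier touches just a few of the extracted features per window (unlike an SVM dot product over the full feature vector):

```python
def eval_boosted_stumps(features, stumps, threshold=0.0):
    """Evaluate a boosted classifier of decision stumps on one window.
    `features` is the flat feature vector for the window; each stump is
    (feature_index, split_threshold, weight). Only the indexed features
    are read, which is what keeps AdaBoost evaluation cheap."""
    score = 0.0
    for idx, split, weight in stumps:
        # Each weak learner votes +1/-1 based on a single feature value.
        vote = 1.0 if features[idx] > split else -1.0
        score += weight * vote
    return score > threshold

# Hypothetical example: 3 stumps reading features 2, 7 and 4 of the vector.
stumps = [(2, 0.5, 0.7), (7, 0.1, 0.4), (4, 0.9, 0.2)]
```

Note that the three reads hit scattered indices (2, 7, 4), which is exactly the random data-fetching pattern that makes AdaBoost awkward for wide SIMD loads.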
pyramid image
Figure 1: Pyramid image scales with multiple orientation bin levels

Design constraints for porting classifier to ADAS SoC

Object classification is generally categorized as a high-level vision processing use case, as it operates on extracted features generated by low- and mid-level vision processing. By nature it requires more control code, since it involves comparisons at various levels, and, as mentioned earlier, it involves double/float precision. These computational characteristics make classification look like a problem for a DSP rather than for vector processors, which have more parallel data-processing power and SIMD operations.
Typical ADAS processors, such as Texas Instruments' TDA2x/TDA3x SoCs, incorporate multiple engines/processors targeted at high-, mid- and low-level vision processing. The TMS320C66x DSP in the TDA2x SoC has fixed- and floating-point support, with an 8-way VLIW architecture that can issue up to 8 new operations every cycle, SIMD operations for fixed point, and fully pipelined instructions. It supports up to 32 8-bit or 16-bit multiplies per cycle, and up to eight 32-bit multiplies per cycle. The EVE processor of the TDA2x has a 512-bit Vector Coprocessor (VCOP) with built-in mechanisms and vision-specialized instructions for concurrent, low-overhead processing. It has three parallel flat memory interfaces, each with 256-bit load/store bandwidth, providing a combined 768-bit-wide memory bandwidth. Efficient management of load/store bandwidth, internal memory, the software pipeline and integer precision are the major design constraints in achieving maximum throughput from these processors.
The classifier framework can be redesigned/modified to adapt to vector-processing requirements, thereby processing more data per instruction and achieving more SIMD operations.

Addressing major design constraints in porting classifier to ADAS SoC

Load/Store bandwidth management

Each pyramid scale can be rearranged to fit the limited internal memory. Functional modules and processing regions can be selected and arranged appropriately to keep the DDR load/store bandwidth at the required level.

Efficient utilization of limited internal memory and cache

Images can be processed in optimally sized tiles, and memory requirements should fit into the hardware buffers for efficient utilization of memory and computation resources.

Software pipeline design for achieving maximum throughput

Some of the techniques that can be used to achieve maximum throughput from software pipelining are mentioned below.
  • Loop structures and their nesting levels should fit the hardware loop buffer requirements. For example, the C66x DSP in the TDA3x places restrictions on its SPLOOP buffer, such as a maximum initiation interval.
  • Unaligned memory loads/stores should be avoided, as they are computationally expensive; in most cases they cost about twice the cycles of aligned loads/stores.
  • Data can be arranged, or compiler directives and options set, to get maximum SIMD load, store and compute operations.
  • Double-precision operations can be converted to single-precision floating-point or fixed-point representations, but the offline classifier must be retrained after such precision changes.
  • The innermost loop can be kept simple, without much control code, to avoid register pressure and register spilling.
  • Division operations can be avoided by replacing them with table look-ups and multiplications (i.e., multiplication by a precomputed inverse).
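As an illustration of the last point (a hypothetical sketch, not code from the actual classifier), division by a small bounded integer can be replaced by a table of fixed-point reciprocals plus a multiply and a shift:

```python
SHIFT = 16
# Precomputed Q16 reciprocals (rounded up) for divisors 1..255; index 0 unused.
RECIP = [0] + [((1 << SHIFT) + d - 1) // d for d in range(1, 256)]

def div_by_table(x, d):
    """Compute x // d via a reciprocal table look-up, a multiply and a
    shift instead of a hardware divide. With the ceiling reciprocal above,
    the result is exact whenever x >= 0, 1 <= d <= 255 and x * d < 2**SHIFT."""
    return (x * RECIP[d]) >> SHIFT
```

On a VLIW DSP the multiply and shift pipeline cheaply, whereas an integer divide is a long multi-cycle (often software-emulated) operation.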

Conclusions

Classification for real-time embedded vision applications is a difficult computational problem due to its dense data-processing requirements, floating-point precision requirements, multi-level control code and data-fetching requirements. These complexities significantly limit how much vector processing a classifier can exploit. However, the classifier framework can be redesigned/modified on a target platform to leverage the architecture's vector-processing capability by efficiently applying techniques such as load/store bandwidth management, internal memory and cache management, and software-pipeline design.

References:

J. Sankaran and Z. Nikolic (Texas Instruments Incorporated), "TDA2x, a SoC optimized for Advanced Driver Assistance Systems," 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Pathpartner author Sudheesh
Sudheesh TV
Technical Lead
Pathpartner author Anshuman
Anshuman S Gauriar
Technical Lead

How to build the Angstrom Linux Distribution for Altera SoC FPGA with OpenCV & Camera Driver Support

If your real-time image-processing applications on SoC FPGAs, such as a driver monitoring system, depend on OpenCV, you have to develop an OpenCV build environment for the target board. This blog will guide you through the steps to build a Linux OS with OpenCV and camera driver support for the Altera SoC FPGA.
In order to start building the Linux distribution for the Altera platform, you must first install the necessary libraries and packages. Follow the initialization steps below for setting up the host PC.
The required packages to be installed on Ubuntu 12.04 are:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install sed wget cvs subversion git-core coreutils unzip texi2html texinfo libsdl1.2-dev docbook-utils gawk python-pysqlite2 diffstat help2man make gcc build-essential g++ desktop-file-utils chrpath libgl1-mesa-dev libglu1-mesa-dev mercurial autoconf automake groff libtool xterm *

IMPORTANT:

Please note that * indicates that the command is one continuous line of text. Make sure the command is on one line when you paste it.
If the host machine runs a 64-bit version of the OS, you also need to install the following package:
$ sudo apt-get install ia32-libs
On Ubuntu 12.04 you will also need to make /bin/sh point to bash instead of dash. You can accomplish this by running the following command and selecting 'No' when prompted:
$ sudo dpkg-reconfigure dash
Alternatively you can run:
$ sudo ln -sf bash /bin/sh
However, this is not recommended, as it may get undone by Ubuntu software updates.

Angstrom Buildsystem for Altera SOC: (Linux OS)

Download the scripts needed to start building the Linux OS. You can download the scripts for the Angstrom build system from https://github.com/altera-opensource/angstrom-socfpga/tree/angstrom-v2014.12-socfpga
Unzip the files to the angstrom-socfpga folder:
$ unzip angstrom-socfpga-angstrom-v2014.12-socfpga.zip -d angstrom-socfpga
$ cd angstrom-socfpga
These are the setup scripts for the Angstrom buildsystem. If you want to (re)build packages or images for Angstrom, this is the thing to use.
The Angstrom buildsystem uses various components from the Yocto Project, most importantly the OpenEmbedded buildsystem, the BitBake task executor and various application/BSP layers.
Navigate to the sources folder, and comment out the following line in the layers.txt file:
$ cd sources
$ gedit layers.txt &
meta-kde4,https://github.com/Angstrom-distribution/metakde.git,master,f45abfd4dd87b0132a2565499392d49f465d847 *
$ cd .. (navigate back to the top-level folder)
To configure the scripts and download the build metadata, run:
$ MACHINE=socfpga_cyclone5 ./oebb.sh config socfpga_cyclone5
After downloading the build metadata, you can download meta-kde4 from the link below and place it in the sources folder, since it was disabled earlier in the layers.txt file: http://layers.openembedded.org/layerindex/branch/master/layer/meta-kde4/
Source the environment file and use the commands below to start a build of the kernel/bootloader/rootfs:
$ . ./environment-angstrom
$ MACHINE=cyclone5 bitbake virtual/kernel virtual/bootloader console-image
Depending on the type of machine used, this will take a few hours to build. After the build is completed the images can be found in:
Angstrom-socfpga/deploy/cyclone5/
This folder will contain the U-Boot, dtb, rootfs and kernel image files.

Adding the OPENCV Image to the rootfs:

To add OpenCV to the console image (rootfs), we need to modify the local.conf file in the conf folder:
$ cd ~/angstrom-socfpga/conf
$ gedit local.conf &
In local.conf, navigate to the bottom of the file, add the following line and save the file:
IMAGE_INSTALL += " opencv opencv-samples opencv-dev opencv-apps opencv-samples-dev opencv-static-dev "
Then build the console image again using the following command:
$ cd ..
$ MACHINE=cyclone5 bitbake console-image
After the image is built, the rootfs will contain all the OpenCV libraries necessary for developing and running OpenCV-based applications.

Enabling Camera Drivers in the Kernel:

The Linux kernel v3.10 has a built-in UVC (USB Video Class) camera driver which supports a large number of USB cameras. In order to enable it, you need to configure the kernel using the menuconfig option:
$ MACHINE=cyclone5 bitbake virtual/kernel -c menuconfig
The above command opens a configuration menu. From the menuconfig window, enable the following options for UVC:
Device Drivers --->
    Multimedia support --->
        Media USB Adapters --->
            [*] USB Video Class (UVC)
            [*]   UVC input events device support
Save and exit the config menu then execute the following command:
$ MACHINE=cyclone5 bitbake virtual/kernel
The new kernel will be built with the UVC camera drivers enabled and will be available in the deploy/cyclone5 folder.
For the camera to work, the coherent pool must be set to 4M. This can be done through the U-Boot environment variables as follows:

U-Boot Environment Variables

Boot the board, pressing any key to stop at the U-Boot console. The messages displayed on the console will look similar to the following listing:
U-Boot SPL 2013.01.01 (Jan 31 2014 - 13:18:04)
BOARD: Altera SOCFPGA Cyclone V Board
SDRAM: Initializing MMR registers
SDRAM: Calibrating PHY
SEQ.C: Preparing to start memory calibration
SEQ.C: CALIBRATION PASSED
ALTERA DWMMC: 0

U-Boot 2013.01.01 (Nov 04 2013 - 23:53:26)

CPU : Altera SOCFPGA Platform
BOARD: Altera SOCFPGA Cyclone V Board
DRAM: 1 GiB
MMC: ALTERA DWMMC: 0
In: serial
Out: serial
Err: serial
Net: mii0
Warning: failed to set MAC address

Hit any key to stop autoboot: 0
SOCFPGA_CYCLONE5 #

Configuration of U-Boot Environment Variables

SOCFPGA_CYCLONE5 # setenv bootargs console=ttyS0,115200 vmalloc=16M coherent_pool=4M root=${mmcroot} rw rootwait;bootz ${loadaddr} - ${fdtaddr} *

Saving the U-Boot Environment Variables

SOCFPGA_CYCLONE5 #saveenv

Boot Kernel

SOCFPGA_CYCLONE5 #boot
By following all the above guidelines, you should be able to build the Angstrom Linux distribution for the Altera SoC FPGA with OpenCV and camera driver support. This build was successfully implemented on the Altera Cyclone V SoC.
Pathpartner author Idris Tarwala
Idris Iqbal Tarwala
Sr. VLSI Design Engineer

Introducing ppinng!HDR for iOS

Today, PathPartner has released its camera app for iOS: ppinng!HDR, an app to capture high-quality HDR images.
The auto-exposure mode in your phone camera assesses the scene brightness to set an appropriate exposure value for image capture. But in vivid scenes, detail is either crushed into complete blackness or blown out due to overexposure.
When you capture an image in the bright outdoors with an interesting subject in the shadows, auto-exposure decreases the exposure, dimming your subject further. When you capture an image in a dark room with a bird perched outside a small window, the exposure is increased to brighten the indoors, saturating the bird out.
Have a look and check how our ppinng!HDR camera app will help you…
hdron_off
ppinng!HDR empowers your mobile camera with high dynamic range through an intelligent technique. A single click in ppinng!HDR captures multiple images at varying exposures. Our high-dynamic-range algorithm picks the pixels that matter from each of these images and composes them, providing you with a true-to-life HDR image.
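At a high level, this kind of exposure fusion can be sketched as a per-pixel weighted blend that favors well-exposed pixels from each capture (a simplified illustration only, not PathPartner's actual algorithm):

```python
import math

def well_exposedness(v, mid=0.5, sigma=0.2):
    """Weight a normalized pixel value by how close it is to mid-gray."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images):
    """Fuse same-sized grayscale exposures (values in [0, 1]) by
    per-pixel weighted averaging with well-exposedness weights."""
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            weights = [well_exposedness(img[y][x]) for img in images]
            total = sum(weights) or 1.0
            out[y][x] = sum(wt * img[y][x]
                            for wt, img in zip(weights, images)) / total
    return out
```

For each pixel, crushed shadows and blown highlights receive tiny weights, so the fused result is dominated by whichever capture exposed that region best.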
The HDR feature in your stock camera app might require you to hold the phone absolutely steady during HDR capture. And when subjects in the scene move between captures, the image might contain unpleasant 'ghosted' regions.
ppinng!HDR’s alignment technique intelligently adjusts images to compensate for shaky and wobbly hands. The ‘deghosting’ algorithm employed by ppinng!HDR smartly ensures that movement of scene subjects does not result in unpleasant ‘ghosting’ artifacts.
deghost
Apart from being faster and more powerful, we've made the app more beautiful and easier to use, and it now lets you choose the quality of the image.

The images below show HDR ON and HDR OFF

i2_trans

Screen Shots

Image 1
image 2
image 3

Lot more on our way!

Our mobile apps team is working to provide you with more updates! Stay tuned with us!

Get our ppinng!HDR Camera App NOW!

Download our App in Apple store

Our ppinng!HDR is also available in

Google Playstore and Windows Store
Pathpartner author Soujanya
Soujanya Rao
Executive Marketing Communications
Pathpartner author Narasimha
Narasimha Kaushik Narahari
Senior Software Engineer

Great news for the users of our Android App-  ppinng!HDR

ppinng!HDR empowers your mobile camera with high dynamic range through intelligent techniques.
We have launched our ppinng!HDR App with a new Zoom Feature.
The user interface is simple: you can zoom in or out either with the "pinch to zoom" gesture or by scrolling the zoom control visible on the right-hand side of the app.

Before

androidapp-before
 

After

androidapp-after
Our mobile team is working on providing you with more updates! Stay tuned.

Download our ppinng!HDR at

getit on android
Pathpartner author Soujanya
Soujanya Rao
Executive Marketing Communications

ppinng!HDR finally arrives on Windows Phone

Welcome news for all Windows Phone users: PathPartner Technology has announced ppinng!HDR on the Windows platform. The app boasts all the latest features present in the Android version and has been extensively optimized for both high-end and low-end Windows phones to provide a seamless user experience.

The app now allows photos to be taken in a 16:9 aspect ratio, as opposed to the 4:3 ratio available when the app was first launched.

blog1
As announced in my previous blog, ppinng!HDR will get an auto-HDR feature in the next update.
Kudos to the ppinng!HDR team for excellent effort, keep up the good work.

Download our ppinng!HDR at

windowsphone
Pathpartner author Idris Tarwala
Idris Iqbal Tarwala
Sr. VLSI Design Engineer, Blog Editor

Exciting news for ppinng!HDR users!

ppinng!HDR gets its most anticipated update yet! All those who found it time-consuming to wait for the counter to reach zero between subsequent clicks will be ecstatic to welcome the new feature, which lets you take the next shot immediately, without the 3-second wait.
The HDR team has introduced extensive NEON optimization for the ARM architecture to drastically reduce image processing time and enhance the overall user experience. In this update, all the image processing is done in the background, so you never have to wait before taking another shot. The exposure on all devices has been calibrated, resulting in vivid images.
The app will soon get another cool update, adding an auto-HDR mode and an onscreen real-time indicator that tells you which scenes are suitable for HDR capture.
That’s all from us in this update. Watch out this space for more amazing updates on this app. Stay tuned for the “HDR Eye” in our next update!!

Download our ppinng!HDR at

getit on android
Pathpartner author Idris Tarwala
Idris Iqbal Tarwala
Sr. VLSI Design Engineer, Blog Editor

ppinng!HDR gets a Speed Boost

Today we are rolling out a significant update to ppinng!HDR that promises to take HDR to the next level. We have been hard at work the past few weeks to make your HDR experience smoother and faster. After many rounds of optimization, we are proud to report that HDR processing time is down by almost 50% on the ARM architecture. Several bugs have been squashed and feature enhancements rolled in at all stages of our algorithm, resulting in image-quality improvements across the board.
With this update we’d also like to introduce you to our instagram account over at http://instagram.com/ppinnghdr. Check out our sample images and send us yours! We’d love to see a slice of the world through ppinng!HDR.
collage_20140515234114794
collage_20140519172249630
collage
And we aren’t resting just yet. There’s still work to be done and we have a whiteboard full of ideas that will find their way into the app in coming releases. Keep watching this space for more.
Pathpartner author Akshay
Akshay Panday
Technical Lead

ppinng!HDR – Simple to use, Best in results!

Let me begin with a simple question: what features do you want in your mobile phone? Just think for a moment. Almost all of us would have put "camera" on the list, wouldn't we? The camera has become such an important part of our lives and a must-have feature in our mobile phones. And when a feature is this important, it is natural to expect it to be simple to use and the best in quality.
From the day this was realised, there has been a mad race among companies and individuals alike to make cameras and camera-related services better and simpler. Initially the advancements were limited to hardware. But with the advent of smartphones, whose processing speed and storage capacity compete with desktops, improving the software, i.e. the camera applications, has taken on an important role.
PathPartner has brought out a very good HDR camera application called "ppinng!HDR". It is an amazing application that provides seamless HDR functionality on any mobile device running Android version 4.0 or above.
First, let's look at what HDR, or High Dynamic Range, is. The human eye is sensitive across a high dynamic range of intensities and can adjust to dim as well as bright light to give a better view; cameras (despite auto exposure!) can't. This is because auto exposure adjusts the exposure to bring most pixels of the image close to normal intensity, wiping out details from small, albeit interesting, regions illuminated differently from the majority. Our eye has an intensity dynamic range of 10,000:1, whereas the best of cameras are restricted to a maximum of 1500:1!
So how does ppinng!HDR help us get a great quality picture? A single click in ppinng!HDR triggers the capture of multiple images at varying exposures. The high-dynamic-range algorithm composes the pixels that matter from each of these images into a single HDR image. Now let's see some of the highlights that are earning the application rave reviews!
  1. Motion of objects and unrelated people in the scene is no longer a cause for worry. The deghosting algorithm intelligently takes care of this!
  2. Scene ambience can be adjusted to suit your mood.
  3. All the differently exposed images can be saved if you wish to.
  4. Take picture through touch (With timer to hold the phone steady.)
  5. Switch on/off HDR mode as per your wish.
  6. Choose your camera source.
  7. Choose quality and size of image.
Update: ppinng!HDR has been downloaded over 20,000 times and has received great feedback from about 250 reviewers! Check out what a few of them say.
Coenraad says: "PERFECT MINIMALISTIC DESIGN Just wishing for a feature to use the volume rocker as the camera trigger instead of a screen tap for phones which don't have a hardware key"
Rekha gave feedback as “Good but needs bit more stability. Zooming option is not supported. Please add this option in upcoming version, Thanks PP!!!”
Vlaad writes “LIKE LIKE LIKE I like this a lot. Working great especially in indoor situations. What I dont like is that shutter sound could not be disabled. REVIEW UPDATE: Magnificent. No shutter and quality is insane. BRILLIANT JOB”
From the reviews we understand we are doing great, but there is room for improvement too. Thank you, everyone, for your feedback. We are closely monitoring it and improving. As the reviews suggest, a newer version has been released! Scroll down to check the upgrades!
  1. New easy and intuitive user interface.
  2. Performance optimisation
  3. Fixed Win-Death issues in many phone models.
  4. Added features like
    • Enable/Disable shutter sound
    • Choose image save path
    • Share photos with social media
    • Rating the application
screenshot2
Image 1: Version 1.0
Screenshot_2014-04-14-14-14-30
Image 3
v4_3_1
Image 2: Version 2.0
v4_22
Image 4
We can easily see the contrast between version 1.0 and version 2.0. The new UI gives more viewing space. Also, the option bar opens up only when the arrow mark is clicked.
In Images 3 and 4 we see some of the newer options. The "Save Original Images" and "Shutter Sound" options are shown in Image 3, and in Image 4, in the lower right corner, you have the sharing option.
Enough said about the application. Download this amazing app by clicking here, and do rate and review it. It will be great to have feedback from you all.
Pathpartner author Varun
Varun Joshi
Software Engineer