Edge AI hardware: How chipmakers are redefining architectures and accelerating market adoption

Date: June 18, 2020

Author: Sushmita Sharma

Introduction

The explosion of Artificial Intelligence is ushering in a new era for semiconductor manufacturers, bringing new opportunities and challenges. While AI tasks have traditionally been handled in mega data centers and on server-grade platforms, there is a gradual shift towards edge computing. Applications such as autonomous driving, where a moment’s delay in decision making could be fatal, are paving the way for AI on the edge. Access control applications, where the privacy of data is pivotal, rely on edge processing rather than cloud-based processing.

Putting AI on the edge (and the end-points) is inevitable

According to a recent report by Markets and Markets, the edge AI hardware market was expected to ship 610 million units in 2019 and is likely to reach 1,559.3 million units by 2024, at a CAGR of 20.64% during the forecast period.
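As a quick sanity check, the reported CAGR can be derived from the shipment figures themselves:

```python
# Sanity check of the cited Markets and Markets figures (units in millions):
# 610 M units in 2019 growing to 1,559.3 M units by 2024 implies the CAGR below.
start_units = 610.0      # 2019 shipments (millions)
end_units = 1559.3       # 2024 forecast (millions)
years = 5                # 2019 -> 2024

cagr = (end_units / start_units) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # ≈ 20.65%, consistent with the reported 20.64%
```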

What is really driving this growth?

  1. Need for reliable real-time performance: Many applications require AI on the device due to the reliability and latency constraints of remote computing. Imagine an autonomous car unable to make a decision because it has lost connectivity to remote servers; the result could be fatal. Or imagine a medical robot receiving its command a moment too late while assisting a patient in a life-threatening situation.
  2. Concerns over data security: Applications where data privacy and security are of utmost importance depend on processing on the edge to avoid any misuse of information during transmission or otherwise. For example, access control systems in high-security enterprises may not want to store data on public cloud networks; an edge installation would be essential.
  3. Cost of data transmission: Cost is another factor. Applications that need to process huge amounts of data regularly would incur skyrocketing network transmission costs. Processing on the edge could potentially minimize those costs.
  4. Innovations in silicon technology: Finally, innovations in silicon technology have paved the way for specialized AI hardware that is physically smaller, consumes less power, generates less heat, and delivers the horsepower needed for the most intensive algorithms.

All of these factors combined have given the semiconductor industry the impetus to enter a new phase of growth, driven by AI chipsets and associated software.

Does AI on the edge need specialized hardware and chip architectures?

To answer this question, we first need to understand how AI solutions work. Typically, ML and AI solutions work in two phases: a training phase, where the neural networks are trained to assign ideal weights to their parameters, and an inference phase, where the networks use those models and weights to arrive at an output. Both processes are compute-intensive, but their operational requirements differ. While training is expected to be a one-time process to develop the final network, inference is what needs to run in real time for the vast majority of applications.
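The split between the two phases can be sketched in a few lines. This is a toy illustration only, using a single linear neuron trained by gradient descent; real edge workloads train and deploy deep networks with dedicated frameworks.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1]          # ground-truth relation to learn
w = np.zeros(2)

# --- Training phase: fit weights to data (compute-heavy, typically offline) ---
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X)      # gradient of mean squared error
    w -= 0.1 * grad                        # gradient-descent update

# --- Inference phase: apply frozen weights (runs in real time, on-device) ---
def infer(x):
    return x @ w

print(infer(np.array([1.0, 1.0])))         # close to 2.0 - 1.0 = 1.0
```

Training iterates over the whole dataset many times; inference is a single forward pass with fixed weights, which is why it is the phase that must fit the edge device's latency and power budget.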

The process of training these complex neural network architectures on mammoth datasets, and inferring meaningful information from them, involves huge computational requirements. This in turn calls for highly evolved, specialized hardware.

Traditionally, a lot of AI processing happened in data centers. GPUs, conventionally used for graphics processing, were fine-tuned for high-performance computing in these applications. FPGAs, CPUs, DSPs, and finally ASICs then made their way into the AI value chain. Today, Intel’s Xeon architecture, NVIDIA’s GPUs, and Google’s TPUs are the backbone of most AI implementations.

Interested in learning more about machine learning and AI hardware? Read our whitepaper, “Understanding performance analysis and comparative study of different AI hardware”.

Download Whitepaper

However, AI at the edge has a more stringent set of requirements. Processing power, processing speed, memory access speed and capacity, power consumption, physical size, and cost all become far more prominent. “Performance per watt” is a trending term in AI circles that has become one of the key benchmarking criteria for AI hardware.

Consider the AI architecture for a smart camera application vs. that for a constrained IoT device.

  • AI chip for a constrained IoT device: In all likelihood, an AI chip for a constrained, battery-powered IoT device would need to run on a very thin power budget, say 100 microwatts, making minimizing power consumption one of the key priorities. Performing AI on the edge would also save the power associated with RF transmission of data to the cloud. Companies developing AI chips for IoT end-point devices would need to make sure their chips deliver the lowest possible power consumption for these specific applications.

  • AI chip for a smart camera: An on-premise smart camera, on the other hand, would need to handle image processing on ultra-HD content at 30 or 60 fps and hence would have very different performance, power, and memory requirements, again making the case for specialized hardware.
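A back-of-the-envelope calculation shows just how far apart these two profiles sit. The 100-microwatt budget comes from the text above; the camera's power budget and per-pixel operation count are illustrative assumptions, not vendor specifications:

```python
# Contrast of the two edge profiles above (illustrative numbers).

# Constrained IoT end-point: ~100 microwatt budget (figure from the text)
iot_power_w = 100e-6

# Smart camera: 4K ("ultra-HD") content at 60 fps
width, height, fps = 3840, 2160, 60
pixels_per_second = width * height * fps
print(f"Camera pixel throughput: {pixels_per_second / 1e6:.0f} Mpixels/s")

# Hypothetical: ~100 operations per pixel for the vision pipeline,
# and an assumed 2 W power budget for the camera SoC.
ops_per_second = pixels_per_second * 100
camera_power_w = 2.0
perf_per_watt = ops_per_second / camera_power_w
print(f"Required efficiency: {perf_per_watt / 1e9:.0f} GOPS/W")
```

Even with generous assumptions, the camera must sustain roughly half a gigapixel per second, while the IoT end-point has four orders of magnitude less power to work with, so a single chip architecture cannot serve both well.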

To address these diverse requirements, chipmakers are designing custom chip architectures that can deliver optimized performance for specialized applications. From massively parallel architectures to optical computing, and from in-memory computing to larger on-chip memories, silicon companies are using different techniques to deliver on that elusive and hotly contested “performance per watt” metric.

While established silicon vendors such as Qualcomm, Intel, NVIDIA, MediaTek, and others have customized their hardware for AI applications, there is also an explosion of start-ups building specialized AI chipsets to address the diverse needs of the edge AI market.

How can chipmakers accelerate their edge AI chip adoption?

AI hardware solutions are only useful if they fit into the overall AI ecosystem of tools, frameworks, and other software. AI chipmakers face a long path to monetization. AI solution providers expect a whole range of software tools and verticalized solutions that can help them build trust in the technology and pave the way for a faster time to market. The onus largely lies on chipmakers to provide those tools and references. The three key areas that AI chipmakers need to focus on to drive faster adoption include:

  1. Proving hardware performance: Processing speed, performance per watt, chip size and form factor, interfaces, on-chip memory, cost – the breadth and importance of the metrics on which hardware performance is measured cannot be overstated. One of the most definitive ways of proving hardware performance is to run benchmarking tests using standard networks and datasets. For example, the MLPerf framework has become one of the most sought-after benchmarks in the AI industry. It provides an industry-standard mechanism for measuring the training and inference performance of ML hardware.
  2. Availability of a software toolchain and compatibility with standard frameworks: The second pillar driving faster adoption of AI chips is built on the foundation of strong software support around the hardware. A great-performing piece of hardware without compatible software tools and frameworks is as good as a luxury car without four wheels – you can’t drive it anywhere. From compilers and emulators to kernel libraries and resource allocators, and from quantizers and static analyzers to profilers and equivalence checkers, the list of software tools needed to harness the power of AI chipsets is long. Add to that the fact that all of this software needs to be compatible with industry-standard frameworks, and chipmakers need to move mountains to build a software ecosystem that allows solution providers to rapidly adopt their chipsets.
  3. Verticalized solution stack: An AI architecture could target both the autonomous vehicle and industrial robotics markets. While the chip architecture may stay the same, the interfaces, connectivity options, memory requirements, and several other factors may vary. A chipmaker may need to offer variants of development or reference boards and dedicated algorithm support – either in-house or through an ecosystem of partners. Having proofs-of-concept already developed would speed up the development and production phases.
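To make one of the toolchain pieces above concrete, here is a minimal sketch of post-training weight quantization, the job a quantizer performs. It uses a symmetric per-tensor int8 scheme as a hypothetical example, not any particular vendor's quantizer:

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 so the max magnitude lands on 127."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.52, -1.3, 0.07, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("max quantization error:", np.abs(w - w_hat).max())
```

Storing weights as int8 instead of float32 cuts memory traffic by 4x and lets the chip use cheaper integer multipliers, which is precisely why quantization tooling matters so much for edge silicon.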

Conclusion

The last couple of years have seen an explosion in end-node and edge AI chipsets. While the established semiconductor players are well entrenched in the AI ecosystem, recent years have also seen a rapid rise in the number of AI chip start-ups aiming to capture a piece of the exploding AI value pie. While the opportunity that edge AI presents is huge, there are several challenges that chip companies need to overcome to drive faster market adoption. The future, in large part, depends on the ability of these chipmakers to holistically address these challenges and drive innovation in bringing AI to edge devices.

PathPartner works with AI chipmakers to accelerate their market adoption. Please reach out to us at marcom@pathpartnertech.com to know more.
