Embedded Insiders Podcast: where AI meets the edge of IoT

Talk of AI at the edge, and of the benefits that come with edge AI technology, is actually not that new. We may have added the letter “A” in front of IoT, giving us what we now call AIoT (which really describes the overall environment created when implementing edge AI technology), but the engineering struggle remains the same: “it’s harder than it sounds.” We’re on a mission to change that phrase to “it’s easier with ADLINK.” Let’s talk.

Full Podcast Episode:

https://www.buzzsprout.com/1734285/8162746-what-you-need-to-know-about-edge-ai

There is a big difference between AI at the edge and AI in the cloud. Data is generated at the edge; with AIoT, the intelligence and the action occur in that same location. This is vital because some applications can’t spare the time to send data back and forth to the cloud: a few milliseconds can be the difference between a production line staying up or going down, or between an autonomous vehicle avoiding or causing a collision. If the data doesn’t need to travel anywhere, responses are obviously quicker. 5G makes the round trip far more manageable, but not necessarily fast enough for applications like autonomous vehicles or medical devices.
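To make the latency argument concrete, here is a back-of-the-envelope sketch. All of the figures below are illustrative assumptions, not measurements: even if a data-center accelerator runs the model faster, the network round trip can blow a tight control-loop deadline that on-device inference meets.

```python
# Illustrative latency budget; every figure here is an assumption, not a measurement.
LOCAL_INFERENCE_MS = 5.0   # model execution on the edge device
CLOUD_RTT_MS = 40.0        # network round trip to a cloud region
CLOUD_INFERENCE_MS = 2.0   # faster accelerator in the data center
DEADLINE_MS = 10.0         # e.g., a production-line control-loop deadline

def total_latency_ms(inference_ms: float, network_ms: float = 0.0) -> float:
    """Total response time = compute time + any network time."""
    return inference_ms + network_ms

edge_latency = total_latency_ms(LOCAL_INFERENCE_MS)
cloud_latency = total_latency_ms(CLOUD_INFERENCE_MS, CLOUD_RTT_MS)

print(f"edge:  {edge_latency:.1f} ms (meets deadline: {edge_latency <= DEADLINE_MS})")
print(f"cloud: {cloud_latency:.1f} ms (meets deadline: {cloud_latency <= DEADLINE_MS})")
```

With these assumed numbers the cloud path is 42 ms end to end, so the faster data-center silicon never gets the chance to matter.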

Edge AI platform designers face some difficult decisions when choosing among the available processor options for computing AI at the edge. They have to choose between CPUs, GPUs, and VPUs; and as if this weren’t enough, Google has added TPUs (Tensor Processing Units) to the already abundant collection of processing units.

The different processor types each have their own strengths. A CPU, for example, handles sequential, heavy-duty operations well. A GPU, at the opposite end, excels at running many small tasks in parallel. These resources need to work in sync, however, which is why coupling CPUs with GPUs (a heterogeneous computing approach) is the ideal solution for AIoT equipment at the edge.
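That division of labor can be sketched as a simple dispatch rule. This is a toy illustration, not a real driver API: the threshold and the thread-pool stand-in for a GPU kernel launch are assumptions made for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_cpu(task: str) -> str:
    # Sequential, branch-heavy work stays on the CPU.
    return f"cpu:{task}"

def run_on_gpu_batch(tasks: list) -> list:
    # Many small, uniform operations are batched for the GPU.
    # (A thread pool stands in for a real kernel launch here.)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda t: f"gpu:{t}", tasks))

def dispatch(tasks: list, parallel_threshold: int = 4) -> list:
    """Route a workload: large uniform batches go to the GPU,
    everything else runs sequentially on the CPU."""
    if len(tasks) >= parallel_threshold:
        return run_on_gpu_batch(tasks)
    return [run_on_cpu(t) for t in tasks]
```

The point of the sketch is the design choice, not the code: something in the system has to decide, per workload, which silicon gets the job, and that decision logic is exactly what heterogeneous edge platforms have to get right.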

While there is no such thing as a one-size-fits-all edge AI solution, there are four questions to ask to determine if an edge AI solution is right for a particular design: 

  1. What’s the algorithm you need to run?
  2. What kind of performance is needed?
  3. How fast do you need a response?
  4. What’s your design budget?
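The four questions above can be captured as a rough screening check. The field names, units, and limits below are hypothetical, chosen only to show how the questions map to concrete, comparable constraints:

```python
from dataclasses import dataclass

@dataclass
class EdgeAIRequirements:
    algorithm: str         # 1. the algorithm/model you need to run
    min_tops: float        # 2. required compute performance (TOPS)
    max_latency_ms: float  # 3. worst-case acceptable response time
    budget_usd: float      # 4. per-unit design budget

@dataclass
class CandidatePlatform:
    name: str
    tops: float
    latency_ms: float
    cost_usd: float

def fits(req: EdgeAIRequirements, plat: CandidatePlatform) -> bool:
    """A platform fits if it meets the performance and latency
    requirements without exceeding the budget."""
    return (plat.tops >= req.min_tops
            and plat.latency_ms <= req.max_latency_ms
            and plat.cost_usd <= req.budget_usd)
```

In practice the answers interact (a bigger model raises both the performance requirement and the cost), so the check is a starting filter, not the whole evaluation.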

There’s certainly a great deal of trial-and-error analysis needed to find the appropriate processor combination for your AI application. Selecting the right edge computing vendor, one that can help you meet the size, weight, and power (SWaP) constraints of your edge AI application, will determine your project’s success. And if you’re reading this blog, you have come to the right place! 🙂

Here at ADLINK, we can characterize how different processor combinations behave across a number of factors, including deeply embedded AI algorithms, and guide you through the edge AI design process. Getting that combination right is crucial, both for the safety of people and for the health of revenue-generating applications in heavy machinery, automation, and manufacturing.

Listen to the full Embedded Insiders Podcast episode and let me know what you think! I also highly recommend registering for NVIDIA #GTC21; the conference is virtual and free this year, with plenty of on-demand sessions on related topics. See you there?

Author: Zane Tsai

Director of Platform Product Center, Embedded Platforms & Modules, ADLINK Technology
