Artificial intelligence (AI) applied to IoT data at the edge is known by the acronym AIoT. Autonomous mobile robots (AMRs) are a prime example of leveraging AIoT for the heavy data processing their operation requires. AMRs can operate all day, all night, and all year, stopping only for recharging and maintenance, while also potentially working alongside humans. These capabilities offer a wide range of advantages in industrial, automation, and manufacturing settings, and they require AI to ensure safety and security for both humans and machines.
Packed with benefits, AMRs are nonetheless complex to design and build; from the vision system (which alone requires massive amounts of data processing) to navigation, motor control, wireless communications, and real-time human-machine interaction, there is a lot to consider.

Providing Power to your AMR
While the complexities of building AMR systems can present many obstacles, you have to begin with one basic element: how to power the AMR. As previously mentioned, these robots need to operate for long stretches before a recharge.
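As a back-of-the-envelope illustration of that runtime requirement, the sketch below estimates hours between charges from battery capacity and average power draw. Every figure in it is a hypothetical placeholder, not a spec from any particular AMR.

```python
# Rough battery-runtime estimate for an AMR (all figures are hypothetical examples).
battery_capacity_wh = 960.0      # e.g., a 24 V, 40 Ah pack: 24 * 40 = 960 Wh
drive_power_w = 150.0            # average motor power while moving
compute_power_w = 30.0           # edge AI module (e.g., a Jetson-class SOM)
sensor_power_w = 20.0            # cameras, LIDAR, time-of-flight sensors

average_draw_w = drive_power_w + compute_power_w + sensor_power_w
runtime_hours = battery_capacity_wh / average_draw_w

print(f"Estimated runtime: {runtime_hours:.1f} hours between charges")
# With these example numbers: 960 / 200 = 4.8 hours
```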
When considering how to power the AMR, we also need to choose a platform that can handle myriad tasks, such as NVIDIA’s Jetson, a system-on-module (SOM) with CPU, GPU, PMIC, DRAM, and flash storage. The platform can be coupled with the company’s Isaac robotics platform, which includes Isaac Sim, a tool for simulating and training robots in virtual environments before AMRs are deployed. Alongside it is the Isaac SDK, an open software framework for processing on the Jetson that enables custom behaviors and capabilities to accelerate robotics development.
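As a quick bring-up check on a Jetson-class module, the sketch below verifies that the onboard GPU is visible to the application and runs a small tensor operation on it. It assumes PyTorch is installed on the module and is purely illustrative; it is not part of NVIDIA’s Isaac tooling.

```python
# Minimal check that the GPU on a Jetson-class module is available to the application.
# Assumes PyTorch is installed; this is an illustrative sketch, not Isaac SDK code.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("No GPU found, falling back to CPU")

# Run a small matrix multiply on the selected device as a smoke test.
a = torch.rand((256, 256), device=device)
b = torch.rand((256, 256), device=device)
c = a @ b
print("Result tensor shape:", tuple(c.shape))
```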
NVIDIA’s platform serves as a great starting point for AMR development: it delivers the control and computing performance needed for every aspect of the robot with excellent power efficiency, which makes it stand out.
Sensors, Sensors, and More Sensors
Sensors are essential to an AMR. They can include 2D/3D cameras, time-of-flight sensors, and LIDAR devices, among others, and they must connect to high-speed I/Os. Of course, with an abundance of sensors comes an abundance of data. That data must be processed in real time, on the robot, so it can navigate autonomously within hectic environments. Integrating AI functionality also helps minimize field upgrades, since new capabilities can often be rolled out as software updates.
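To give a flavor of the on-robot, real-time processing this sensor data demands, here is a simplified obstacle check over a single LIDAR scan. The scan format, the thresholds, and the nearest_obstacle helper are hypothetical; a production AMR would fuse multiple sensors and run far richer perception.

```python
# Simplified on-robot check of one LIDAR scan for nearby obstacles.
# The scan format and thresholds are hypothetical, for illustration only.
import math
import numpy as np

def nearest_obstacle(ranges_m: np.ndarray, angles_rad: np.ndarray,
                     stop_distance_m: float = 0.5):
    """Return (distance, angle) of the closest valid return inside the stop zone, or None."""
    valid = np.isfinite(ranges_m) & (ranges_m > 0.05)   # drop dropouts and self-hits
    if not valid.any():
        return None
    idx = np.argmin(np.where(valid, ranges_m, np.inf))
    distance = float(ranges_m[idx])
    if distance > stop_distance_m:
        return None
    return distance, float(angles_rad[idx])

# Example: a fake 360-point scan with one close return roughly straight ahead.
angles = np.linspace(-math.pi, math.pi, 360)
ranges = np.full(360, 5.0)
ranges[180] = 0.3   # obstacle 0.3 m away near angle 0
hit = nearest_obstacle(ranges, angles)
if hit:
    print(f"Stop: obstacle {hit[0]:.2f} m away at {math.degrees(hit[1]):.0f} degrees")
```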
As Amit Goel, NVIDIA’s director of product management for autonomous machines, said, “The simulation environment is a key component in the development and deployment of AMRs. Developing the actual hardware adds some constraints and affects how quickly AMRs can be designed, and frankly, how many people can be working on the AMR at the same time. Once you move the design into simulation, your development team can be anywhere.”
Now designers are presented with a new question: What type of processor should be deployed? There’s no set-in-stone answer, but the AMR must ingest, interpret, and compute information about its environment and then control the robot’s behavior. A GPU is ideal for this workload. We also need a processor that offers enough flexibility to scale up, and even down, as requirements change (see the sketch below). To hear more about designing AMRs using ADLINK Technology platforms and NVIDIA GPUs, check out this podcast featuring yours truly and Amit!
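As a small illustration of that scaling flexibility, the sketch below picks a device and precision based on the hardware it finds, running a stand-in perception model in FP16 on a GPU when one is available and falling back to FP32 on the CPU otherwise. It again assumes PyTorch, and the model is a hypothetical placeholder rather than a real AMR perception network.

```python
# Illustrative sketch: scale a perception workload up or down based on available hardware.
# The model here is a stand-in; a real AMR would load its trained perception network.
import torch
import torch.nn as nn

def build_pipeline():
    model = nn.Sequential(               # hypothetical stand-in for a perception model
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 4),                # e.g., four coarse navigation classes
    )
    if torch.cuda.is_available():
        # Scale up: run on the GPU in half precision for higher throughput.
        return model.half().to("cuda"), torch.float16, "cuda"
    # Scale down: fall back to the CPU in full precision.
    return model.to("cpu"), torch.float32, "cpu"

model, dtype, device = build_pipeline()
frame = torch.rand((1, 3, 224, 224), dtype=dtype, device=device)  # fake camera frame
with torch.no_grad():
    scores = model(frame)
print(f"Ran inference on {device} in {dtype}, output shape {tuple(scores.shape)}")
```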