TensorFlow experts are building amazing models. TensorFlow, the open source machine learning platform developed by the Google Brain team, is making text, voice search, and facial recognition solutions possible for enterprises. It can also be the basis for real-time object detection and other machine vision use cases in manufacturing. The TensorFlow models you create can, for example, power robotic guidance systems, facilitate efficient inspection and assembly verification solutions, enable smart inventory and asset control systems, and optimize machine performance. Moreover, they have the potential to help manufacturers improve product quality, automate repetitive tasks to operate more efficiently, and avoid downtime.
However, there’s a catch to delivering maximum value from the elegant TensorFlow models you create. You need to find a way to connect your TensorFlow model to the manufacturer’s machines so they actually do something.
Machine Vision with Automation Is Complicated
Deploying TensorFlow models in real-life, industrial settings and getting them to trigger appropriate actions is a complex, multistep process. After you develop your model, you need to:
- Train it
- Deploy it
- Enable video streaming
- Develop an application that uses inference data to produce a desired output
- Create an appropriate user interface
- Enable PLC connectivity
Some of the most common hurdles in the process include:
1. Access to training data
A manufacturing operation can use information technology (IT) and operational technology (OT) solutions from different vendors, which produce multiple data streams, often with data formatted in different ways. To train and test a TensorFlow model, however, you need access to all types of data your system uses so that your model works effectively in real-world operations. Further amplifying this challenge, an operation may use legacy equipment that produces vital data but doesn’t provide an easy way to connect to the network. You need a way to tie in all data streams and allow data to flow freely during model training and testing – as well as during ongoing operations.
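To make the challenge concrete, the sketch below shows one way to tie two differently formatted vendor streams into a single training-ready record format. The vendor formats, field names, and helper functions here are hypothetical – a minimal illustration of the idea, not a production ingestion pipeline:

```python
import csv
import io
import json

def normalize_json_record(raw: str) -> dict:
    """Parse a JSON-formatted reading (hypothetical vendor A format)."""
    rec = json.loads(raw)
    return {"sensor": rec["id"], "value": float(rec["reading"]), "unit": rec.get("unit", "")}

def normalize_csv_record(raw: str) -> dict:
    """Parse a CSV-formatted reading (hypothetical vendor B format: id,value,unit)."""
    row = next(csv.reader(io.StringIO(raw)))
    return {"sensor": row[0], "value": float(row[1]), "unit": row[2]}

# Tie both streams into one common format the model pipeline can consume.
streams = [
    ('{"id": "cam-01", "reading": "0.92", "unit": "score"}', normalize_json_record),
    ("plc-07,41.5,degC", normalize_csv_record),
]
records = [normalize(raw) for raw, normalize in streams]
```

Once every stream is reduced to the same record shape, the same preprocessing and labeling code can feed model training, testing, and live operation alike.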
2. Deploying TensorFlow models at the edge
In a real-life application, the best strategy may be to deploy your TensorFlow model via TensorFlow Lite on embedded, Internet of Things (IoT), or other edge devices. Machine learning capabilities at the edge can reduce latency and increase reliability, but you need to choose components for your system capable of withstanding harsh industrial environments while meeting all necessary processing and power requirements.
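As a rough sketch of what an edge inference loop looks like, the toy code below processes frames locally and measures per-frame latency. The `edge_infer` stub stands in for a real on-device model (for example, a `tf.lite.Interpreter` loaded with your converted model); the frames and scoring rule are invented for illustration:

```python
import time
from collections import deque

def edge_infer(frame):
    """Stand-in for on-device inference; in a real deployment this would
    invoke a TensorFlow Lite interpreter loaded with the converted model."""
    # Hypothetical defect score: fraction of "hot" pixels in the frame.
    return sum(1 for p in frame if p > 200) / len(frame)

frames = deque([[10, 250, 30, 255], [5, 5, 5, 5]])  # toy 4-pixel frames
results = []
while frames:
    start = time.perf_counter()
    score = edge_infer(frames.popleft())
    latency_ms = (time.perf_counter() - start) * 1000  # no network round trip
    results.append((score, latency_ms))
```

Because inference runs next to the camera, the measured latency excludes any network round trip – which is the core reliability and responsiveness argument for edge deployment.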
3. Converting inference data to automate and control equipment
Once your TensorFlow model is successfully trained, you also need to engineer an efficient way to automate equipment functions based on inference data, such as identifying a defect and sending a message to a conveyor’s controller to slow down or stop. Your model has to “talk” to industrial equipment.
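A minimal sketch of that last step, assuming a hypothetical defect score in [0, 1] and an invented command format – in practice the message would go out over the PLC’s actual protocol (such as OPC UA or Modbus):

```python
def conveyor_command(defect_score: float) -> dict:
    """Map a model's defect score to a command for a conveyor controller.
    Thresholds and the command vocabulary here are hypothetical."""
    if defect_score >= 0.9:
        action = "stop"   # confident defect: halt the line for inspection
    elif defect_score >= 0.5:
        action = "slow"   # possible defect: slow down for a closer look
    else:
        action = "run"    # no defect detected: keep running
    return {"target": "conveyor-01", "action": action, "score": defect_score}

cmd = conveyor_command(0.93)
```

The translation layer is deliberately thin: the model stays focused on producing a score, and a small, auditable rule set decides what the equipment does with it.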
The Best Way to Run TensorFlow on the Factory Floor
If your area of expertise is TensorFlow, it likely isn’t computer numerical control (CNC) machines. Any time a developer needs to acquire expertise to make a system work, there are two choices: find new resources or partner with experts. When you are building a system to leverage TensorFlow for industrial use cases, partnering is probably the more expedient route, especially when you have the option of partnering with enterprises that live and breathe machinery and embedded AI. With ADLINK, it’s even in our name – Autonomous Devices LINKed. We’re ready to provide you with the tools you need to build, test, and deploy TensorFlow solutions faster.
We have edge solutions addressing common pain points developers encounter when making a TensorFlow model work for manufacturers. Our solutions enable you to:
- Record, capture, and stream images from industrial cameras to train and test your model.
- Connect any camera from any vendor to use image data from multiple sources.
- Deploy your models at the edge, close to data sources, with one-touch deployment.
- Connect, out of the box, to any of 150+ OT control systems.
You also have the advantage of working with a partner familiar with implementing TensorFlow models for manufacturing. For example, we’ve seen partners have success accelerating systems by building a TensorFlow model but running it with Intel’s OpenVINO toolkit.
The ADLINK Data River™ connects and integrates machine learning platforms, neural networks, machine vision systems, the cloud, industrial cameras, and manufacturing equipment – streaming data anywhere it’s needed in real time.
Our turnkey ADLINK Edge™ machine vision AI solution includes all of the hardware and software necessary to connect cameras, deploy TensorFlow or other ML models, and stream inference data in real time. This edge solution enables faster processing, lower latency, and control at the edge while lowering energy requirements – and preconfigured solutions mean no coding is necessary.
Hardware You Need to Run Your TensorFlow Model
Industrial edge hardware is a key component of running TensorFlow on the factory floor. Of course, we have you covered with ready-to-deploy vision systems, frame grabbers, and smart cameras for edge computing in rugged conditions. You’ll want to make sure you have the flexibility of Intel or NVIDIA platforms, with support for Camera Link, GigE, or analog camera standards.
The machine vision model you build in TensorFlow has the potential to provide manufacturers with benefits such as automation, efficiency, visibility – and greater competitiveness. To deliver maximum value, however, the model must be a part of a total system designed for the operation and its environment and capable of working with manufacturing equipment, including legacy machines.