Deploying AI at the edge: from operation to automation

Reducing the time for decisions and reducing the expense of data movement are two of the big reasons why companies are now deploying AI technologies at the edge. Photo courtesy of KUKA.

By Keith Shaw, Association for Advancing Automation (A3)

As companies increase their use of automation technologies in factories, warehouses and other locations, they recognize the need for artificial intelligence (AI) technologies, such as machine vision that inspects for defects, to guide decisions in real time. In practice, however, many are finding that cloud-based AI responds too slowly, and that this decision-making needs to move to the edge of the network.

“We often see industrial use cases where manufacturers or OEMs tell me that they have to make that entire round trip, including the network, in a very small number of milliseconds,” says Rita Wouhaybi, Senior AI Principal Engineer at Intel. “It makes it impossible against the law of physics to actually send that request to the cloud.”
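
To put that constraint in rough numbers (illustrative figures, not from the webinar): a camera inspecting parts at 30 frames per second leaves roughly 33 ms per frame, and the network round trip to a distant cloud region can consume most or all of that before any inference runs. A minimal sketch of the budget arithmetic, with assumed latencies:

```python
# Illustrative latency-budget arithmetic; all numbers are assumptions for the sketch.
FRAME_RATE_HZ = 30                      # camera inspecting parts at 30 fps
frame_budget_ms = 1000 / FRAME_RATE_HZ  # ~33 ms available per frame

inference_ms = 15                          # assumed model inference time (same model either way)
round_trips_ms = {"cloud": 60, "edge": 2}  # assumed network round-trip times

for location, rtt_ms in round_trips_ms.items():
    total_ms = rtt_ms + inference_ms
    verdict = "fits" if total_ms <= frame_budget_ms else "misses"
    print(f"{location}: {total_ms:.0f} ms total vs {frame_budget_ms:.0f} ms budget -> {verdict}")
```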

Reducing the time for decisions and reducing the expense of data movement are two of the big reasons why companies are now deploying AI technologies at the edge. These issues were discussed in a recent A3 webinar, “Artificial Intelligence and Machine Vision: Moving from Operation to Automation,” part of the Intelligent Edge webinar series sponsored by Intel.

Wouhaybi was joined in the webinar by Michael Huether, AI Solution Architect at Intel, and Tuuli Ahava, Director, Application Program, Digital Automation at Nokia. The panelists discussed key areas and use cases where adding AI at the edge is critical and also noted scenarios where cloud computing could still be used for AI processing and data crunching.

Another scenario that favors edge AI processing is when companies have large volumes of data to digest, such as multiple camera feeds used to spot defects or to monitor worker safety around machinery that could cause injury. Moving data sets that large to the cloud quickly becomes cost prohibitive, and many companies do not have the network bandwidth to support it. A third reason for using edge AI is to keep data within a country or region to meet privacy or intellectual property requirements.
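
As a rough illustration of that bandwidth argument (a sketch with assumed names and thresholds, not code from the webinar), an edge node can run inference next to the cameras and forward only compact defect events upstream, rather than streaming raw frames to the cloud:

```python
import json
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Frame:
    camera_id: str
    timestamp: float
    pixels: bytes  # raw image data, often megabytes per frame

def process_at_edge(
    frame: Frame,
    score_frame: Callable[[Frame], float],  # local vision model, e.g. a loaded inference session
    threshold: float = 0.8,
) -> Optional[str]:
    """Run inference beside the camera and emit only small defect events.

    The raw frame stays on site; a JSON record of a few hundred bytes is
    sent upstream only when a defect is suspected.
    """
    score = score_frame(frame)
    if score < threshold:
        return None  # nothing leaves the edge node
    return json.dumps({
        "camera_id": frame.camera_id,
        "timestamp": frame.timestamp,
        "defect_score": round(score, 3),
    })

# Usage with a stand-in model that flags every frame as defective:
print(process_at_edge(Frame("cam-7", 1700000000.0, b"\x00" * 16), lambda f: 0.95))
```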

Picking the right team

The panel also dove into a question about building the right team for edge AI and who needs to be involved. On the network side, Nokia’s Tuuli Ahava suggested that companies begin by having the IT department team up with the operational technology (OT) group. “Some years back, we were discussing the convergence of IT and OT, but now it’s really happening,” says Ahava. “Naturally we also need people to develop the algorithms and some data scientists, but I would start with the simple answer, ‘Hey, let’s put IT and OT around the table’ to get them together.”

Other stakeholders who should be part of edge AI discussions include subject matter experts for the process being automated (including the factory floor workers who would monitor it), systems integrators and application developers, as well as any data science teams. Everyone on the panel agreed that data science needs to be brought to the level of every employee, rather than relying on a few data scientists to explain or operate everything.

Using microservices to break down complexity

The panel also spoke about the use of modular building blocks, through microservices, for edge-based applications. Companies that have used microservices in the cloud, changing small parts of an application instead of completely rewriting it, can apply that same experience in edge environments.

“Monolithic applications are a bit like dinosaurs,” says Intel’s Michael Huether. “They work for a while, but then at some point you are at the end, and you’d like to avoid this because your manufacturing line should never stop. The journey is going on, and you want to do a little modification without rebuilding everything.”

Huether added that microservices also give companies flexibility in where they place applications, whether directly on an industrial PC beside the data-generating equipment or over a low-latency, high-bandwidth network such as 5G connecting to microservices in the cloud. “If you can consolidate it and reuse it all together on one machine that’s an advantage, but you need to have the flexibility,” Huether says.
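
As a rough illustration of that modularity (a minimal sketch with assumed names, not an implementation discussed in the webinar), an individual inspection step can be wrapped in its own small service behind a narrow HTTP contract, so it can run on an industrial PC beside the line or be redeployed elsewhere without rebuilding the rest of the application:

```python
# Minimal edge microservice sketch; FastAPI is used here only for illustration.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inspection-service")

class InspectionRequest(BaseModel):
    camera_id: str
    image_b64: str  # frame handed over by an upstream capture service (assumed)

class InspectionResult(BaseModel):
    camera_id: str
    defect_score: float
    passed: bool

@app.post("/inspect", response_model=InspectionResult)
def inspect(req: InspectionRequest) -> InspectionResult:
    # Placeholder score so the sketch runs without a real model; in practice
    # this would call a locally loaded vision model.
    score = 0.1
    return InspectionResult(camera_id=req.camera_id, defect_score=score, passed=score < 0.8)

# Run with, for example: uvicorn inspection_service:app --host 0.0.0.0 --port 8000
```

Because the service exposes only that narrow contract, it can be swapped or redeployed (on the line, on an on-premises server, or in the cloud) without rebuilding the other pieces.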

The panel also covered several additional topics in the webinar, including:

  • The importance of choosing the right data for AI processing, and how it needs to be cleaned before it gets processed;
  • Always being aware of security issues (including privacy regulations) that should surround all edge AI projects;
  • Examples and use cases where the edge might not be needed, such as business intelligence and other non-time-critical scenarios, or cases where a hybrid of edge and cloud can be used;
  • Key advice on working with partners and suppliers, to make sure they are up to speed on edge technologies.

The full webinar is available to view on demand.

About A3: For nearly five decades, the Robotic Industries Association (RIA), AIA-Advancing Vision + Imaging (AIA), and the Motion Control and Motors Association (MCMA), along with A3 Mexico, have played a key role in helping automation technologies become among the most critical tools of the 21st Century. As these technologies have converged, our association had a convergence of its own. It is now the Association for Advancing Automation (A3), one trade group for the entire automation ecosystem.
