Researchers create an algorithm that maximizes the inference accuracy of IoT sensors using edge computing
We are in a fascinating era where even low-resource devices, such as Internet of Things (IoT) sensors, can use deep learning algorithms to solve complex problems such as image classification or natural language processing (the branch of artificial intelligence that gives computers the ability to understand spoken and written language much as humans do).
However, running inference on IoT sensors may not meet quality of service (QoS) requirements such as inference accuracy and latency. With the exponential growth of data collected by billions of IoT devices, there is a need to shift to a distributed model in which some computation occurs at the edge of the network (edge computing), closer to where the data is generated, rather than sending it all to the cloud for processing and storage.
IMDEA Networks researchers Andrea Fresa (PhD Fellow) and Jaya Prakash Champati (Research Assistant Professor) conducted a study in which they presented the AMR² algorithm, which makes use of the underlying edge computing infrastructure (processing, analyzing, and storing data closer to where it is generated, enabling faster analysis and near-real-time feedback) to increase IoT sensor inference accuracy while observing latency constraints, and showed how the underlying scheduling problem can be solved. The paper “An Offloading Algorithm to Maximize Inference Accuracy on Edge Devices in an Edge Intelligence System” was presented this week at the MSWiM conference.
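The core trade-off the paper addresses, deciding per inference request whether to run a small on-device model or offload to a more accurate model at the edge without violating latency constraints, can be sketched in a few lines of Python. This is a simplified illustration, not the actual AMR² algorithm: the two models, their accuracy and latency figures, and the greedy deadline-based policy below are all assumptions made for the example.

```python
# Illustrative sketch of accuracy-maximizing offloading under latency
# constraints. NOT the actual AMR² algorithm: models, numbers, and the
# greedy policy are invented for this example.

LOCAL = {"accuracy": 0.70, "latency_ms": 5}   # small model on the IoT device
EDGE = {"accuracy": 0.92, "latency_ms": 40}   # larger model on the edge server

def schedule(deadlines_ms):
    """For each request, offload to the edge server if it can still meet
    the request's deadline given the edge server's current queue;
    otherwise fall back to the fast (but less accurate) local model."""
    decisions = []
    edge_busy_until = 0.0  # time at which the edge server becomes free
    for deadline in deadlines_ms:
        finish_at_edge = edge_busy_until + EDGE["latency_ms"]
        if finish_at_edge <= deadline:
            decisions.append("edge")
            edge_busy_until = finish_at_edge
        else:
            decisions.append("local")  # the local model always meets its deadline
    return decisions

decisions = schedule([50, 50, 200, 200])
avg_accuracy = sum(
    EDGE["accuracy"] if d == "edge" else LOCAL["accuracy"] for d in decisions
) / len(decisions)
```

In this toy run, the second request cannot wait for the busy edge server and runs locally, while the other three are offloaded, raising the average expected accuracy above what either model achieves alone within the deadlines.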
To understand what inference is, we must first explain that machine learning works in two main phases. The first is training, when developers feed their model a curated dataset so that it can “learn” everything it needs to know about the type of data it will analyze. The next phase is inference: the model makes predictions on real data to produce actionable results.
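The two phases can be made concrete with a toy example: a minimal nearest-neighbour classifier in pure Python. The dataset and labels are invented for illustration; real edge-ML systems train deep networks, but the train-then-infer split is the same.

```python
import math

# Phase 1 -- training: the model absorbs a curated, labeled dataset.
# (A toy 1-nearest-neighbour "model" that simply memorizes the data.)
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.5, 4.8), "dog"),
]

def infer(sample):
    """Phase 2 -- inference: predict a label for unseen data
    using what was learned during training."""
    nearest_point, label = min(
        training_data, key=lambda pair: math.dist(sample, pair[0])
    )
    return label

print(infer((1.1, 0.9)))  # a point near the "cat" cluster -> prints "cat"
```

On an IoT device, only the inference step runs in the field; training happens offline, which is why inference accuracy and latency are the QoS metrics that matter at the edge.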
In their publication, the researchers concluded that inference accuracy increased by up to 40% when comparing the AMR² algorithm with basic scheduling techniques. They also found that an efficient scheduling algorithm is essential to correctly support edge machine learning algorithms.
“The results of our study could be extremely useful for Machine Learning (ML) applications that need fast and accurate inference on end devices. For example, think of a service like Google Photos, which classifies the elements of an image. Using the AMR² algorithm, we can guarantee the execution latency, which can be very helpful for a developer, who can use it at design time to make sure that the delay is not visible to the user,” explains Andrea Fresa.
The main obstacle they faced in carrying out this study was proving the theoretical performance of the AMR² algorithm and validating it on a testbed consisting of a Raspberry Pi and a server connected via LAN. “To demonstrate the performance limits of AMR², we used fundamental ideas from linear programming and tools from operations research,” highlights Fresa.
With this work, the IMDEA Networks researchers have laid the groundwork for future studies that will make it possible to run machine learning (ML) applications at the edge of the network quickly and accurately.
Andrea Fresa et al., An Offloading Algorithm to Maximize Inference Accuracy on Edge Devices in an Edge Intelligence System, Proceedings of MSWiM (2022). dspace.networks.imdea.org/handle/20.500.12761/1613
Conference: mswimconf.com/2022/
Provided by
IMDEA Networks Institute
Citation: Researchers create an algorithm that maximizes the inference accuracy of IoT sensors using edge computing (2022, October 25) retrieved October 25, 2022 from https://techxplore.com/news/2022-10-algorithm-maximizes-iot-sensor-inference.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.