One advantage that the IoT brought to design was the ability for a small local device to access the network's virtually unlimited computing power. The Amazon Echo is a classic example: a low-cost local device that provides powerful speech-recognition AI and an immense application library by way of its Internet connection. Now, some of that AI is moving into the local device to help minimize bandwidth and latency concerns by employing an efficient form of machine learning (ML) for smaller devices.
An example of what can be accomplished by placing AI in edge devices can be found in the article "AI helps turn gas sensor into electronic nose." In this instance, the ML that generates the sensor's algorithms takes place during the design cycle, and the local device simply runs the resulting algorithm. This is a first step in bringing AI to the edge, but there are more to come.
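The split described here — training off-device during design, with the deployed device running only fixed inference — can be sketched in a few lines. Everything below is illustrative: the weights, the scale factor, and the classifier itself are hypothetical stand-ins for a model produced offline, using the int8 quantization commonly applied to shrink tinyML models.

```python
# Sketch of the train-offline / infer-on-device split described above.
# The quantized weights stand in for parameters learned during the design
# cycle; the device only executes this fixed inference routine.

WEIGHTS_Q = [[12, -34, 56], [-7, 21, -3]]   # 2 classes x 3 sensor inputs (int8)
SCALE = 0.01                                 # shared dequantization scale

def classify(sensor_reading):
    """Run the fixed, pre-trained model on one reading (inference only)."""
    scores = []
    for row in WEIGHTS_Q:
        acc = sum(w * x for w, x in zip(row, sensor_reading))
        scores.append(acc * SCALE)           # dequantize the accumulator
    return scores.index(max(scores))         # index of the winning class

print(classify([1.0, 0.5, 0.2]))             # prints 0
```

Because no training code ships on the device, the runtime footprint is just the weight table and a dot product — which is why this pattern fits today's microcontroller-class hardware.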
To reach its full potential, AI at the edge will need to be self-adaptive. This means that the edge device will have to implement ML locally. How, exactly, this is to be done with the limited compute power edge devices typically have available is currently the subject of considerable research and development. Providing a forum for the exchange of information and ideas on local machine learning is the goal of the tinyML Foundation.
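Self-adaptive behavior implies some form of on-device training, not just on-device inference. As a minimal sketch (not any specific tinyML framework's method), a perceptron-style online update shows how a model can adapt in place with the constant memory and O(n) per-sample compute a microcontroller can afford; all names and values here are hypothetical.

```python
# Minimal sketch of on-device learning: a perceptron-style online update.
# Each new labeled sample nudges the weights in place, so the model adapts
# locally without off-device retraining.

weights = [0.0, 0.0, 0.0]   # tiny model state held in device RAM
LEARNING_RATE = 0.1

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

def learn(x, label):
    """One online update step: constant memory, O(n) time per sample."""
    error = label - predict(x)
    for i, xi in enumerate(x):
        weights[i] += LEARNING_RATE * error * xi

# Adapt to a simple pattern from locally observed samples.
for _ in range(20):
    learn([1.0, 0.0, 1.0], 1)
    learn([0.0, 1.0, 0.0], 0)

print(predict([1.0, 0.0, 1.0]), predict([0.0, 1.0, 0.0]))  # prints: 1 0
```

Real on-device learning schemes are considerably more sophisticated, but the resource profile — update the model incrementally, never store a training set — is the constraint that shapes the research the article mentions.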
The foundation held its first industry event – the tinyML Summit – in 2019, generating considerable interest and participation from more than 90 companies. That event revealed three essential trends:
TinyML-capable hardware is becoming “good enough” for many commercial applications, with new and even better architectures on the horizon.
Algorithms, networks, and models have seen significant size reduction, with many sized down to 100 kBytes and below.
There is growing momentum demonstrated by technical progress and ecosystem development.
These results demonstrated that ML is not only coming to the edge; in some cases it is already there.
COVID-19 prevented a 2020 event, but for 2021 the tinyML Foundation created a free online event that recently concluded but should be available as an archive for registered attendees. In addition, the organization has developed a series of lectures called the tinyML Talks that are available on YouTube and other platforms.
The trend is clearly gaining traction. The organization’s sponsors now span the range from major hardware players such as Arm, Cypress Semiconductor, and Samsung to software start-ups focusing on low-power AI applications. Most are focused on either vision or audio (voice recognition) systems for now, but smart sensors are gaining ground as a viable application as well.
This trend bodes well for IoT developers. Creating compact, low-power devices with reasonable cost that perform complex tasks can be a developer’s nightmare using conventional programming techniques. Yet depending on connectivity to network-based AI processing for the device’s performance has its own drawbacks. Home networks are already becoming clogged with demands from streaming media and communications; adding a host of network-hogging smart devices can overload the typical home connection. The latency of network communications can also be an issue, as can total failure of device operation when the network is down.
Moving the AI to the edge – at least for basic functionality – solves most of these concerns. With ML in the edge device, developers can craft their systems to learn how to meet customer demands without the developers needing to exhaustively analyze use cases in advance. Having AI in the edge device reduces the need for network bandwidth, eliminates network latency issues, and ensures operation in the network’s absence. The efforts to expand tinyML technology will help speed the movement of AI into IoT devices.