Live Q&A - Tiny Machine Vision: behind the scenes
Presented by Lorenzo Rizzello
Live Q&A with Lorenzo Rizzello following his talk titled 'Tiny Machine Vision: behind the scenes'
Make your IoT device feel, hear and see things with TinyML
Presented by Jan Jongboom
Many IoT devices are very simple: just a radio sending raw sensor values to the cloud. But this limits the usefulness of a deployment. A sensor can report that it saw movement in front of it, but not what it saw. Or a sensor might notice that it's being moved around, but not whether it's attached to a vehicle or just being carried around. The reason is simple: to know what is happening in the real world you need lots of data, and sending all that data over your IoT network quickly drains your battery and racks up your network bill.
How can we do better? In this talk we'll look at ways to draw conclusions from raw sensor data right on the device. From signal processing to running neural networks on the edge. It's time to add some brains to your IoT deployment. In this talk you'll learn:
- What is TinyML, and how can your sensors benefit from it?
- How signal processing can help you make your TinyML deployment more predictable and better performing.
- How you can start making your devices feel, hear and see things - all running in realtime on Cortex-M-class devices.
- Hands-on demonstrations: from initial data capture from real devices, to building and verifying TinyML models, to deployment on device
Want to Reduce Power in Always-on IoT Devices? Analyze First
Presented by Tom Doyle
Hundreds of millions of portable smart speakers are listening for a wake word. Millions more acoustic event-detection devices are listening for window breaks, baby cries or dog barks. Consumers appreciate how easy it is to use their always-on listening devices – but the battery drain that results from continuously processing all sounds in their environment? Not so much.
The problem is that these battery-powered IoT devices are notoriously power-inefficient in the way they handle sound data. Relying on the age-old “digitize-first” system architecture, these devices digitize all the incoming sensor data as soon as they enter the device; the data are then processed for relevance and, in some cases, sent to the cloud for further analysis and verification. Since 80-90% of all sound data are irrelevant in most always-listening IoT devices, the digitize-first approach wastes significant battery life.
This session will show attendees how an “analyze first” edge architecture that uses analogML at the front end of an always-listening device eliminates the wasteful digitization and processing of irrelevant data, to deliver unprecedented power-saving and data efficiency in IoT devices.
Session attendees will:
- Understand that while most of today’s machine learning is implemented digitally, machine learning can also be implemented in ultra-low-power programmable analog blocks (analogML) so that feature extraction and classification can be performed on a sensor’s native analog data.
- Understand that the power problem for IoT devices is really a problem of the device treating all data as equally important and that determining which data are important earlier in the signal chain — while the data are still analog — reduces the amount of data that are processed through higher-power digital components. This approach saves up to 10x in system power in IoT devices.
- Learn how to integrate this new analogML edge architecture with sensors and MCUs from leading semiconductor suppliers into current and next-generation IoT devices.
Tiny Machine Vision: behind the scenes
Presented by Lorenzo Rizzello
Tiny devices, like the ones suitable for low-power IoT applications, are now capable of extracting meaningful data from images of the surrounding environment.
Machine vision algorithms, even Deep Learning powered ones, need only a few hundred kilobytes of ROM and RAM to run. But what optimizations are involved in executing on such constrained hardware? What is possible, and how does it really work?
In this session, we will focus on the capabilities that are available for Cortex-M microcontrollers, starting from the user-friendly environment provided by Edge Impulse to train and deploy Machine Learning models to the OpenMV Cam H7+.
We will guide attendees through a straightforward example that illuminates the inner workings, so they can get a grasp of the underlying technologies and frameworks. Attendees will walk away understanding the basic principles, able to apply them not just to Cortex-M devices but beyond.
Server and Edge AI for Tackling IIoT Data Glut
Presented by Altaf Khan
Cloud-based IIoT servers are receiving too much data, far too frequently, from an increasing number of edge devices. We present a complementary pair of AI solutions for reducing the data sent from the sensor and for efficiently processing it when it reaches the cloud server. The AI deployed at the sensor intelligently extracts insights from raw data with the help of inexpensive microcontrollers while operating on µWs of battery power. The server-side AI translates the insights received from a multitude of edge devices into decisions rapidly while employing a minimum of resources. The result is a low-latency, high-throughput cloud-based IIoT system.
Live Q&A - Want to Reduce Power in Always-on IoT Devices? Analyze First
Presented by Tom Doyle
Live Q&A with Tom Doyle following his talk titled 'Want to Reduce Power in Always-on IoT Devices? Analyze First'
Live Q&A - Make your IoT device feel, hear and see things with TinyML
Presented by Jan Jongboom
Live Q&A with Jan Jongboom following his talk titled 'Make your IoT device feel, hear and see things with TinyML'
Live Q&A - Server and Edge AI for Tackling IIoT Data Glut
Presented by Altaf Khan
Live Q&A with Altaf Khan following his talk titled 'Server and Edge AI for Tackling IIoT Data Glut'
DSP/ML computing libraries for IoT
Presented by Laurent Le Faucheur
CMSIS-NN and CMSIS-DSP provide developers with a collection of efficient neural network kernels aimed at maximizing performance and minimizing the memory footprint on Cortex-M processors for applications that require machine learning and DSP capabilities. Join Laurent Le Faucheur, Principal IoT Software Engineer at Arm as he shares the latest developments for these computing libraries and how they can be used efficiently with future processing technology, including the Arm Cortex-M55 processor.
Machine Learning with Python: Introduction to Clustering and Classification
Presented by Matous Cejnek
In this workshop, the difference between clustering and classification will be explained with illustrative examples. The examples will demonstrate utilization of common machine learning algorithms (random forests, k-means, SVM) implemented in popular Python libraries.
Notebooks:
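The workshop's core distinction can be sketched in a few lines of scikit-learn (a hypothetical illustration on synthetic data, not taken from the workshop notebooks): clustering discovers groups without labels, while classification learns from labeled examples.

```python
# Clustering (unsupervised) vs. classification (supervised) on the same data.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic 2-D data with three well-separated groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Clustering: k-means sees only X and discovers the groups on its own.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Classification: the random forest and SVM learn from labeled pairs (X, y).
rf = RandomForestClassifier(random_state=0).fit(X, y)
svm = SVC().fit(X, y)

print("k-means found", len(set(clusters)), "clusters")
print("random forest training accuracy:", rf.score(X, y))
print("SVM training accuracy:", svm.score(X, y))
```

Note that k-means never saw `y`: any agreement between its clusters and the true labels comes purely from structure in the data, which is exactly the contrast the workshop examples explore.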
Introduction to Machine Learning and Deep Learning
Presented by Peter McLaughlin
In 2016 AlphaGo, a program from Google DeepMind, beat the world champion of the board game Go, a game of enormous strategic complexity. This milestone demonstrated the possibilities of Artificial Intelligence and set the scene for new technologies which are now transforming our lives, from the way we drive to the way we buy clothes. Thanks to recent advances in graphics acceleration hardware and neural network development tools, the benefits of Artificial Intelligence are within reach for any business. This presentation introduces the underlying theory of Machine Learning and Deep Learning and explains how to practically apply it. Topics covered include the training process, model types, development tools, common pitfalls and real-life examples. Attendees will walk away with a kick start to help them apply Machine Learning and Deep Learning in their projects.
TinyML for Fun and Profit
Presented by Pete Warden
Machine learning allows small, battery-powered devices to understand speech, protect wildlife, diagnose and even treat diseases. In this talk, Pete Warden, lead of the TensorFlow Lite Micro open source project, will talk about how embedded ML opens up a new world of possibilities, and will introduce some easy ways to get started.
Training and Deploying ML models to STM32 Microcontrollers
Presented by Jacob Beningo
Machine learning (ML) has often been considered a technology that operates on high-end servers and doesn’t have a place in traditional embedded systems. That perception is quickly changing. This workshop will explore how embedded software engineers can get started with machine learning for microcontroller-based systems.
This session balances theory with practical hands-on experience using an STM32 development board.
Attendees will learn:
- How to collect and classify data
- Methods available to embedded developers to train a model
- Hands-on experience training a model
- How to convert a model to run on an STM32 MCU
- How to run an inference on a microcontroller
Additional details for development board and tools will be provided closer to the conference.
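One step in the list above, converting a model to run on an MCU, usually involves quantizing floating-point weights to 8-bit integers so they fit in flash and run on integer hardware. The sketch below shows the core affine (scale/zero-point) arithmetic behind that conversion; it is an illustration of the general technique, not the specific STM32 toolchain used in the workshop, and real tools automate all of it.

```python
# Affine (scale/zero-point) quantization of float weights to int8:
# the core arithmetic behind most float-to-microcontroller conversions.

def quantize(weights, num_bits=8):
    """Map floats onto the signed integer grid [-2^(b-1), 2^(b-1)-1]."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.4, 0.0, 0.7, 1.5]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print("int8 weights:", q)
print("max round-trip error:", max_err)
```

The round-trip error is bounded by the quantization step (`scale`), which is the accuracy/footprint trade-off an embedded developer is accepting when shrinking a model to 8 bits.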
Data-Centric AI for Signal Processing Applications
Presented by Frantz Bouchereau
Presented by MathWorks
Model-centric approaches to solve AI problems have been dominant in applications where large and high-quality datasets are available. Such approaches aim to improve model performance through the development of more complex architectures.
In signal processing applications, where data is usually scarce and noisy and where advanced models and architecture experts are hard to find, a potentially more fruitful approach is a data-centric one that focuses on improving the data to make simpler network architectures perform better. The idea is to enhance signal data by improving its labels, removing noise, and reducing variance and dimensionality. This idea can be extended to include transforming signals into domains where key features become more prominent and easier to distinguish.
In this talk I go over various examples that follow a data-centric approach to improve AI-model performance. I show how signal processing techniques like time-frequency transformations, filtering, denoising, multiresolution analysis, data synthesis, and data augmentation can be used to obtain state-of-the-art results with simpler AI architectures in the solution of signal classification, signal regression, and anomaly detection problems.
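As a toy illustration of the "improve the data" idea (an assumption-laden sketch, not an example from the talk): even a trivial filter can raise the signal-to-noise ratio of a measurement before it reaches a model, letting a simpler architecture do the rest.

```python
# Data-centric sketch: denoise a signal with a simple FIR filter
# and measure the SNR improvement relative to the clean reference.
import math
import random

random.seed(0)

# A clean 5-cycle sinusoid buried in Gaussian noise.
n = 1000
clean = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
noisy = [c + random.gauss(0, 0.5) for c in clean]

def moving_average(x, k):
    """Simple FIR low-pass filter: average over a k-sample window."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

denoised = moving_average(noisy, 15)

def snr_db(signal, estimate):
    """SNR of an estimate relative to the clean reference, in dB."""
    p_sig = sum(s * s for s in signal)
    p_err = sum((s - e) ** 2 for s, e in zip(signal, estimate))
    return 10 * math.log10(p_sig / p_err)

print("SNR before filtering: %.1f dB" % snr_db(clean, noisy))
print("SNR after filtering:  %.1f dB" % snr_db(clean, denoised))
```

The same principle scales up to the techniques named in the abstract (time-frequency transforms, multiresolution analysis, augmentation): invest effort in the data so the downstream model can stay small.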
Deep Learning Inference in Embedded Systems
Presented by Jeff Brower
The path to developing, testing, and deploying deep learning inference in embedded system products is marked by obstacles you must avoid. To do it successfully -- and avoid your investors and managers losing faith -- here are some crucial, high-level guidelines.
Advances in Machine Learning for DSP
Presented by Max Little
Machine learning has advanced to the point where it is now feasible to incorporate it directly into DSP applications. The major advantage is in the ability for DSP algorithms to "learn" to perform well. In this talk we will discuss how methods such as nonlinear and non-Gaussian inference and Bayesian nonparametrics can be exploited to develop novel DSP algorithms with a much higher level of specificity and efficiency than classical LTI methods.
Become A DSP Tuning Master and Build More Efficient Neural Networks
Presented by Alex Elium
Presented by Edge Impulse
Sensor data is typically preprocessed with DSP in TinyML applications. As engineers deploy NNs on ever smaller processors, it is becoming necessary to tune DSP algorithms in order to fit within RAM or real-time processing constraints. But not all steps in a DSP pipeline are created equal! Knowing how to find sections to slim down can mean the difference between giving up a few percent of accuracy, and ending up with a model that’s no longer usable.
This presentation will show experimentation with DSP parameter choices (number of cepstral coefficients, spectrogram frame size, etc.) for an example keyword spotting classifier, and analyze the RAM, latency, and accuracy impacts of various scenarios. Attendees will leave with ideas on where to find elusive kB of RAM and ms of latency next time they need to optimize a DSP pipeline.
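The scale of the trade-off above can be seen with back-of-the-envelope arithmetic (the numbers below are illustrative assumptions, not figures from the talk): the feature buffer a keyword-spotting classifier reads grows linearly with both the number of frames and the number of coefficients per frame.

```python
# Rough RAM estimate for a keyword-spotting feature buffer:
# (number of frames) x (coefficients per frame) x (bytes per value).

def feature_ram_bytes(window_ms, frame_ms, stride_ms,
                      num_coeffs, bytes_per_value=4):
    """Size of the feature matrix the classifier sees for one audio window."""
    num_frames = 1 + (window_ms - frame_ms) // stride_ms
    return num_frames * num_coeffs * bytes_per_value

# A 1-second window with 32 ms frames and a 16 ms stride.
for coeffs in (13, 32, 40):
    ram = feature_ram_bytes(1000, 32, 16, coeffs)
    print(f"{coeffs:2d} coefficients -> {ram / 1024:.1f} kB of feature RAM")
```

Dropping from 40 to 13 cepstral coefficients in this sketch cuts the buffer by roughly two thirds, which is exactly the kind of "elusive kB" the session is about, provided the accuracy cost is measured rather than assumed.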
Live Q&A with Laurent Le Faucheur - DSP/ML computing libraries for IoT
Presented by Laurent Le Faucheur
Live Q&A with Laurent Le Faucheur following his talk titled "DSP/ML computing libraries for IoT"
Real Life Embedded ML + The AI-Powered Nose
Presented by Jenny Plunkett
Presented by Edge Impulse
- So you want to bring the power of edge intelligence to your IoT device. How do you get started?
- Utilize data-driven engineering to collect your sensor data and upload directly into Edge Impulse
- Examples of real life applications of Edge AI for IoT microcontrollers (including an AI-powered nose)
Live Q&A with Alex Elium - Become A DSP Tuning Master and Build More Efficient Neural Networks
Presented by Alex Elium
Presented by Edge Impulse
Live Q&A with Alex Elium following his talk titled "Become A DSP Tuning Master and Build More Efficient Neural Networks"
Live Q&A with Frantz Bouchereau - Data-Centric AI for Signal Processing Applications
Presented by Frantz Bouchereau
Live Q&A with Frantz Bouchereau following his talk titled "Data-Centric AI for Signal Processing Applications"
Live Q&A with Matous Cejnek - Machine Learning with Python: Introduction to Clustering and Classification
Presented by Matous Cejnek
Live Q&A with Matous Cejnek following his talk titled "Machine Learning with Python: Introduction to Clustering and Classification"
Live Q&A with Jeff Brower - Deep Learning Inference in Embedded Systems
Presented by Jeff Brower
Live Q&A with Jeff Brower following his talk titled "Deep Learning Inference in Embedded Systems"
Live Q&A with Pete Warden - TinyML for Fun and Profit
Presented by Pete Warden
Live Q&A with Pete Warden following his talk titled "TinyML for Fun and Profit"