
Object Classification Techniques using the OpenMV Cam H7

Presented by Lorenzo Rizzello

Machine learning for embedded systems has recently started to make sense: compared to cloud-based solutions, on-device inference reduces latency, cost, and power consumption. Thanks to Google's TensorFlow Lite Micro and its optimized Arm CMSIS-NN kernels, on-device inference now extends to microcontrollers such as Arm Cortex-M processors.

In this session, we will examine machine vision examples running on the small and power-efficient OpenMV Cam H7. Attendees will learn what it takes to train models with popular desktop machine learning frameworks and deploy them to a microcontroller. We will take a hands-on approach, using the OpenMV camera to run inference and detect objects placed in front of it.
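Deploying a desktop-trained model to a Cortex-M-class device typically involves quantizing its float weights to 8-bit integers. As a flavor of what that step does, here is a minimal pure-Python sketch of the affine int8 mapping TensorFlow Lite uses (real value ≈ scale × (q − zero_point)); the helper names are mine, and the actual converter does far more:

```python
# Sketch of the affine int8 quantization scheme TensorFlow Lite uses when
# shrinking float models for microcontrollers. Helper names are illustrative;
# the real converter also handles per-channel scales, calibration, etc.

def choose_qparams(values, qmin=-128, qmax=127):
    """Pick a scale and zero point covering the observed float range."""
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(v / scale + zero_point))) for v in values]

def dequantize(q, scale, zero_point):
    return [scale * (v - zero_point) for v in q]

weights = [-0.9, -0.1, 0.0, 0.4, 1.2]
s, zp = choose_qparams(weights)
recovered = dequantize(quantize(weights, s, zp), s, zp)
# recovered values agree with the originals to within one quantization step
```

Storing weights as int8 quarters the model's flash footprint and lets the CMSIS-NN kernels use fast integer arithmetic instead of floating point.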

Go to Session

Low-Power Algorithmic Approaches in DSP Implementations

Presented by Bryant Sorensen

Hearing aid signal processing is a challenging task because of the extremely low power consumption and tightly constrained cycle budgets required. The audio signal processing is always on and requires complex algorithms and computations. A typical hearing aid performs multi-band analysis and synthesis, automatic feedback cancellation, environment detection and action, automatic gain control, and user-interface handling - and AI is arriving as well. To reconcile these two disparate requirements (complexity vs. low power and reduced cycles), various approaches are needed to achieve low power while still providing sophisticated calculations. In this talk, I will discuss a sampling of numerical methods, shortcuts, refactorings, and approximations which significantly lower power in DSP algorithms. This will be an overview which I hope sparks thinking that extends the presented concepts to other low-power algorithmic tasks. While the focus is on algorithms and computations, some of these topics will also touch on implications for HW design, HW vs. FW tradeoffs, and ASIP / programmable DSP core design.
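As a flavor of the kind of shortcut such talks survey (this particular example is mine, not taken from the presentation), here is a cheap log2 approximation that replaces an expensive library call with an exponent extraction plus a short quadratic on the mantissa - a handful of multiply-adds instead of a full transcendental evaluation:

```python
import math

# Illustrative low-power shortcut (not from the talk): approximate log2(x)
# by splitting x into mantissa and exponent, then evaluating a quadratic
# that interpolates log2(m) at m = 1, 1.5, 2. Worst-case error is under 1%.

def fast_log2(x):
    """Approximate log2(x) for x > 0 using frexp plus a quadratic fit."""
    m, e = math.frexp(x)           # x = m * 2**e with m in [0.5, 1.0)
    m *= 2.0                       # normalize mantissa to [1.0, 2.0)
    e -= 1
    # quadratic interpolating log2(m) at the endpoints and midpoint of [1, 2]
    return e + (-0.339852 * m + 2.019556) * m - 1.679704
```

On a fixed-point DSP the same idea becomes a leading-zero count plus a small polynomial or lookup table, which is where the cycle savings come from.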

Go to Session

Causal Bootstrapping

Presented by Max Little

To draw scientifically meaningful conclusions and make reliable statistical signal processing inferences about quantitative phenomena, signal processing must take cause and effect into consideration (either implicitly or explicitly). This is particularly challenging when the relevant measurements are not obtained from controlled experimental (interventional) settings, so that cause and effect can be obscured by spurious, indirect influences. Modern predictive techniques from machine learning are capable of capturing high-dimensional, complex, nonlinear relationships between variables while relying on few parametric or probabilistic modelling assumptions. However, because these techniques are associational, when applied to non-experimental (observational) data they are prone to picking up spurious influences, making their predictions unreliable. Techniques from causal inference, such as probabilistic causal diagrams and do-calculus, provide powerful (nonparametric) tools for drawing causal inferences from such observational data. However, these techniques are often incompatible with modern, nonparametric machine learning algorithms, since they typically require explicit probabilistic models. In this talk I'll describe causal bootstrapping, a new set of techniques we have developed for augmenting classical nonparametric bootstrap resampling with information about the causal relationships between variables. This makes it possible to resample observational data such that, if an interventional relationship is identifiable from that data, new data representing that relationship can be simulated from the original observational data. In this way, we can use modern statistical machine learning and signal processing algorithms unaltered to make statistically powerful, yet causally robust, inferences.
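To make the resampling idea concrete, here is a minimal sketch for the simplest back-door setting: a single observed discrete confounder z influencing both cause x and effect y. Rows are resampled with weights proportional to p(x)/p(x|z), which is standard inverse-probability weighting; the function names are mine and this is only the flavor of the approach, not the paper's exact formulation:

```python
import random
from collections import Counter

# Toy sketch of causally-weighted bootstrap resampling (assumptions: one
# discrete confounder z, discrete cause x, back-door adjustment applies).
# Resampling observational rows (x, y, z) with weights p(x) / p(x | z)
# yields a dataset that mimics draws from the interventional p(y | do(x)).

def causal_bootstrap(rows, n_resamples, seed=0):
    """rows: list of (x, y, z) tuples from observational data."""
    n = len(rows)
    px = Counter(x for x, _, _ in rows)          # empirical p(x)
    pz = Counter(z for _, _, z in rows)          # empirical p(z)
    pxz = Counter((x, z) for x, _, z in rows)    # joint counts for p(x | z)
    weights = []
    for x, _, z in rows:
        p_x = px[x] / n
        p_x_given_z = pxz[(x, z)] / pz[z]
        weights.append(p_x / p_x_given_z)
    rng = random.Random(seed)
    return rng.choices(rows, weights=weights, k=n_resamples)
```

The resampled rows can then be fed to any off-the-shelf classifier or regressor unchanged, which is exactly the appeal the abstract describes.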

Go to Session

Timing Synchronization in Software Defined Radios (SDR)

Presented by Qasim Chaudhari

A Software Defined Radio (SDR) merges the two fields of digital communication and digital signal processing into an efficient implementation of transmitters and receivers. One outcome of this combination is an interesting perspective on how timing synchronization is performed in digital communication receivers. This session will explain the timing synchronization problem in both time and frequency domains, and then discuss in detail a timing locked loop consisting of a timing error detector, loop filter, interpolator, and interpolation control. Insights into the relation of timing synchronization to overall receiver design will also be presented.
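As a taste of two of those loop components, here is a toy sketch of a Gardner timing error detector (which works at two samples per symbol) feeding a proportional-integral loop filter. The detector choice and the gains are my illustrative assumptions, not necessarily what the session uses:

```python
# Toy sketch of two pieces of a timing locked loop: a Gardner timing error
# detector and a proportional-integral loop filter whose output would drive
# the interpolation control. Gains are illustrative, not tuned values.

def gardner_ted(on_time_prev, mid, on_time):
    """Timing error from two on-time samples and the midpoint between them.

    When sampling is correct, the mid sample sits on the zero crossing of a
    symbol transition, so the error is zero.
    """
    return (on_time - on_time_prev) * mid

class LoopFilter:
    """PI filter: smooths the noisy error into a steady timing correction."""
    def __init__(self, kp=0.05, ki=0.005):
        self.kp, self.ki, self.integ = kp, ki, 0.0

    def step(self, error):
        self.integ += self.ki * error
        return self.kp * error + self.integ   # drives the interpolator phase
```

For a correctly timed +1 to -1 transition the midpoint sample is zero and the detector output vanishes; a timing offset pushes the midpoint off the zero crossing and produces an error whose sign tells the interpolator which way to move.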

Go to Session

Get Started with TinyML

Presented by Jan Jongboom

TinyML is opening up incredible new applications for sensors on embedded devices, from predictive maintenance to health applications using vibration, audio, biosignals and much more! 99% of sensor data is discarded today due to power, cost or bandwidth constraints. 

This webinar explains why ML is useful for extracting meaningful information from that data and how this works in practice, from signal processing to neural networks, then walks the audience through hands-on examples of gesture and audio recognition using Edge Impulse.

What you will learn:

  • What TinyML is and why it matters for real-time sensors on the edge
  • Which applications and types of sensors benefit from ML
  • What kinds of problems ML can solve, and the role of signal processing
  • A hands-on demonstration of the entire process: sensor data capture, feature extraction, model training, testing, and deployment to any device
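As a pocket-sized illustration of the feature-extraction step in that pipeline - turning a raw sensor window into a few spectral features a small neural network can digest - here is a pure-Python sketch. Edge Impulse's own processing blocks are more elaborate; the function name and band layout are mine:

```python
import cmath, math

# Illustrative feature extraction for a TinyML pipeline: remove the mean
# from a window of sensor samples, take magnitude-spectrum bins via a naive
# DFT, and fold them into a handful of band energies to use as features.

def spectral_features(window, n_features=4):
    """Mean-removed magnitude spectrum, folded into n_features band energies."""
    n = len(window)
    mean = sum(window) / n
    x = [v - mean for v in window]
    # naive DFT magnitudes for the positive-frequency bins
    mags = []
    for k in range(1, n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(s) / n)
    # fold bins into equal-width band energies
    band = max(1, len(mags) // n_features)
    return [sum(mags[i:i + band]) for i in range(0, band * n_features, band)]

# a low-frequency vibration (3 cycles per 64-sample window) puts nearly all
# of its energy into the first band
window = [math.sin(2 * math.pi * 3 * t / 64) for t in range(64)]
features = spectral_features(window)
```

On a real device the naive DFT would be replaced by an FFT (or a DSP library routine), but the shape of the output - a short, fixed-length feature vector per window - is what the model-training step consumes.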

Go to Session

The Past, Present, and Future of Embedded Machine Learning

Presented by Pete Warden

Pete Warden, from Google's TensorFlow Lite Micro project, will be talking about how machine learning on embedded devices began and where it's heading. ML has been deployed to microcontrollers and DSPs for many years, but until recently it was a niche solution for very particular problems. As deep learning has revolutionized the analysis of messy sensor data from cameras, microphones, and accelerometers, it has begun to spread across many more applications. He will discuss how voice interfaces are leading the charge for ML on low-power, cheap devices, and what other uses are coming. He'll also look into the future of embedded machine learning to try to predict how hardware, software, and applications will evolve over the next few years.

Go to Session

Register and gain access to 35+ hours of amazing content from some of the top luminaries in the Embedded Systems industry.
