Lorenzo Rizzello is an embedded software engineer at Schindler Group, where he contributes to designing the software for the next generation of elevators. Before joining the group, he gained experience in startup environments in both Italy and the US, working for Zerynth and Skypersonic, where he explored the world of embedded systems from IoT to UAV applications. His experience developing firmware in Python for microcontrollers led to a collaboration with Jacob Beningo on Python-based environments for ML applications on resource-constrained devices.
Lorenzo holds a Master's degree in Robotics and Automation Engineering from the University of Pisa.
Object Classification Techniques using the OpenMV Cam H7 (2020)
Status: Available Now
Machine Learning for embedded systems has recently started to make sense: compared to cloud-based solutions, on-device inference reduces latency and costs and minimizes power consumption. Thanks to Google's TFLite Micro and its optimized ARM CMSIS-NN kernels, on-device inference now also extends to microcontrollers such as ARM Cortex-M processors.
In this session, we will examine machine vision examples running on the small and power-efficient OpenMV Cam H7. Attendees will learn what it takes to train models with popular desktop Machine Learning frameworks and deploy them to a microcontroller. We will take a hands-on approach, using the OpenMV camera to run inference and detect objects placed in front of it.
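At runtime, the demo's loop reduces to three steps: capture a frame, run the model on it, and map the output scores to a label. A minimal sketch of that last step in plain Python (the labels, scores, and threshold below are illustrative placeholders, not the session's actual model):

```python
# Map a model's per-class output scores to a detected object label.
# LABELS and CONFIDENCE_THRESHOLD are made-up examples, not the
# classes used in the session's demo.
LABELS = ["background", "cup", "keyboard", "phone"]
CONFIDENCE_THRESHOLD = 0.6

def classify(scores):
    """Return the best label, or None if no score is confident enough."""
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best_index] < CONFIDENCE_THRESHOLD:
        return None  # nothing recognized with enough confidence
    return LABELS[best_index]

print(classify([0.05, 0.85, 0.06, 0.04]))  # a confident "cup" detection
print(classify([0.30, 0.25, 0.25, 0.20]))  # ambiguous scores -> None
```

On the camera itself the scores would come from the on-device interpreter each frame; the thresholded argmax shown here is the same either way.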
Live Q&A - Tiny Machine Vision: behind the scenes (2020)
Status: Available Now
Live Q&A with Lorenzo Rizzello, following his talk titled 'Tiny Machine Vision: behind the scenes'
Tiny Machine Vision: behind the scenes (2020)
Status: Available Now
Tiny devices, like the ones suitable for low-power IoT applications, are now capable of extracting meaningful data from images of the surrounding environment.
Machine vision algorithms, even Deep Learning-powered ones, need only a few hundred kilobytes of ROM and RAM to run. But what optimizations make it possible to execute them on such constrained hardware? What can these devices actually do, and how does it really work?
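One key optimization behind those numbers is quantization: TFLite models for microcontrollers typically store weights and activations as 8-bit integers rather than 32-bit floats, cutting memory roughly fourfold and letting integer kernels such as CMSIS-NN's do the arithmetic. A minimal sketch of TFLite's affine int8 scheme, real = scale * (q - zero_point), with made-up scale and zero-point values:

```python
# Affine int8 quantization as used by TFLite: real = scale * (q - zero_point).
# SCALE and ZERO_POINT below are made-up example values; in a real model
# they are chosen per tensor during conversion.
SCALE = 0.05
ZERO_POINT = -10

def quantize(x):
    """Map a float to the nearest int8 code, clamped to [-128, 127]."""
    q = round(x / SCALE) + ZERO_POINT
    return max(-128, min(127, q))

def dequantize(q):
    """Recover the approximate float value from its int8 code."""
    return SCALE * (q - ZERO_POINT)

q = quantize(1.0)
print(q, dequantize(q))  # the round trip recovers ~1.0 from a single byte
```

Each value occupies one byte instead of four, and any representation error is bounded by half of SCALE, which is why carefully quantized models lose little accuracy.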
In this session, we will focus on the capabilities that are available for Cortex-M microcontrollers, starting from the user-friendly environment provided by EdgeImpulse to train and deploy Machine Learning models to the OpenMV Cam H7+.
We will guide attendees through the process with a straightforward example that illuminates the inner workings, giving them a grasp of the underlying technologies and frameworks. Attendees will walk away understanding the basic principles and able to apply them not just to Cortex-M devices but beyond.