Hey, everyone. Welcome to this episode of Connect. Thank you for joining. We have a really interesting topic today with a great guest. I'm joined by David Schwartz, a solutions engineer from Edge Impulse. Thanks for joining us, David. Great to be here. All right. So David, can you tell us a little more about what Edge Impulse is and what you do there?

Yeah, definitely. So Edge Impulse is the leading software platform for embedded machine learning. What we're really all about is putting those ML techniques and algorithms into the hands of any developer, regardless of their data science or machine learning experience. At Edge Impulse, I'm a solutions engineer, which means I work with our customers and users to help them go from data collection through model training and validation to deployment in production environments.

OK, great. So I know you leverage the TI CC1352P platform, so two questions for you. I've heard a lot about machine learning and embedded processing. Can you explain what embedded machine learning is, and then how it works on the CC1352P platform?

Yeah, definitely. So embedded machine learning is really all about sensor data and making use of it. We call it data-driven engineering here at Edge Impulse. What that means is that when you're looking to solve a problem, develop a new sensor algorithm, or design a new use case or interface with a device, you can use embedded ML to take input data, train an algorithm, and compare and evaluate its performance. Then, as you encounter places where that algorithm has difficulty or doesn't perform well, you go back and collect more data to cover that use case. Through that iterative process, you're able to build up a learning algorithm that performs very well in a variety of contexts. And this is very powerful. It allows you to rapidly prototype and develop new techniques and new algorithms faster than you could if you were manually tuning or manually designing a filter. The embedded part is that at the end, once you have that algorithm, we can deploy it in a highly optimized, memory-constrained, compute-constrained package that will run on an embedded microcontroller.

Makes sense. It's very exciting because you're taking some of these complicated, sophisticated techniques, like you said, and allowing for easier, much more accessible rapid prototyping and development. And what's exciting to me is that's going to unlock a whole new set of applications for embedded machine learning. Can you give us an example of a real-world application for embedded machine learning?

Yeah. A great one that many people are already familiar with is smart home applications, where you can interact with the user via voice or gestures in ways that weren't possible before these embedded ML techniques became common. In addition to that, we're seeing a lot of really interesting work around predictive maintenance, where you use all of the sensor data and telemetry you can get off of a device, maybe an industrial motor or a production line, and you can use that data to predict that something is going wrong before it fails. So there are a lot of different applications across a lot of different fields, but those are just a few interesting ones.

Awesome. Makes sense. And I know you have a demo today.
People are probably sick of me asking questions, so let's get to it. Can you explain and describe the demo, and then show it to us?

Yeah. For the demo today, I'm going to quickly walk through what it takes to get started connecting your CC1352P LaunchPad to Edge Impulse, and then we're going to go through designing our first machine learning algorithm in our platform.

First, you'll want to head to our website, edgeimpulse.com, and follow the guide for getting started with the TI LaunchPad. This guide leads you through the process of setting up your board to sample data and interface with the Edge Impulse studio, and it then provides tutorials on introductory machine learning projects that you can build and deploy to the LaunchPad. These help give you an idea of collecting data, training models, and then deploying them to devices. From these tutorials, you'll end up with an Edge Impulse project just like the one you see here.

When you're building a project like this, the first thing you want to do is create a data set. In the Data acquisition tab, you can collect and label data from a variety of sources, including live sampling from your TI LaunchPad. You'll use this to build up a robust data set for your project.

Once you have this data set, you can select from a wide variety of DSP and learning algorithms provided by Edge Impulse in our Impulse design tab. This is the algorithm that will learn to classify or label data. For each of these blocks, you keep full control of its operation, and you can even design your own, but these provide already optimized and tested DSP and learning algorithms that we, at Edge Impulse, have found to be effective across many applications.

Once you've defined this algorithm, you can train and evaluate your model via the DSP and Neural Network tabs. In the Neural Network tab specifically, you're able to define your model architecture using our UI, train the model, and then view metrics on the accuracy, performance, and compute cost of running the model. You can also choose to leverage powerful Edge Impulse-developed tools like the EON Tuner. This tool automatically searches for the optimal DSP and learning algorithm combination for your data set, yielding the highest accuracy while staying within the memory and compute limits of your device.

From here, you want to validate your model's performance via our Model testing tab. You'll build up a data set of real-world samples representing a variety of environments and contexts, and use it to benchmark your algorithm's performance as you go through the iterative process of adding more data, tweaking parameters, and adding capabilities to your algorithm.

Once you're happy with your testing results, you can either save your project via our Versioning tool or go straight to the Deployment tab to export your trained algorithm. Here, you can choose to deploy your model as a standalone C++ library, compatible with any C toolchain or development environment, or deploy directly to your LaunchPad through our prebuilt demo firmware. The demo firmware is the quickest way to deploy a machine learning model to an embedded device. It has real-time processing and classification of sensor data built in, and it outputs the classification results of the model you just trained over a serial port, where they can be viewed in a terminal. And that's about it.
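For reference, the standalone C++ library option described above can be consumed roughly as in the minimal sketch below. It assumes the standard entry points of the exported Edge Impulse library (run_classifier, signal_t, and the generated EI_CLASSIFIER_* constants); exact header paths and names can vary between SDK versions, and the feature buffer here is a placeholder for real sensor samples from the LaunchPad.

    // Minimal sketch: running an exported Edge Impulse model from your own firmware.
    // Assumes the SDK's standard entry points; names may differ by SDK version.
    #include <cstdio>
    #include <cstring>
    #include "edge-impulse-sdk/classifier/ei_run_classifier.h"

    // One window of raw sensor samples, e.g. filled by an accelerometer or
    // microphone driver on the CC1352P (placeholder data in this sketch).
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    // Callback the SDK uses to pull feature data into its DSP pipeline.
    static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
        memcpy(out_ptr, features + offset, length * sizeof(float));
        return 0;
    }

    int main() {
        // Wrap the raw buffer in the signal_t structure the classifier expects.
        signal_t signal;
        signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
        signal.get_data = &get_feature_data;

        // Run the DSP block and inference on the window, checking for errors.
        ei_impulse_result_t result = { 0 };
        EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
        if (err != EI_IMPULSE_OK) {
            printf("run_classifier failed (%d)\n", err);
            return 1;
        }

        // Print one confidence score per trained label, e.g. over the same
        // serial port the prebuilt demo firmware uses.
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            printf("%s: %.3f\n", result.classification[ix].label,
                   result.classification[ix].value);
        }
        return 0;
    }

On a bare-metal target you would call this from your main loop after filling the feature buffer with a fresh window of samples, and route the printf output to the board's serial port.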
With Edge Impulse, this whole process of developing and deploying a model takes only around 15 to 20 minutes, so I encourage everyone to work through the tutorials on our website and try this out for themselves. I hope you'll see the capabilities and potential of embedded machine learning with Edge Impulse and Texas Instruments, and thanks to everyone for watching.

Thank you, David, for walking us through that. As I said, it's really exciting to see the partnership between TI and Edge Impulse unlocking new possibilities and applications for embedded machine learning. Thank you so much for joining us, and thank you all for tuning in. As always, if you want to learn more, you can go to ti.com/wireless or edgeimpulse.com. We'll catch you in the next one. [MUSIC PLAYING]