The NXP® eIQ® machine learning (ML) software development environment enables the use of ML algorithms on NXP EdgeVerse™ microcontrollers and microprocessors, including i.MX RT crossover MCUs and i.MX family application processors. eIQ ML software includes an ML workflow tool called eIQ Toolkit, along with inference engines, neural network compilers and optimized libraries. This software leverages open-source and proprietary technologies and is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease.
Augment large language models (LLMs) for generative AI solutions. When used with i.MX applications processors, these solutions make it easier to deploy intelligence to the edge by training LLMs on specific contextual data.
On-demand training modules to help you with your ML application development.
Application Example Software
The Application Code Hub (ACH) repository enables engineers to easily find software examples, code snippets, application software packs and demos developed by our experts.
Machine Learning Models
The NXP eIQ® Model Zoo offers pre-trained models for a variety of domains and tasks that are ready to be deployed on supported products.
NXP’s eIQ machine learning software development platform makes machine learning at the edge possible for all levels of developers, from those just getting started to ML experts.
The eIQ Auto deep learning (DL) toolkit enables developers to introduce DL algorithms into their applications while continuing to satisfy automotive standards.
Discover five key factors developers need to consider when choosing a processing solution for their edge ML projects in this whitepaper from ABI Research.
Sign in to read the whitepaper

This application note focuses on handwritten digit recognition on embedded systems through deep learning, using i.MX RT MCUs, MCUXpresso SDK and eIQ technology.

Read the application note

eIQ software leverages inference engines, neural network compilers, optimized libraries and open-source technology, allowing AI and ML enablement on edge nodes.

Read the factsheet

This application note describes the deployment of eIQ’s LiteRT inference engine (formerly known as TensorFlow Lite), with NPU acceleration support, on Android OS for i.MX applications processors.

Read the application note
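Deploying a model through an engine like LiteRT on an edge device typically means running it in int8, so float inputs must first be mapped to integers with the model's affine quantization parameters (q = round(x / scale) + zero_point). The sketch below illustrates that mapping in plain NumPy for a digit-recognition-style input; the scale and zero-point values are illustrative placeholders, not taken from any specific eIQ or Model Zoo model, where they would instead be read from the converted model's input tensor metadata.

```python
import numpy as np

def quantize_input(x, scale, zero_point):
    """Map float input to int8 via the affine scheme q = round(x/scale) + zp,
    the convention used by int8-quantized LiteRT/TensorFlow Lite tensors."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize_output(q, scale, zero_point):
    """Inverse mapping back to float: x = (q - zp) * scale."""
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative parameters for a 28x28 grayscale digit image normalized
# to [0, 1]; real values come from the model's quantization metadata.
scale, zero_point = 1.0 / 255.0, -128
image = np.full((28, 28), 0.5, dtype=np.float32)

q_image = quantize_input(image, scale, zero_point)
restored = dequantize_output(q_image, scale, zero_point)
```

After inference, the model's int8 output tensor would be dequantized the same way with its own scale and zero-point before interpreting the scores.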