What is the machine learning platform?
The machine learning platform is part of the Linaro Machine Intelligence Initiative and is the home for open-source software libraries (Arm NN and Arm Compute Library) that optimise the execution of machine learning workloads on Arm-based processors.
These libraries enable advanced, ultra-efficient inference at the edge. Designed specifically for machine learning (ML) and neural network (NN) workloads, they scale across devices of every class, from the Internet of Things (IoT) to connected cars and servers.
| Library | Description |
| --- | --- |
| Arm NN | Arm NN is an inference engine for CPUs, GPUs and NPUs. It bridges the gap between existing neural network frameworks and the underlying IP, translating networks from frameworks such as TensorFlow and Caffe so that they run efficiently, without modification, on Arm Cortex CPUs and Arm Mali GPUs. For more details see: https://developer.arm.com/products/processors/machine-learning/arm-nn |
| Arm Compute Library | The Arm Compute Library is a comprehensive collection of software functions implemented for the Arm Cortex-A family of CPUs and the Arm Mali family of GPUs. It is a convenient repository of low-level optimised functions that developers can use individually or combine into complex pipelines to accelerate their algorithms and applications. For more details see: https://developer.arm.com/technologies/compute-library |
The aim of the machine learning platform is to provide a home for the development of open-source software that optimises and simplifies running machine learning workloads on Arm-based processors.
We aim to create a thriving community of developers working together to make the machine learning platform a key resource for achieving ultra-efficient inference at the edge.
If you or your company are interested in participating in this effort, please visit the Contributing page. We welcome all feedback and participation in the development of the machine learning platform.