25.12 release

Alex Tawse requested to merge feature/merge-executorch-support into main

This change introduces a framework abstraction layer and adds initial support for the ExecuTorch backend alongside TensorFlow Lite Micro (TFLM).

Highlights:

  • Introduced fwk/ module with backend-agnostic interfaces to decouple use-case logic from ML frameworks.
  • Isolated TFLM-specific logic into fwk/tflm; added an ExecuTorch-based implementation under fwk/ExecuTorch.
  • Enabled selection of the ML framework via the ML_FRAMEWORK_BUILD CMake flag.
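
To illustrate the shape of the abstraction layer described above, the sketch below shows a backend-agnostic model interface that a TFLM or ExecuTorch backend could implement. The class and method names here are assumptions for illustration, not the kit's actual fwk/ API.

```cpp
#include <string>

// Backend-agnostic model interface; use-case logic programs against this,
// never against a specific ML framework.
class IModel {
public:
    virtual ~IModel() = default;
    virtual bool Init() = 0;                       // allocate tensors, load weights
    virtual bool RunInference() = 0;               // execute one forward pass
    virtual std::string FrameworkName() const = 0; // e.g. "tflm" or "executorch"
};

// Stand-in backend showing how fwk/tflm or fwk/ExecuTorch would plug in.
class DummyModel : public IModel {
public:
    bool Init() override { m_ready = true; return true; }
    bool RunInference() override { return m_ready; } // fails until Init() runs
    std::string FrameworkName() const override { return "dummy"; }
private:
    bool m_ready = false;
};
```

With such an interface, a build flag only needs to select which concrete class is compiled in; the use-case code itself stays unchanged.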

ExecuTorch Integration:

  • Added ExecuTorch model and tensor classes.
  • Integrated native (host) inference flow.
  • Enabled AOT compilation pipeline via aot_arm_compiler and .pte generation.
  • Added portable ops shared library.
  • Added data layout conversions (NHWC → NCHW) in common API.
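
As a sketch of the data layout conversion mentioned above, the helper below repacks an NHWC buffer into NCHW order. The function name and signature are illustrative assumptions, not the common API's actual symbols.

```cpp
#include <cstddef>
#include <vector>

// Illustrative NHWC -> NCHW repacking for a contiguous float buffer.
// src holds n*h*w*c elements in NHWC order; the result is NCHW.
std::vector<float> NhwcToNchw(const std::vector<float>& src,
                              std::size_t n, std::size_t h,
                              std::size_t w, std::size_t c)
{
    std::vector<float> dst(src.size());
    for (std::size_t in = 0; in < n; ++in)
        for (std::size_t ih = 0; ih < h; ++ih)
            for (std::size_t iw = 0; iw < w; ++iw)
                for (std::size_t ic = 0; ic < c; ++ic)
                    dst[((in * c + ic) * h + ih) * w + iw] =
                        src[((in * h + ih) * w + iw) * c + ic];
    return dst;
}
```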

Infrastructure & Setup:

  • Refactored setup scripts with improved structure and Pylint compliance.
  • Introduced a --parallel flag to speed up model setup.
  • Extended the setup script to generate ExecuTorch models and to support conditional use-case setup.

Documentation:

  • Updated all use-case documentation to reflect framework compatibility.

This lays the groundwork for future extensibility, enabling use-cases to target multiple ML runtimes with minimal duplication.
