ExecuTorch support
This change introduces a framework abstraction layer and adds initial support for the ExecuTorch backend alongside TensorFlow Lite Micro (TFLM).
Highlights:
- Introduced `fwk/module` with backend-agnostic interfaces to decouple use-case logic from ML frameworks.
- Isolated TFLM-specific logic into `fwk/tflm`; added an ExecuTorch-based implementation under `fwk/ExecuTorch`.
- Enabled selection of the ML framework via the `ML_FRAMEWORK_BUILD` CMake flag.
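With the new flag, the framework is chosen at configure time. A minimal sketch of such a configure step is shown below; the flag name `ML_FRAMEWORK_BUILD` comes from this change, but the value names (`tflite_micro`, `executorch`) and the build directory are assumptions for illustration only:

```shell
# Hypothetical configure/build invocation; value names are illustrative,
# only the ML_FRAMEWORK_BUILD flag itself is introduced by this change.
cmake -B build -DML_FRAMEWORK_BUILD=executorch
cmake --build build

# Switching back to TFLM would be a reconfigure with a different value:
cmake -B build-tflm -DML_FRAMEWORK_BUILD=tflite_micro
```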
ExecuTorch Integration:
- Added ExecuTorch model and tensor classes.
- Integrated native (host) inference flow.
- Enabled the AOT compilation pipeline via `aot_arm_compiler` and `.pte` generation.
- Added the portable ops shared library.
- Added data layout conversions (NHWC → NCHW) in common API.
Infrastructure & Setup:
- Refactored setup scripts with improved structure and Pylint compliance.
- Introduced a `--parallel` flag to speed up model setup.
- Extended the script to generate ExecuTorch models and support conditional use-case setup.
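A typical invocation of the refactored setup script might then look like the sketch below. Only the `--parallel` flag comes from this change; the script name is an assumption for illustration:

```shell
# Hypothetical usage; the script name is illustrative, only the
# --parallel flag is introduced by this change.
python3 set_up_default_resources.py --parallel
```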
Documentation:
- Updated all use-case documentation to reflect framework compatibility.
This lays the groundwork for future extensibility, enabling use-cases to support multiple ML runtimes with minimal duplication.
Edited by Alex Tawse