Yunus Kalkan authored
* Add llmFramework build option to select between LLM backends (e.g., llama.cpp, onnxruntime-genai)
* Integrate llmFramework selection into the native build via Gradle arguments
* Replace the shell script with a Python script for pushing models and configuration files
* Add ONNX Runtime-specific configuration files (e.g., onnxConfigUser.json)
* Update README with LLM framework usage instructions and build options
* Update the referenced LLM repository commit to enable ONNX Runtime compatibility

Change-Id: Ie998213a79828fc647b8e9b371e6ab0bcb2b0f01
Signed-off-by: Yunus Kalkan <yunus.kalkan@arm.com>
b37cdecb