Add user example for FP16 MatMul using Arm® Neon™

Jakub Sujak requested to merge example/fp16 into main

This example demonstrates how to use the FP16 packing and matmul routines, which:

  1. Packs the bias and the weights together into a single tensor.

  2. Performs a matrix multiplication of the activations and the packed tensor.

All tensors use the half-precision floating-point (FP16) data type.
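
For orientation, the sketch below illustrates the two steps as a plain scalar reference, assuming a packed layout in which each output column stores its bias value followed by its weights. The type `__fp16`, the packed layout, and the helper names `pack_bias_and_weights` / `matmul_f16_packed` are assumptions made for illustration only; they are not the routines added by this merge request.

```c
#include <stddef.h>

// __fp16 requires an AArch64 (or Armv7 with FP16 extension) toolchain.
typedef __fp16 fp16_t;

// Step 1 (hypothetical layout): pack the bias and the (K x N) weights into a
// single buffer. For each output column n, store bias[n] followed by the K
// weights of that column.
static void pack_bias_and_weights(size_t k, size_t n,
                                  const fp16_t *bias,     // [n]
                                  const fp16_t *weights,  // [k * n], row-major
                                  fp16_t *packed)         // [(k + 1) * n]
{
    for (size_t col = 0; col < n; ++col) {
        fp16_t *dst = packed + col * (k + 1);
        dst[0] = bias[col];
        for (size_t row = 0; row < k; ++row) {
            dst[1 + row] = weights[row * n + col];
        }
    }
}

// Step 2: multiply the (M x K) activations by the packed tensor to produce an
// (M x N) output. The bias stored at the head of each packed column seeds the
// accumulator.
static void matmul_f16_packed(size_t m, size_t n, size_t k,
                              const fp16_t *lhs,     // [m * k], row-major
                              const fp16_t *packed,  // [(k + 1) * n]
                              fp16_t *dst)           // [m * n], row-major
{
    for (size_t row = 0; row < m; ++row) {
        for (size_t col = 0; col < n; ++col) {
            const fp16_t *rhs = packed + col * (k + 1);
            float acc = (float)rhs[0];  // bias
            for (size_t i = 0; i < k; ++i) {
                acc += (float)lhs[row * k + i] * (float)rhs[1 + i];
            }
            dst[row * n + col] = (fp16_t)acc;
        }
    }
}
```

The example in this merge request performs the same two steps with the library's Neon-optimized FP16 kernels rather than this scalar reference.
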

Signed-off-by: Jakub Sujak jakub.sujak@arm.com
