Add user example for FP16 MatMul using Arm® Neon™
This example demonstrates the FP16 packing and matmul routines, which:
- Pack the bias and the weights together into a single tensor.
- Perform a matrix multiplication of the activations and the packed tensor.
All tensors are in half precision floating-point (FP16) data type.
Signed-off-by: Jakub Sujak <jakub.sujak@arm.com>