path: root/src/core/NEON/kernels
author: Gunes Bayir <gunes.bayir@arm.com> 2024-03-06 09:58:40 +0000
committer: Gunes Bayir <gunes.bayir@arm.com> 2024-03-11 10:02:41 +0000
commit: 9167c9cd1c684218f76a3c0ec97574dd6f381b98 (patch)
tree: 7a9608f1f6861ad164697a0bbdc784be92a8d3e5 /src/core/NEON/kernels
parent: e77736fe4150648d2fd0649cf61c1bade928d69d (diff)
download: ComputeLibrary-9167c9cd1c684218f76a3c0ec97574dd6f381b98.tar.gz
Prefer indirect Gemm vs. Direct convolution if supported
Indirect GEMM uses the optimized assembly path, while Direct Conv uses the fallback Acl kernel for convolution. In certain cases where the input tensor is large and the filter size is greater than 7 (e.g. 9x9 filters), the heuristics fall back to the Direct Conv algorithm even though the assembly path could still be preferred if the data layout is NHWC. This is more important when SME2 kernels are present.

Resolves: COMPMID-6900
Change-Id: Ia611c975eee0423615113fcaeaa8f9eef0421456
Signed-off-by: Gunes Bayir <gunes.bayir@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/11254
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Anitha Raj <Anitha.Raj@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
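
As a rough illustration of the heuristic described in this commit message, here is a minimal C++ sketch. It is not the ComputeLibrary API: the names ConvChoice, ConvProblem and select_conv_method are invented for illustration. The idea is that the optimized assembly (indirect GEMM) path is preferred whenever it is usable (NHWC layout, matching assembly kernel available, e.g. SME2), and the generic direct-convolution kernel is only a fallback, even for large filters such as 9x9.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical types for illustration only (not ComputeLibrary identifiers).
enum class DataLayout { NCHW, NHWC };
enum class ConvChoice { IndirectGemm, DirectConv };

struct ConvProblem
{
    uint32_t   kernel_w;
    uint32_t   kernel_h;
    DataLayout layout;
    bool       assembly_indirect_gemm_supported; // e.g. an SME2/NEON assembly kernel exists
};

// Prefer the optimized assembly (indirect GEMM) path whenever it can run;
// fall back to the generic direct-convolution kernel only otherwise.
ConvChoice select_conv_method(const ConvProblem &p)
{
    if (p.layout == DataLayout::NHWC && p.assembly_indirect_gemm_supported)
    {
        // Before this change, large filters (kernel size > 7, e.g. 9x9) could be
        // routed to DirectConv even though the assembly path was usable.
        return ConvChoice::IndirectGemm;
    }
    return ConvChoice::DirectConv;
}

int main()
{
    // A 9x9 filter in NHWC with an assembly kernel available now picks indirect GEMM.
    ConvProblem p{9, 9, DataLayout::NHWC, /*assembly_indirect_gemm_supported=*/true};
    std::cout << (select_conv_method(p) == ConvChoice::IndirectGemm ? "indirect GEMM" : "direct conv")
              << '\n';
    return 0;
}
```

The essence of the change is the ordering of the checks: availability of the assembly path is consulted before any large-filter fallback, so filters larger than 7x7 no longer force the generic kernel when an assembly kernel is present.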
Diffstat (limited to 'src/core/NEON/kernels')
0 files changed, 0 insertions, 0 deletions