path: root/src/backends/neon/workloads/NeonBatchNormalizationWorkload.hpp
Age | Commit message | Author
2020-11-13 | IVGCVSW-5328-5329 Fuse Activation | Mike Kelly
* Added Fused Activation Optimization to both CL and Neon backends.
* Added Fused Activation support to all the CL and Neon workloads that support it.
* Changed ProfilingTest network to be a Convolution layer followed by an Abs layer rather than an Activation layer.
* Added IBackendInternal::OptimizeSubgraphView function that can accept a ModelOptions.
* Network will now call OptimizeSubgraphView passing in the ModelOptions.

Signed-off-by: Keith Davis <keith.davis@arm.com>
Signed-off-by: Mike Kelly <mike.kelly@arm.com>
Signed-off-by: Teresa Charlin <teresa.charlinreyes@arm.com>
Change-Id: Ib536ac3cbafc7d9b35c139ad9a65b7735262cd9d
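In a Neon batch normalization workload, a fused activation is expressed through Compute Library's ActivationLayerInfo argument to the function's configure call. The following is a minimal standalone sketch, assuming Compute Library's NEBatchNormalizationLayer and ActivationLayerInfo APIs; it is not the armnn workload code itself, and the tensor shapes and bounded-ReLU choice are purely illustrative.

    // Standalone illustration: fold a bounded ReLU into the Neon batch norm
    // function instead of scheduling a separate activation afterwards.
    #include <initializer_list>
    #include <arm_compute/core/Types.h>
    #include <arm_compute/runtime/Tensor.h>
    #include <arm_compute/runtime/NEON/functions/NEBatchNormalizationLayer.h>

    using namespace arm_compute;

    int main()
    {
        // W=16, H=8, C=8, N=1 data tensor plus one batch norm parameter per channel.
        const TensorShape dataShape(16U, 8U, 8U, 1U);
        const TensorShape paramShape(8U);

        Tensor input, output, mean, var, beta, gamma;
        input.allocator()->init(TensorInfo(dataShape, 1, DataType::F32));
        output.allocator()->init(TensorInfo(dataShape, 1, DataType::F32));
        for (Tensor* t : {&mean, &var, &beta, &gamma})
        {
            t->allocator()->init(TensorInfo(paramShape, 1, DataType::F32));
        }

        // The fused activation: executed inside the batch norm kernel, so no
        // separate Activation workload is needed.
        const ActivationLayerInfo fusedRelu(
            ActivationLayerInfo::ActivationFunction::BOUNDED_RELU, 6.0f);

        NEBatchNormalizationLayer batchNorm;
        batchNorm.configure(&input, &output, &mean, &var, &beta, &gamma,
                            0.001f /*epsilon*/, fusedRelu);

        for (Tensor* t : {&input, &output, &mean, &var, &beta, &gamma})
        {
            t->allocator()->allocate();  // tensors are left unfilled in this sketch
        }
        batchNorm.run();
        return 0;
    }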
2019-01-08 | Refactor: Don't include all ComputeLibrary function definitions everywhere. | Matthew Bentham
Just include the function definition that is specifically needed for each workload. Also, tighten up the scope where Compute Library functions are available.
Knocks about 30 seconds off a 4m30s single-threaded compile of the Neon workloads.

Change-Id: Idac438f3bc77ff978295fbc9505cb42447def145
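As a rough illustration of the include pattern this change describes, the sketch below holds the Compute Library function behind the arm_compute::IFunction interface so that only the translation unit that constructs it needs the full NEBatchNormalizationLayer definition, rather than the umbrella NEFunctions.h. The class name and layout are hypothetical stand-ins, not the actual armnn header.

    // MyNeonBatchNormWorkload.hpp -- hypothetical header used for illustration.
    #pragma once

    #include <memory>

    // Only the lightweight interface header is pulled in here, not the umbrella
    // <arm_compute/runtime/NEON/NEFunctions.h> that drags in every Neon function.
    #include <arm_compute/runtime/IFunction.h>

    class MyNeonBatchNormWorkload
    {
    public:
        MyNeonBatchNormWorkload();
        void Execute() const;

    private:
        // Held through the IFunction interface, so files that include this
        // header never see the concrete NEBatchNormalizationLayer type.
        std::unique_ptr<arm_compute::IFunction> m_Layer;
    };

    // MyNeonBatchNormWorkload.cpp -- the one place the specific function
    // definition is included.
    #include <arm_compute/runtime/NEON/functions/NEBatchNormalizationLayer.h>

    MyNeonBatchNormWorkload::MyNeonBatchNormWorkload()
        : m_Layer(std::make_unique<arm_compute::NEBatchNormalizationLayer>())
    {
        // A real workload would call configure(...) on the concrete layer here.
    }

    void MyNeonBatchNormWorkload::Execute() const
    {
        m_Layer->run();
    }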
2018-12-31 | MLCE-80 Remove strong typing from NeonBatchNormalization | Matthew Bentham
Technical debt work towards adding some new Neon workloads.

Change-Id: I08ab6dd14d0e89d4ebc8a878fb69caa5681012bf
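For context, "strong typing" here refers to armnn's pattern of constraining a workload to particular data types through its base class. The sketch below uses simplified, self-contained stand-ins (the real BaseWorkload/TypedWorkload templates in armnn differ in detail) to show the before/after shape of such a change.

    // Simplified stand-ins for armnn's workload base classes, for illustration only.
    #include <cassert>
    #include <initializer_list>

    enum class DataType { Float16, Float32, QAsymmU8 };

    struct BatchNormQueueDescriptor
    {
        DataType m_InputDataType = DataType::Float32;
    };

    // "Strongly typed": only the data types listed as template arguments are accepted.
    template <typename Descriptor, DataType... AllowedTypes>
    class TypedWorkload
    {
    public:
        explicit TypedWorkload(const Descriptor& desc) : m_Desc(desc)
        {
            bool supported = false;
            for (DataType t : {AllowedTypes...})
            {
                supported = supported || (t == desc.m_InputDataType);
            }
            assert(supported && "data type not supported by this workload");
        }
    protected:
        Descriptor m_Desc;
    };

    // Untyped base: accepts any data type; validation happens elsewhere.
    template <typename Descriptor>
    class BaseWorkload
    {
    public:
        explicit BaseWorkload(const Descriptor& desc) : m_Desc(desc) {}
    protected:
        Descriptor m_Desc;
    };

    // Before: a float-only workload, e.g. TypedWorkload<..., DataType::Float32>.
    // After: derive from the untyped base so new data types need no new class.
    class NeonBatchNormalizationWorkload : public BaseWorkload<BatchNormQueueDescriptor>
    {
    public:
        using BaseWorkload<BatchNormQueueDescriptor>::BaseWorkload;
        void Execute() const { /* dispatch to Compute Library */ }
    };

    int main()
    {
        BatchNormQueueDescriptor desc;
        NeonBatchNormalizationWorkload workload(desc);
        workload.Execute();
        return 0;
    }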