author     Keith Davis <keith.davis@arm.com>  2021-07-20 11:25:22 +0100
committer  Keith Davis <keith.davis@arm.com>  2021-08-04 11:49:16 +0100
commit     554fa09a0f3d6c9c572634c9d2de9bfb6c3218b0 (patch)
tree       1820a2cadcc1f34667199acff2d044e5d2083ea2 /tests/InferenceModel.hpp
parent     96fd98c28441618fbdf9376fe46a368ef06b19e1 (diff)
download   armnn-554fa09a0f3d6c9c572634c9d2de9bfb6c3218b0.tar.gz
IVGCVSW-5980 JSON profiling output
* Add new ProfilingDetails class to construct operator details string
* Add new macro which helps append layer details to ostream
* Add ProfilingEnabled to NetworkProperties so that profiling can be
  enabled when loading the network
* Add further optional info to WorkloadInfo specific to convolutions
* Generalise some JsonPrinter functions into JsonUtils for reusability
* Remove explicit enabling of profiling within InferenceModel as it is
  now done when loading the network
* Add ProfilingDetails macros to ConvolutionWorkloads for validation
Signed-off-by: Keith Davis <keith.davis@arm.com>
Change-Id: Ie84bc7dc667e72e6bcb635544f9ead7af1765690
Diffstat (limited to 'tests/InferenceModel.hpp')
-rw-r--r--  tests/InferenceModel.hpp  15
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/tests/InferenceModel.hpp b/tests/InferenceModel.hpp
index 9eb3eab3d5..31075939ce 100644
--- a/tests/InferenceModel.hpp
+++ b/tests/InferenceModel.hpp
@@ -485,7 +485,8 @@ public:
         const auto loading_start_time = armnn::GetTimeNow();
         armnn::INetworkProperties networkProperties(params.m_AsyncEnabled,
                                                     armnn::MemorySource::Undefined,
-                                                    armnn::MemorySource::Undefined);
+                                                    armnn::MemorySource::Undefined,
+                                                    enableProfiling);
         std::string errorMessage;
         ret = m_Runtime->LoadNetwork(m_NetworkIdentifier, std::move(optNet), errorMessage, networkProperties);
@@ -563,10 +564,6 @@ public:
         }
         std::shared_ptr<armnn::IProfiler> profiler = m_Runtime->GetProfiler(m_NetworkIdentifier);
-        if (profiler)
-        {
-            profiler->EnableProfiling(m_EnableProfiling);
-        }
         // Start timer to record inference time in EnqueueWorkload (in milliseconds)
         const auto start_time = armnn::GetTimeNow();
@@ -617,10 +614,6 @@ public:
         }
         std::shared_ptr<armnn::IProfiler> profiler = m_Runtime->GetProfiler(m_NetworkIdentifier);
-        if (profiler)
-        {
-            profiler->EnableProfiling(m_EnableProfiling);
-        }
         // Start timer to record inference time in EnqueueWorkload (in milliseconds)
         const auto start_time = armnn::GetTimeNow();
@@ -672,10 +665,6 @@ public:
         }
         std::shared_ptr<armnn::IProfiler> profiler = m_Runtime->GetProfiler(m_NetworkIdentifier);
-        if (profiler)
-        {
-            profiler->EnableProfiling(m_EnableProfiling);
-        }
         m_Threadpool->Schedule(m_NetworkIdentifier,
                                MakeInputTensors(inputContainers),