author     Viet-Hoa Do <viet-hoa.do@arm.com>  2022-12-13 13:09:10 +0000
committer  Viet-Hoa Do <viet-hoa.do@arm.com>  2022-12-16 15:17:51 +0000
commit     b84e25313e5dc7acbc03623e1e071e845047c111 (patch)
tree       fbee083f1262017555c64c3280da45e2b638992e /src/dynamic_fusion/sketch/utils/DependencyGraph.h
parent     a0ae8d2e6c57fd95c0edaf659b9df8b8c540d051 (diff)
download   ComputeLibrary-b84e25313e5dc7acbc03623e1e071e845047c111.tar.gz
Add output operator for dynamic fusion
* The output of the fused operator must be explicitly specified
  using the GpuOutput operator.
* Any temporary tensor used to connect the output of one operator
  to the input of another is marked as no-alloc and is not
  allocated as a tensor in memory.
Resolves: COMPMID-5771
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I5ae8e800f8f737db23a055a92b01c4f1d78c3bb8
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8794
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: SiCong Li <sicong.li@arm.com>
Reviewed-by: Gian Marco Iodice <gianmarco.iodice@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Benchmark: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'src/dynamic_fusion/sketch/utils/DependencyGraph.h')
-rw-r--r--  src/dynamic_fusion/sketch/utils/DependencyGraph.h | 27
1 file changed, 27 insertions, 0 deletions
diff --git a/src/dynamic_fusion/sketch/utils/DependencyGraph.h b/src/dynamic_fusion/sketch/utils/DependencyGraph.h
index 03678defae..633c5e4263 100644
--- a/src/dynamic_fusion/sketch/utils/DependencyGraph.h
+++ b/src/dynamic_fusion/sketch/utils/DependencyGraph.h
@@ -417,6 +417,33 @@ public:
         }
         return tensors;
     }
+    /** Get intermediate tensors of the whole graph.
+     *
+     * @return std::vector<TensorId>
+     */
+    std::vector<TensorId> intermediate_tensors() const
+    {
+        std::vector<TensorId> tensors;
+
+        // If a tensor is used to connect the input of an operator and the output of another operator,
+        // it is not allocated in the memory. The tensor exists as a temporary variable only.
+        for(auto src_tensor : _adj_src_ops)
+        {
+            if(!src_tensor.second.empty())
+            {
+                const auto dst_tensor = _adj_dst_ops.find(src_tensor.first);
+                if(dst_tensor != _adj_dst_ops.end())
+                {
+                    if(!dst_tensor->second.empty())
+                    {
+                        tensors.push_back(src_tensor.first);
+                    }
+                }
+            }
+        }
+
+        return tensors;
+    }
     /** Get all root ops. Root ops can also be referred to as "src ops" of the whole graph
      *
      * @return std::vector<OperatorId>