author      Viet-Hoa Do <viet-hoa.do@arm.com>  2022-12-14 14:49:56 +0000
committer   Viet-Hoa Do <viet-hoa.do@arm.com>  2022-12-23 14:11:34 +0000
commit      04f4620cf999846a44089c81720aa920edec6993 (patch)
tree        1c0080ac59d5b2aa500cd2b2ceffe0575e22a4b6 /src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp
parent      81fdaddaf36cb4c7ff0d2c52a370dd977a13dc72 (diff)
download    ComputeLibrary-04f4620cf999846a44089c81720aa920edec6993.tar.gz
Add multiple output support for dynamic fusion
* The dependency graph can now schedule any acyclic graph into
  a sequential list of operators. This is needed because the output
  operators now form branches in the graph.
* Fix the definitions of input, output and intermediate tensors
  in GpuKernelComponentGroup to support a non-linear but sequential
  list of operators.
* Add a constraint on GpuOperatorGroup to enforce a strictly linear
  fusion style, while allowing the output operator as the only form of
  branching.
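
The scheduling described in the first bullet can be sketched as a topological sort (Kahn's algorithm): repeatedly emit operators whose producers have all been emitted. This is a minimal illustrative sketch, not the actual dependency-graph code; the `OpId`, `deps`, and `schedule` names are hypothetical.

```cpp
#include <map>
#include <set>
#include <vector>

using OpId = int;

// Hypothetical sketch: flatten an acyclic operator graph into a sequential
// list. `deps` maps each operator to the set of operators it depends on.
std::vector<OpId> schedule(const std::map<OpId, std::set<OpId>> &deps)
{
    std::map<OpId, int>               indegree;  // unmet dependencies per op
    std::map<OpId, std::vector<OpId>> consumers; // reverse edges
    for(const auto &node : deps)
    {
        indegree[node.first] += 0; // ensure every node is present
        for(OpId producer : node.second)
        {
            ++indegree[node.first];
            consumers[producer].push_back(node.first);
        }
    }
    std::vector<OpId> ready, order;
    for(const auto &node : indegree)
    {
        if(node.second == 0)
            ready.push_back(node.first);
    }
    while(!ready.empty())
    {
        const OpId op = ready.back();
        ready.pop_back();
        order.push_back(op);
        for(OpId c : consumers[op])
        {
            if(--indegree[c] == 0)
                ready.push_back(c);
        }
    }
    return order; // fewer entries than nodes would indicate a cycle
}
```

Branches (e.g. two output operators consuming the same intermediate tensor) are handled naturally: both become ready once their shared producer is emitted, and either serialization order is valid.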
Resolves: COMPMID-5771
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I68de3a31a2456145081f0a397e4e61dd66327682
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8823
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp')
-rw-r--r--  src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp b/src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp
index 017536df6c..60c2281433 100644
--- a/src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp
+++ b/src/dynamic_fusion/sketch/gpu/operators/GpuOutput.cpp
@@ -81,7 +81,7 @@ Status GpuOutput::validate_op(const GpuWorkloadSketch &sketch,
     const auto group = sketch.implementation().operator_group();
     const auto op = group.new_operator(operator_type, tensors);
-    const auto success = group.try_add_operator(op);
+    const auto success = group.try_add_operator(op, true);

     ARM_COMPUTE_RETURN_ERROR_ON_MSG(!success, "This operator cannot be fused into the workload.");
     ARM_COMPUTE_UNUSED(success);
@@ -133,7 +133,7 @@ void GpuOutput::create_op(GpuWorkloadSketch &sketch,
     tensors.add_const_tensor(ACL_DST_0, dst);

     const Operator op = sketch.implementation().operator_group().new_operator(operator_type, tensors);
-    sketch.implementation().operator_group().add_operator(op);
+    sketch.implementation().operator_group().add_operator(op, true);
 }
 } // namespace dynamic_fusion
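
The diff threads a second argument through `try_add_operator`/`add_operator` so that output operators can declare themselves as the one permitted form of branching. The sketch below models the constraint from the third bullet under simplified assumptions; `Op`, `OperatorGroup`, and the single-producer model are hypothetical, not the real GpuOperatorGroup API.

```cpp
#include <cassert>
#include <vector>

// Hypothetical toy model: each operator has one producer (src), or -1 for a
// graph input. Non-output operators must extend the linear fusion chain;
// output operators may branch from any already-fused operator.
struct Op
{
    int id;
    int src; // id of the producer operator, or -1 for a graph input
};

class OperatorGroup
{
public:
    bool try_add_operator(const Op &op, bool is_output = false) const
    {
        if(_ops.empty())
            return true;
        if(is_output)
            return fused(op.src);  // outputs may branch from any fused op
        return op.src == _tail;    // others must extend the linear chain
    }

    void add_operator(const Op &op, bool is_output = false)
    {
        assert(try_add_operator(op, is_output));
        _ops.push_back(op.id);
        if(!is_output)
            _tail = op.id; // outputs do not advance the chain tail
    }

private:
    bool fused(int id) const
    {
        for(int v : _ops)
        {
            if(v == id)
                return true;
        }
        return false;
    }
    std::vector<int> _ops{};
    int _tail = -1;
};
```

Under this model, two output operators reading the same intermediate tensor are both accepted (the multiple-output case this patch enables), while a second non-output consumer of an earlier tensor is rejected as a disallowed branch.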