author    Viet-Hoa Do <viet-hoa.do@arm.com>  2022-12-14 14:49:56 +0000
committer Viet-Hoa Do <viet-hoa.do@arm.com>  2022-12-23 14:11:34 +0000
commit    04f4620cf999846a44089c81720aa920edec6993 (patch)
tree      1c0080ac59d5b2aa500cd2b2ceffe0575e22a4b6 /src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp
parent    81fdaddaf36cb4c7ff0d2c52a370dd977a13dc72 (diff)
download  ComputeLibrary-04f4620cf999846a44089c81720aa920edec6993.tar.gz
Add multiple output support for dynamic fusion
* The dependency graph can now schedule any acyclic graph into
  a sequential list of operators. This is needed because the output
  operators now form branches in the graph.
* Fix the definition of input, output and intermediate tensors
  in GpuKernelComponentGroup to support non-linear but sequential
  lists of operators.
* Add a constraint on GpuOperatorGroup to enforce a strictly linear
  fusion style, while allowing output operators as the only form of
  branching.
Resolves: COMPMID-5771
Signed-off-by: Viet-Hoa Do <viet-hoa.do@arm.com>
Change-Id: I68de3a31a2456145081f0a397e4e61dd66327682
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/8823
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
Diffstat (limited to 'src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp')
-rw-r--r--  src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp | 37
1 file changed, 8 insertions, 29 deletions
diff --git a/src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp b/src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp
index 1f90aab477..669913ce30 100644
--- a/src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp
+++ b/src/dynamic_fusion/sketch/gpu/GpuKernelComponentGraph.cpp
@@ -93,40 +93,19 @@ GpuKernelComponentStream GpuKernelComponentGraph::fuse() const
 {
     // Obtain memory descriptor map
     const auto mem_map = assign_memory_descriptors(_tensors, _dependency_graph);
-    /// @note Fusion constraints (for kernel components) are exactly the same as the invariants of @ref GpuKernelComponentGroup
-    /// Fusion can be framed as a mathematical optimization problem:
-    /// Given fusion constraints, find the "best" fusion patterns possible
-    /// "Best" is ill-defined at the moment. For now we define "best" fusion pattern as one
-    /// which results in the least number of fused kernels ( @ref GpuKernelComponentGroup ) at the end
-
-    /// As the first iteration, we offer a sub-optimal algorithm here which ensures all
-    /// constraints are met, but provides no guarantee that the fusion pattern is optimal
     GpuKernelComponentStream stream{ _services, mem_map };
-    // Break down into linear groups of components (constraint 1), preserving topological order
-    const auto linear_graphs = _dependency_graph.topological_partition();
+    const auto op_seq = _dependency_graph.build_operators_sequence();

-    // Further divide up the linear groups based on rest of the fusion constraints (rely on component group's invariants)
-    for(const auto &graph : linear_graphs)
+    stream.new_component_group();
+    for(auto op : op_seq)
     {
-        for(unsigned int i = 0; i < graph.size(); ++i)
-        {
-            const auto comp = _components.at(graph[i].op).get();
-            // Each new linear graph signals a new component group in the stream
-            if(i == 0)
-            {
-                stream.new_component_group();
-            }
-            // If it violates the component group's invariant / fusion constraint, breaks up the stream by inserting a new group
-            bool success = stream.add_component(comp);
-            if(!success)
-            {
-                stream.new_component_group();
-                success = stream.add_component(comp);
-                ARM_COMPUTE_ERROR_ON(!success);
-            }
-        }
+        const auto component = _components.at(op.op).get();
+        const auto success   = stream.add_component(component);
+        ARM_COMPUTE_ERROR_ON(!success);
+        ARM_COMPUTE_UNUSED(success);
     }
+
     return stream;
 }
 } // namespace dynamic_fusion
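The commit message also mentions a new GpuOperatorGroup constraint: the fused region must stay strictly linear, with output operators as the only permitted branches. The check below is a hypothetical illustration of that rule (the `Op` struct and `is_linear_with_output_branches` are invented for this sketch and do not appear in the library):

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch of a "strictly linear, except output branches"
// constraint. An operator may fan out to any number of output
// operators, but to at most one non-output (compute) successor;
// otherwise the chain would genuinely branch.
struct Op
{
    bool             is_output;  // true if the operator only materializes a result tensor
    std::vector<int> successors; // indices of downstream operators
};

bool is_linear_with_output_branches(const std::vector<Op> &ops)
{
    for(const auto &op : ops)
    {
        int non_output_successors = 0;
        for(int s : op.successors)
        {
            if(!ops[s].is_output)
            {
                ++non_output_successors;
            }
        }
        // A second compute successor would create a real branch,
        // which the strictly linear fusion style forbids.
        if(non_output_successors > 1)
        {
            return false;
        }
    }
    return true;
}
```

Under this rule a chain `0 -> 1 -> 2` with an extra output operator hanging off operator 0 is still fusible, while a diamond of compute operators is rejected.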