ArmNN 22.02
Graph Class Reference

#include <Graph.hpp>

Classes

struct  InputLayersAccessor
 Wrapper class returned by Graph::GetInputLayers() More...
 
class  LayerInGraph< InputLayer >
 Inputs add/remove their binding id to m_InputIds in the graph. More...
 
class  LayerInGraph< OutputLayer >
 Outputs add/remove their binding id to m_OutputIds in the graph. More...
 
struct  OutputLayersAccessor
 Wrapper class returned by Graph::GetOutputLayers() More...
 

Public Types

using LayerList = std::list< Layer * >
 
using Iterator = LayerList::const_iterator
 
using IteratorDifference = Iterator::difference_type
 
using ConstIterator = TransformIterator< decltype(&PtrCast< const Layer >), Iterator >
 
using ConstIteratorInputs = TransformIterator< decltype(&PtrCast< const InputLayer >), Iterator >
 
using ConstIteratorOutputs = TransformIterator< decltype(&PtrCast< const OutputLayer >), Iterator >
 

Public Member Functions

template<typename Func >
void ForEachLayer (Func func) const
 
 Graph (bool shapeInferenceMethod=false)
 
 Graph (const Graph &other)
 
Graph & operator= (const Graph &other)=delete
 
 Graph (Graph &&other)
 
Graph & operator= (Graph &&other)
 
 ~Graph ()
 
Status Print () const
 
Status SerializeToDot (std::ostream &stream)
 
template<typename LayerT , typename... Args>
LayerT * AddLayer (Args &&... args)
 Adds a new layer of type LayerT, constructed with the passed arguments, to the graph. More...
 
template<typename LayerT , typename... Args>
LayerT * InsertNewLayer (InputSlot &insertBefore, Args &&... args)
 Inserts a new layer between the output slot currently connected to insertBefore and insertBefore itself. More...
 
template<typename LayerT , typename... Args>
LayerT * InsertNewLayer (OutputSlot &insertAfter, Args &&... args)
 Inserts a new layer between insertAfter and the input slot(s) currently connected to it. More...
 
void EraseLayer (Iterator pos)
 Deletes the layer at the specified position. More...
 
template<typename LayerT >
void EraseLayer (LayerT *&layer)
 Deletes the layer. More...
 
Iterator begin ()
 Returns iterator pointing to the beginning of the list. Lowercase for range-based for loops. More...
 
Iterator end ()
 Returns iterator pointing to the end of the list. Lowercase for range-based for loops. More...
 
ConstIterator begin () const
 Returns const iterator pointing to the beginning of the list. Lowercase for range-based for loops. More...
 
ConstIterator end () const
 Returns const iterator pointing to the end of the list. Lowercase for range-based for loops. More...
 
ConstIterator cbegin () const
 Returns const iterator pointing to the beginning of the list. Lowercase for range-based for loops. More...
 
ConstIterator cend () const
 Returns const iterator pointing to the end of the list. Lowercase for range-based for loops. More...
 
Graph & TopologicalSort ()
 Sorts layers in topological order and returns this. More...
 
const Graph & TopologicalSort () const
 
size_t GetNumInputs () const
 
size_t GetNumOutputs () const
 
InputLayersAccessor GetInputLayers () const
 Returns a wrapper object with begin(), end() methods to iterate over the input layers in a range-based for loop. More...
 
OutputLayersAccessor GetOutputLayers () const
 Returns a wrapper object with begin(), end() methods to iterate over the output layers in a range-based for loop. More...
 
size_t GetNumLayers () const
 
Status AllocateDynamicBuffers ()
 Allocates memory for all tensors under the output tensor handlers of each layer. More...
 
void AddCompatibilityLayers (std::map< BackendId, std::unique_ptr< class IBackendInternal >> &backends, TensorHandleFactoryRegistry &registry)
 Modifies the graph in-place, removing edges connecting layers using different compute devices, and relinking them via intermediary copy layers. More...
 
void SubstituteSubgraph (SubgraphView &subgraph, IConnectableLayer *substituteLayer)
 Substitutes the given sub-graph with either a new layer or a new sub-graph. More...
 
void SubstituteSubgraph (SubgraphView &subgraph, const SubgraphView &substituteSubgraph)
 
void VerifyConstantLayerSetTensorInfo () const
 For each ConstantLayer in Graph, ensures TensorInfo is set on all output slots. More...
 
void InferTensorInfos ()
 
void AttachObservable (IGraphObservable *const observable, GraphEvent notifyOnEvent)
 
void DetachObservable (IGraphObservable *const observable, GraphEvent notifyOnEvent)
 
Iterator GetPosInGraph (Layer &layer)
 Gets the position of a layer in the graph. More...
 
const std::shared_ptr< IProfiler > & GetProfiler () const
 

Static Public Member Functions

template<typename LayerType >
static LayerType * PtrCast (Layer *const layer)
 

Friends

class SubgraphView
 

Detailed Description

Definition at line 30 of file Graph.hpp.

Member Typedef Documentation

◆ ConstIterator

using ConstIterator = TransformIterator<decltype(&PtrCast<const Layer>), Iterator>

Definition at line 56 of file Graph.hpp.

◆ ConstIteratorInputs

Definition at line 57 of file Graph.hpp.

◆ ConstIteratorOutputs

Definition at line 58 of file Graph.hpp.

◆ Iterator

using Iterator = LayerList::const_iterator

Definition at line 53 of file Graph.hpp.

◆ IteratorDifference

using IteratorDifference = Iterator::difference_type

Definition at line 54 of file Graph.hpp.

◆ LayerList

using LayerList = std::list<Layer*>

Definition at line 50 of file Graph.hpp.

Constructor & Destructor Documentation

◆ Graph() [1/3]

Graph ( bool  shapeInferenceMethod = false)
inline

Definition at line 98 of file Graph.hpp.

References Graph::operator=().

 99  : m_LayersInOrder(true)
100  , m_ShapeInferenceMethod(shapeInferenceMethod ? ShapeInferenceMethod::InferAndValidate :
101  ShapeInferenceMethod::ValidateOnly)
102  , m_Profiler(std::make_shared<IProfiler>())
103  {}
Validate all output shapes.
Infer missing output shapes and validate all output shapes.

◆ Graph() [2/3]

Graph ( const Graph &  other)

Definition at line 27 of file Graph.cpp.

References Layer::BeginOutputSlots(), Layer::Clone(), and Layer::GetInputSlot().

28 : m_LayersInOrder(other.m_LayersInOrder)
29 , m_Profiler(other.m_Profiler)
30 {
31  std::unordered_map<const Layer*, Layer*> otherToClonedMap;
32 
33  for (auto&& otherLayer : other.m_Layers)
34  {
35  Layer* const layer = otherLayer->Clone(*this);
36  otherToClonedMap.emplace(otherLayer, layer);
37  }
38 
39  // Copies slot connections.
40  for (auto&& otherLayer : other.m_Layers)
41  {
42  Layer* const thisLayer = otherToClonedMap[otherLayer];
43 
44  auto outputSlot = thisLayer->BeginOutputSlots();
45  for (auto&& otherOutputSlot : otherLayer->GetOutputSlots())
46  {
47  for (auto&& otherInputSlot : otherOutputSlot.GetConnections())
48  {
49  const Layer& otherTgtLayer = otherInputSlot->GetOwningLayer();
50  Layer* const thisTgtLayer = otherToClonedMap[&otherTgtLayer];
51 
52  InputSlot& inputSlot = thisTgtLayer->GetInputSlot(otherInputSlot->GetSlotIndex());
53  outputSlot->Connect(inputSlot);
54  }
55  outputSlot->SetTensorInfo(otherOutputSlot.GetTensorInfo());
56  ++outputSlot;
57  }
58  }
59 }

◆ Graph() [3/3]

Graph ( Graph &&  other)
inline

Definition at line 109 of file Graph.hpp.

110  {
111  *this = std::move(other);
112  }

◆ ~Graph()

~Graph ( )
inline

Definition at line 133 of file Graph.hpp.

References Graph::AddLayer(), Graph::EraseLayer(), Graph::ForEachLayer(), Graph::InsertNewLayer(), Graph::Print(), and Graph::SerializeToDot().

134  {
135  ForEachLayer([](Layer* layer)
136  {
137  delete layer;
138  });
139  }
void ForEachLayer(Func func) const
Definition: Graph.hpp:40

Member Function Documentation

◆ AddCompatibilityLayers()

void AddCompatibilityLayers ( std::map< BackendId, std::unique_ptr< class IBackendInternal >> &  backends,
TensorHandleFactoryRegistry &  registry 
)

Modifies the graph in-place, removing edges connecting layers using different compute devices, and relinking them via intermediary copy layers.

Definition at line 301 of file Graph.cpp.

References ARMNN_ASSERT, ARMNN_ASSERT_MSG, armnn::CopyToTarget, armnn::DirectCompatibility, armnn::ExportToTarget, Graph::ForEachLayer(), Layer::GetBackendId(), OutputSlot::GetConnections(), OutputSlot::GetEdgeStrategies(), TensorHandleFactoryRegistry::GetFactory(), Layer::GetName(), Layer::GetOutputSlot(), Layer::GetOutputSlots(), InputSlot::GetOwningLayer(), InputSlot::GetSlotIndex(), OutputSlot::GetTensorHandleFactoryId(), ITensorHandleFactory::LegacyFactoryId, armnn::MemCopy, armnn::MemImport, OutputSlot::SetEdgeStrategy(), OutputSlot::SetTensorHandleFactory(), and armnn::Undefined.

Referenced by Graph::GetNumLayers(), armnn::Optimize(), and TEST_SUITE().

303 {
304  // Returns true if the given layer could potentially need an intermediate copy/import layer (depending on its
305  // connections to other layers).
306  auto MayNeedCompatibilityLayer = [](const Layer& layer)
307  {
308  // All layers should have been associated with a valid compute device at this point.
309  ARMNN_ASSERT(layer.GetBackendId() != Compute::Undefined);
310  // Does not need another compatibility layer if a copy or import layer is already present.
311  return layer.GetType() != LayerType::MemCopy &&
312  layer.GetType() != LayerType::MemImport;
313  };
314 
315  auto IsCompatibilityStrategy = [](EdgeStrategy strategy)
316  {
317  return strategy == EdgeStrategy::CopyToTarget ||
318  strategy == EdgeStrategy::ExportToTarget;
319  };
320 
321  ForEachLayer([this, &backends, &registry, MayNeedCompatibilityLayer, IsCompatibilityStrategy](Layer* srcLayer)
322  {
323  ARMNN_ASSERT(srcLayer);
324 
325  if (!MayNeedCompatibilityLayer(*srcLayer))
326  {
327  // The current layer does not need copy layers, move to the next one
328  return;
329  }
330 
331  const std::vector<OutputSlot>& srcOutputSlots = srcLayer->GetOutputSlots();
332  for (unsigned int srcOutputIndex = 0; srcOutputIndex < srcOutputSlots.size(); srcOutputIndex++)
333  {
334  OutputSlot& srcOutputSlot = srcLayer->GetOutputSlot(srcOutputIndex);
335  const std::vector<InputSlot*> srcConnections = srcOutputSlot.GetConnections();
336  const std::vector<EdgeStrategy> srcEdgeStrategies = srcOutputSlot.GetEdgeStrategies();
337  for (unsigned int srcConnectionIndex = 0; srcConnectionIndex < srcConnections.size(); srcConnectionIndex++)
338  {
339  InputSlot* dstInputSlot = srcConnections[srcConnectionIndex];
340  ARMNN_ASSERT(dstInputSlot);
341 
342  EdgeStrategy strategy = srcEdgeStrategies[srcConnectionIndex];
343  ARMNN_ASSERT_MSG(strategy != EdgeStrategy::Undefined,
344  "Undefined memory strategy found while adding copy layers for compatibility");
345 
346  const Layer& dstLayer = dstInputSlot->GetOwningLayer();
347  if (MayNeedCompatibilityLayer(dstLayer) &&
348  IsCompatibilityStrategy(strategy))
349  {
350  // A copy layer is needed in between the source and destination layers.
351  // Record the operation rather than attempting to modify the graph as we go.
352  // (invalidating iterators)
353  const std::string compLayerName = fmt::format("[ {} ({}) -> {} ({}) ]",
354  srcLayer->GetName(),
355  srcOutputIndex,
356  dstLayer.GetName(),
357  dstInputSlot->GetSlotIndex());
358  Layer* compLayer = nullptr;
359  if (strategy == EdgeStrategy::CopyToTarget)
360  {
361  compLayer = InsertNewLayer<MemCopyLayer>(*dstInputSlot, compLayerName.c_str());
362  }
363  else
364  {
365  ARMNN_ASSERT_MSG(strategy == EdgeStrategy::ExportToTarget, "Invalid edge strategy found.");
366  compLayer = InsertNewLayer<MemImportLayer>(*dstInputSlot, compLayerName.c_str());
367  }
368 
369  compLayer->SetBackendId(dstLayer.GetBackendId());
370 
371  OutputSlot& compOutputSlot = compLayer->GetOutputSlot(0);
372  auto backendIt = backends.find(dstLayer.GetBackendId());
373  if (backendIt != backends.end() &&
374  backendIt->second &&
375  backendIt->second->SupportsTensorAllocatorAPI())
376  {
377  auto backend = backendIt->second.get();
378  auto tensorHandleFactoryIds = backend->GetHandleFactoryPreferences();
379  bool found = false;
380 
381  for (auto preference : tensorHandleFactoryIds)
382  {
383  auto factory = registry.GetFactory(preference);
384  if (factory)
385  {
386  auto srcPref = srcOutputSlot.GetTensorHandleFactoryId();
387  auto srcFactory = registry.GetFactory(srcPref);
388 
389  if (srcFactory)
390  {
391  bool canExportImport =
392  (factory->GetImportFlags() & srcFactory->GetExportFlags()) != 0;
393 
394  if (factory->SupportsMapUnmap() || canExportImport)
395  {
396  compOutputSlot.SetTensorHandleFactory(preference);
397  found = true;
398  break;
399  }
400  }
401  }
402  }
403 
404  if (!found)
405  {
406  compOutputSlot.SetTensorHandleFactory(ITensorHandleFactory::LegacyFactoryId);
407  }
408  }
409  else
410  {
411  compOutputSlot.SetTensorHandleFactory(ITensorHandleFactory::LegacyFactoryId);
412  }
413 
414  // The output strategy of a compatibility layer is always DirectCompatibility.
415  compOutputSlot.SetEdgeStrategy(0, EdgeStrategy::DirectCompatibility);
416 
417  // Recalculate the connection index on the previous layer as we have just inserted into it.
418  const std::vector<InputSlot*>& newSourceConnections = srcOutputSlot.GetConnections();
419  auto newSrcConnectionIndex = std::distance(newSourceConnections.begin(),
420  std::find(newSourceConnections.begin(),
421  newSourceConnections.end(),
422  &compLayer->GetInputSlot(0)));
423 
424  // The input strategy of a compatibility layer is always DirectCompatibilty.
425  srcOutputSlot.SetEdgeStrategy(armnn::numeric_cast<unsigned int>(newSrcConnectionIndex),
426  EdgeStrategy::DirectCompatibility);
427  }
428  }
429  }
430  });
431 }
No strategy has been defined. Used internally to verify integrity of optimizations.
Source backend's tensor data can be exported to the destination backend's tensor without copy...
Destination backend can work directly with tensors on source backend.
#define ARMNN_ASSERT_MSG(COND, MSG)
Definition: Assert.hpp:15
#define ARMNN_ASSERT(COND)
Definition: Assert.hpp:14
static const FactoryId LegacyFactoryId

◆ AddLayer()

LayerT * AddLayer ( Args &&...  args)
inline

Adds a new layer of type LayerT, constructed with the passed arguments, to the graph.

Definition at line 420 of file Graph.hpp.

References armnn::Input, armnn::LayerAdded, and armnn::Output.

Referenced by ArgMinMaxInferOutputShapeImpl(), BatchToSpaceInferOutputShapeTest(), Layer::CloneBase(), Convolution2dInferOutputShapeTest(), Convolution3dInferOutputShapeTest(), CreatePreluLayerHelper(), CreateStackLayerHelper(), DepthwiseConvolution2dInferOutputShapeTest(), Pooling3dInferOutputShapeTest(), PreluInferOutputShapeImpl(), QLstmInferOutputShapeImpl(), QuantizedLstmInferOutputShapeImpl(), SpaceToDepthInferOutputShapeTest(), StackInferOutputShapeImpl(), TEST_SUITE(), TransposeConvolution2dInferOutputShapeTest(), and Graph::~Graph().

421 {
422  m_LayersInOrder = m_LayersInOrder &&
423  ((LayerEnumOf<LayerT>() == LayerType::Input) || (LayerEnumOf<LayerT>() == LayerType::Output));
424  LayerT* const layer = new LayerInGraph<LayerT>(*this, std::forward<Args>(args)...);
425 
426  layer->SetShapeInferenceMethod(m_ShapeInferenceMethod);
427 
428  NotifyObservables(GraphEvent::LayerAdded, layer);
429 
430  return layer;
431 }

◆ AllocateDynamicBuffers()

Status AllocateDynamicBuffers ( )

Allocates memory for all tensors under the output tensor handlers of each layer.

Definition at line 179 of file Graph.cpp.

References ITensorHandle::Allocate(), ARMNN_ASSERT, ARMNN_SCOPED_PROFILING_EVENT, armnn::Constant, ITensorHandle::GetParent(), ITensorHandle::Manage(), armnn::Success, and armnn::Undefined.

Referenced by Graph::GetNumLayers(), and TEST_SUITE().

180 {
181  // Layers must be sorted in topological order
182  ARMNN_ASSERT(m_LayersInOrder);
183  ARMNN_SCOPED_PROFILING_EVENT(Compute::Undefined, "LoadNetwork_AllocateDynamicBuffers");
184 
185  std::unordered_set<const ITensorHandle*> preallocatedTensors;
186  std::unordered_map<const ITensorHandle*, unsigned int> handleReferenceCounts;
187 
188  // Finds the first TensorHandle ancestor of a SubTensorHandle. If the ITensorHandle provided
189  // is a TensorHandle, the function just returns it
190  auto TraceSubTensorHandleAncestry = [](ITensorHandle* const subTensorHandle)
191  {
192  ITensorHandle* ancestor = subTensorHandle;
193  while (ancestor && ancestor->GetParent())
194  {
195  ancestor = ancestor->GetParent();
196  }
197  return ancestor;
198  };
199 
200  // Checks whether a TensorHandle has been pre-allocated
201  auto IsPreallocated = [&](ITensorHandle* const tensorHandle)
202  {
203  return tensorHandle && preallocatedTensors.find(tensorHandle) != preallocatedTensors.end();
204  };
205 
206  // Constant tensor handles need to last from the beginning of execution till the end,
207  // therefore we pre-allocate them upfront
208  for (auto&& layer : m_Layers)
209  {
210  if (layer->GetType() == LayerType::Constant)
211  {
212  for (auto&& slot = layer->BeginOutputSlots(); slot != layer->EndOutputSlots(); ++slot)
213  {
214  ITensorHandle *tensorHandle = TraceSubTensorHandleAncestry(slot->GetOutputHandler().GetData());
215 
216  if (tensorHandle && !IsPreallocated(tensorHandle))
217  {
218  tensorHandle->Allocate();
219  preallocatedTensors.insert(tensorHandle);
220  }
221  }
222  }
223  }
224 
225  // Iterate over the network in topological order
226  for (auto&& layer : m_Layers)
227  {
228  // Count the amount of times each output slot references a certain buffer (ITensorHandle).
229  // The first time we encounter a new tensor handle, we start managing its lifetime.
230  for (auto&& slot = layer->BeginOutputSlots(); slot != layer->EndOutputSlots(); ++slot)
231  {
232  ITensorHandle *tensorHandle = TraceSubTensorHandleAncestry(slot->GetOutputHandler().GetData());
233 
234  if (tensorHandle && !IsPreallocated(tensorHandle))
235  {
236  unsigned int numConnections = slot->GetNumConnections();
237  if (handleReferenceCounts.find(tensorHandle) == handleReferenceCounts.end())
238  {
239  handleReferenceCounts[tensorHandle] = numConnections;
240  tensorHandle->Manage();
241  if (handleReferenceCounts[tensorHandle] == 0u)
242  {
243  // if nobody consumes this tensor we call Allocate()
244  tensorHandle->Allocate();
245  }
246  }
247  else
248  {
249  handleReferenceCounts[tensorHandle] += numConnections;
250  }
251  }
252  }
253 
254  // Loop through the input slots in the same layer and decrement the reference counter associated
255  // to each tensor handle we encounter. Once it reaches zero, we end the lifetime of the tensor handle
256  for (auto&& slot = layer->BeginInputSlots(); slot != layer->EndInputSlots(); ++slot)
257  {
258  ITensorHandle *tensorHandle = TraceSubTensorHandleAncestry(
259  slot->GetConnectedOutputSlot()->GetOutputHandler().GetData());
260 
261  if (tensorHandle && !IsPreallocated(tensorHandle))
262  {
263  --handleReferenceCounts[tensorHandle];
264 
265  if (handleReferenceCounts[tensorHandle] == 0u)
266  {
267  // Stop managing lifetime of tensor handle
268  tensorHandle->Allocate();
269  handleReferenceCounts.erase(tensorHandle);
270  }
271  }
272  }
273  }
274 
275  return Status::Success;
276 }
#define ARMNN_SCOPED_PROFILING_EVENT(backendId, name)
Definition: Profiling.hpp:220

◆ AttachObservable()

void AttachObservable ( IGraphObservable *const  observable,
GraphEvent  notifyOnEvent 
)
inline

Definition at line 217 of file Graph.hpp.

Referenced by GraphObservable< Layer *>::GraphObservable().

217  {
218  m_Views[notifyOnEvent].emplace_back(observable);
219  }

◆ begin() [1/2]

Iterator begin ( )
inline

Returns iterator pointing to the beginning of the list. Lowercase for range-based for loops.

Definition at line 167 of file Graph.hpp.

Referenced by armnn::Optimize(), Optimizer::Pass(), and TEST_SUITE().

167 { return m_Layers.begin(); }

◆ begin() [2/2]

ConstIterator begin ( ) const
inline

Returns const iterator pointing to the beginning of the list. Lowercase for range-based for loops.

Definition at line 172 of file Graph.hpp.

172 { return {m_Layers.begin(), &(PtrCast<const Layer>)}; }

◆ cbegin()

ConstIterator cbegin ( ) const
inline

Returns const iterator pointing to the beginning of the list. Lowercase for range-based for loops.

Definition at line 177 of file Graph.hpp.

References Graph::InputLayersAccessor::begin().

Referenced by TEST_SUITE().

177 { return begin(); }
Iterator begin()
Returns iterator pointing to the beginning of the list. Lowercase for range-based for loops...
Definition: Graph.hpp:167

◆ cend()

ConstIterator cend ( ) const
inline

Returns const iterator pointing to the end of the list. Lowercase for range-based for loops.

Definition at line 179 of file Graph.hpp.

References Graph::InputLayersAccessor::end().

Referenced by TEST_SUITE().

179 { return end(); }
Iterator end()
Returns iterator pointing to the end of the list. Lowercase for range-based for loops.
Definition: Graph.hpp:169

◆ DetachObservable()

void DetachObservable ( IGraphObservable *const  observable,
GraphEvent  notifyOnEvent 
)
inline

Definition at line 221 of file Graph.hpp.

References Graph::GetPosInGraph(), Graph::GetProfiler(), armnn::Input, and armnn::Output.

Referenced by GraphObservable< Layer *>::~GraphObservable().

221  {
222  m_Views[notifyOnEvent].remove(observable);
223  }

◆ end() [1/2]

Iterator end ( )
inline

Returns iterator pointing to the end of the list. Lowercase for range-based for loops.

Definition at line 169 of file Graph.hpp.

Referenced by armnn::Optimize(), Optimizer::Pass(), and TEST_SUITE().

169 { return m_Layers.end(); }

◆ end() [2/2]

ConstIterator end ( ) const
inline

Returns const iterator pointing to the end of the list. Lowercase for range-based for loops.

Definition at line 174 of file Graph.hpp.

174 { return {m_Layers.end(), &(PtrCast<const Layer>)}; }

◆ EraseLayer() [1/2]

void EraseLayer ( Iterator  pos)
inline

Deletes the layer at the specified position.

Definition at line 467 of file Graph.hpp.

◆ EraseLayer() [2/2]

void EraseLayer ( LayerT *&  layer)
inline

Deletes the layer.

Sets layer to nullptr on return. Templated to support pointers to any layer type.

Definition at line 475 of file Graph.hpp.

References ARMNN_ASSERT, Graph::EraseLayer(), and Graph::GetPosInGraph().

476 {
477  ARMNN_ASSERT(layer != nullptr);
478  EraseLayer(GetPosInGraph(*layer));
479  layer = nullptr;
480 }
void EraseLayer(Iterator pos)
Deletes the layer at the specified position.
Definition: Graph.hpp:467
Iterator GetPosInGraph(Layer &layer)
Gets the position of a layer in the graph.
Definition: Graph.hpp:412

◆ ForEachLayer()

void ForEachLayer ( Func  func) const
inline

Definition at line 40 of file Graph.hpp.

Referenced by Graph::AddCompatibilityLayers(), Graph::operator=(), armnn::SelectTensorHandleStrategy(), TEST_SUITE(), and Graph::~Graph().

41  {
42  for (auto it = m_Layers.begin(); it != m_Layers.end(); )
43  {
44  auto next = std::next(it);
45  func(*it);
46  it = next;
47  }
48  }

◆ GetInputLayers()

InputLayersAccessor GetInputLayers ( ) const
inline

Returns a wrapper object with begin(), end() methods to iterate over the input layers in a range-based for loop.

Definition at line 190 of file Graph.hpp.

References Graph::InputLayersAccessor::InputLayersAccessor().

Referenced by LoadedNetwork::EnqueueWorkload(), and LoadedNetwork::ImportInputs().

190 { return InputLayersAccessor(*this); }

◆ GetNumInputs()

size_t GetNumInputs ( ) const
inline

Definition at line 185 of file Graph.hpp.

Referenced by Graph::InputLayersAccessor::end(), LoadedNetwork::EnqueueWorkload(), LoadedNetwork::Execute(), and LoadedNetwork::MakeLoadedNetwork().

185 { return m_InputIds.size(); }

◆ GetNumLayers()

size_t GetNumLayers ( ) const
inline

◆ GetNumOutputs()

size_t GetNumOutputs ( ) const
inline

Definition at line 186 of file Graph.hpp.

Referenced by Graph::OutputLayersAccessor::begin(), LoadedNetwork::EnqueueWorkload(), LoadedNetwork::Execute(), and LoadedNetwork::MakeLoadedNetwork().

186 { return m_OutputIds.size(); }

◆ GetOutputLayers()

OutputLayersAccessor GetOutputLayers ( ) const
inline

Returns a wrapper object with begin(), end() methods to iterate over the output layers in a range-based for loop.

Definition at line 194 of file Graph.hpp.

Referenced by LoadedNetwork::EnqueueWorkload(), and LoadedNetwork::ImportOutputs().

194 { return OutputLayersAccessor(*this); }

◆ GetPosInGraph()

Graph::Iterator GetPosInGraph ( Layer &  layer)
inline

Gets the position of a layer in the graph.

Definition at line 412 of file Graph.hpp.

References ARMNN_ASSERT.

Referenced by Graph::DetachObservable(), Graph::EraseLayer(), Graph::InsertNewLayer(), and Optimizer::Pass().

413 {
414  auto it = m_PosInGraphMap.find(&layer);
415  ARMNN_ASSERT(it != m_PosInGraphMap.end());
416  return it->second;
417 }

◆ GetProfiler()

const std::shared_ptr< IProfiler > & GetProfiler ( ) const

Definition at line 643 of file Graph.cpp.

Referenced by Graph::DetachObservable().

644 {
645  return m_Profiler;
646 }

◆ InferTensorInfos()

void InferTensorInfos ( )

Definition at line 560 of file Graph.cpp.

References armnn::Convolution3d, armnn::FullyConnected, InputSlot::GetConnectedOutputSlot(), Layer::GetInputSlot(), armnn::GetLayerTypeAsCString(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetType(), IOutputSlot::IsTensorInfoSet(), Graph::TopologicalSort(), and armnn::ValidateOnly.

Referenced by Graph::GetNumLayers(), armnn::Optimize(), PreluValidateTensorShapesFromInputsMatchTest(), PreluValidateTensorShapesFromInputsNoMatchTest(), StackValidateTensorShapesFromInputsMatchTest(), StackValidateTensorShapesFromInputsNoMatchTest(), and TEST_SUITE().

561 {
562  for (auto&& layer : TopologicalSort())
563  {
564  for (auto&& input : layer->GetInputSlots())
565  {
566  const IOutputSlot* source = input.GetConnectedOutputSlot();
567  if (source == NULL)
568  {
569  // Throws exception due to a layer input not being connected to an output slot.
570  // Verifies input slot weights and bias are set for FullyConnected layers.
571  ConstructErrorMessageForUnconnectedInputs(layer, input.GetSlotIndex());
572  }
573 
574  if (!source->IsTensorInfoSet())
575  {
576  std::ostringstream message;
577  message << "Output slot TensorInfo not set on "
578  << GetLayerTypeAsCString(layer->GetType())
579  << " layer "
580  << std::quoted(layer->GetName());
581  throw LayerValidationException(message.str());
582  }
583  }
584 
585  if (layer->m_ShapeInferenceMethod == ShapeInferenceMethod::ValidateOnly)
586  {
587  layer->ValidateTensorShapesFromInputs();
588  }
589  }
590 }
Graph & TopologicalSort()
Sorts layers in topological order and returns this.
Definition: Graph.hpp:182
const char * GetLayerTypeAsCString(LayerType type)

◆ InsertNewLayer() [1/2]

LayerT * InsertNewLayer ( InputSlot insertBefore,
Args &&...  args 
)
inline

Inserts a new layer between the output slot currently connected to insertBefore and insertBefore itself.

Definition at line 434 of file Graph.hpp.

References InputSlot::GetConnectedOutputSlot(), InputSlot::GetOwningLayer(), OutputSlot::GetOwningLayer(), Graph::GetPosInGraph(), InputSlot::Insert(), and armnn::LayerAdded.

Referenced by armnn::optimizations::pad_fold::FoldPadIntoLayer2dImpl(), armnn::InsertConvertBf16ToFp32LayersBefore(), armnn::InsertConvertFp16ToFp32LayersBefore(), armnn::InsertConvertFp32ToBf16LayersAfter(), armnn::InsertConvertFp32ToBf16LayersBefore(), armnn::InsertConvertFp32ToFp16LayersAfter(), armnn::InsertDebugLayerAfter(), PermuteAsReshapeImpl::Run(), TransposeAsReshapeImpl::Run(), OptimizeConsecutiveReshapesImpl::Run(), MoveTransposeUpImpl::Run(), MovePermuteUpImpl::Run(), FuseBatchNorm< ConvLayer, ArmnnType, T >::Run(), AddBroadcastReshapeLayerImpl::Run(), TEST_SUITE(), and Graph::~Graph().

435 {
436  // Insert after the parent if any, or before the child otherwise, so the topological order is kept.
437  OutputSlot* parentOut = insertBefore.GetConnectedOutputSlot();
438  const Iterator pos = (parentOut != nullptr)
439  ? std::next(GetPosInGraph(parentOut->GetOwningLayer()))
440  : GetPosInGraph(insertBefore.GetOwningLayer());
441  LayerT* const layer = new LayerInGraph<LayerT>(*this, pos, std::forward<Args>(args)...);
442  insertBefore.Insert(*layer);
443 
444  NotifyObservables(GraphEvent::LayerAdded, layer);
445 
446  return layer;
447 }
LayerList::const_iterator Iterator
Definition: Graph.hpp:53

◆ InsertNewLayer() [2/2]

LayerT * InsertNewLayer ( OutputSlot insertAfter,
Args &&...  args 
)
inline

Inserts a new layer between insertAfter and the input slot(s) currently connected to it.

Definition at line 450 of file Graph.hpp.

References ARMNN_ASSERT, OutputSlot::Connect(), OutputSlot::GetOwningLayer(), Graph::GetPosInGraph(), armnn::LayerAdded, and OutputSlot::MoveAllConnections().

451 {
452  Layer& owningLayer = insertAfter.GetOwningLayer();
453 
454  const Iterator pos = std::next(GetPosInGraph(owningLayer));
455  LayerT* const layer = new LayerInGraph<LayerT>(*this, pos, std::forward<Args>(args)...);
456 
457  ARMNN_ASSERT(layer->GetNumInputSlots() == 1);
458 
459  insertAfter.MoveAllConnections(layer->GetOutputSlot());
460  insertAfter.Connect(layer->GetInputSlot(0));
461 
462  NotifyObservables(GraphEvent::LayerAdded, layer);
463 
464  return layer;
465 }

◆ operator=() [1/2]

Graph& operator= ( const Graph &  other)
delete

Referenced by Graph::Graph().

◆ operator=() [2/2]

Graph& operator= ( Graph &&  other)
inline

Definition at line 114 of file Graph.hpp.

References ARMNN_ASSERT, Graph::ForEachLayer(), and Layer::Reparent().

115  {
116  m_InputIds = std::move(other.m_InputIds);
117  m_OutputIds = std::move(other.m_OutputIds);
118  m_LayersInOrder = std::move(other.m_LayersInOrder);
119  m_Views = std::move(other.m_Views);
120  m_Profiler = std::move(other.m_Profiler);
121 
122  other.ForEachLayer([this](Layer* otherLayer)
123  {
124  otherLayer->Reparent(*this, m_Layers.end());
125  });
126 
127  ARMNN_ASSERT(other.m_PosInGraphMap.empty());
128  ARMNN_ASSERT(other.m_Layers.empty());
129 
130  return *this;
131  }

◆ Print()

Status Print ( ) const

Definition at line 61 of file Graph.cpp.

References ARMNN_LOG, armnn::GetLayerTypeAsCString(), Layer::GetOutputSlots(), armnn::info, armnn::Success, and Graph::TopologicalSort().

Referenced by CheckOrder(), and Graph::~Graph().

62 {
63  if (m_Layers.empty())
64  {
65  ARMNN_LOG(info) << "\n Graph is empty.\n";
66  return Status::Success;
67  }
68  ARMNN_LOG(info) << "\n";
69  ARMNN_LOG(info) << "Walking Pattern: \n";
70 
71  for (auto&& it : TopologicalSort())
72  {
73  auto numInputSlots = it->GetNumInputSlots();
74  auto numOutputSlots = it->GetNumOutputSlots();
75 
76  ARMNN_LOG(info) << it->GetName() << ":" << GetLayerTypeAsCString(it->GetType())
77  << ":" << it->GetBackendId().Get()
78  << " has " << numInputSlots << " input slots"
79  << " and " << numOutputSlots << " output slots.";
80 
81  for (auto i : it->GetInputSlots())
82  {
83  std::ostringstream message;
84  auto inputTensorShape = i.GetConnectedOutputSlot()->GetTensorInfo().GetShape();
85  unsigned int numDims = inputTensorShape.GetNumDimensions();
86 
87  message << "The input slot has shape [ ";
88  for (unsigned int dim=0; dim < numDims; dim++)
89  {
90  message << inputTensorShape[dim] << ",";
91  }
92  message << " ]";
93  ARMNN_LOG(info) << message.str();
94  }
95 
96  for (unsigned int i = 0; i < it->GetNumOutputSlots(); i++)
97  {
98  const armnn::Layer *layer = it;
99  std::ostringstream message;
100  auto outputTensorShape = layer->GetOutputSlots()[i].GetTensorInfo().GetShape();
101  unsigned int numDims = outputTensorShape.GetNumDimensions();
102 
103  message << "The output slot has shape [ ";
104  for (unsigned int dim=0; dim < numDims; dim++)
105  {
106  message << outputTensorShape[dim] << ",";
107  }
108  message << " ]";
109  ARMNN_LOG(info) << message.str();
110  }
111  ARMNN_LOG(info) << "\n";
112  }
113  ARMNN_LOG(info) << "\n\n";
114 
115  return Status::Success;
116 }

◆ PtrCast()

static LayerType* PtrCast ( Layer *const  layer)
inline static

Definition at line 34 of file Graph.hpp.

35  {
36  return PolymorphicDowncast<LayerType*>(layer);
37  }

◆ SerializeToDot()

Status SerializeToDot ( std::ostream &  stream)

Definition at line 118 of file Graph.cpp.

References DotAttributeSet::AddAttribute(), NodeContent::AddContent(), armnn::Failure, DotEdge::GetAttributeSet(), DotDefaults::GetAttributeSet(), DotNode::GetContents(), Layer::GetGuid(), armnn::GetLayerTypeAsCString(), OutputSlot::GetOwningLayer(), TensorInfo::GetShape(), OutputSlot::GetTensorInfo(), and armnn::Success.

Referenced by TEST_SUITE(), and Graph::~Graph().

119 {
120  {
121  DotGraph graph(stream, "Optimized");
122 
123  {
124  // Default node attributes:
125  DotDefaults nodes(stream, "node");
126  nodes.GetAttributeSet()
127  .AddAttribute("shape", "record");
128  }
129 
130  {
131  // Default edge attributes:
132  DotDefaults edges(stream, "edge");
133  edges.GetAttributeSet()
134  .AddAttribute("fontsize", 8)
135  .AddAttribute("fontcolor", "blue")
136  .AddAttribute("fontname", "arial-bold");
137  }
138 
139  // First declares the nodes.
140  for (auto&& layer : m_Layers)
141  {
142  DotNode node(stream, layer->GetGuid(), GetLayerTypeAsCString(layer->GetType()));
143  // Extracts the layer parameters.
144  ParameterStringifyFunction extractParams = [&node](const std::string & name, const std::string & value){
145  node.GetContents().AddContent(name + " : " + value);
146  };
147  layer->SerializeLayerParameters(extractParams);
148  }
149 
150  // Second declares the edges.
151  for (auto&& layer : m_Layers)
152  {
153  LayerGuid toId = layer->GetGuid();
154 
155  for (unsigned int i=0;i<layer->GetNumInputSlots(); i++)
156  {
157  OutputSlot* outputSlot = static_cast<OutputSlot*>(layer->GetInputSlot(i).GetConnection());
158  LayerGuid fromId = outputSlot->GetOwningLayer().GetGuid();
159  DotEdge edge(stream, fromId, toId);
160 
161  // Now print the tensor shape on the edge.
162  {
163  // Constructs the label attribute with HTML markup.
164  std::stringstream ss;
165  ss << "< " << outputSlot->GetTensorInfo().GetShape() << " >";
166  edge.GetAttributeSet().AddAttribute("label", ss);
167  }
168  }
169  }
170  }
171 
172  if (stream.bad())
173  {
174  return Status::Failure;
175  }
176  return Status::Success;
177 }

◆ SubstituteSubgraph() [1/2]

void SubstituteSubgraph ( SubgraphView &  subgraph,
IConnectableLayer *  substituteLayer 
)

Substitutes the given sub-graph with either a new layer or a new sub-graph.

In either case, the given layer or all the layers in the given sub-graph must belong to this graph.

Definition at line 433 of file Graph.cpp.

References ARMNN_ASSERT.

Referenced by armnn::ApplyBackendOptimizations(), Graph::GetNumLayers(), and TEST_SUITE().

434 {
435  ARMNN_ASSERT(substituteLayer != nullptr);
436 
437  // Create a new sub-graph with only the given layer, using
438  // the given sub-graph as a reference of which parent graph to use
439  SubgraphView substituteSubgraph(substituteLayer);
440 
441  SubstituteSubgraph(subgraph, substituteSubgraph);
442 }

◆ SubstituteSubgraph() [2/2]

void SubstituteSubgraph ( SubgraphView &  subgraph,
const SubgraphView &  substituteSubgraph 
)

Definition at line 444 of file Graph.cpp.

References ARMNN_ASSERT, ARMNN_ASSERT_MSG, SubgraphView::Clear(), IOutputSlot::Connect(), IOutputSlot::Disconnect(), Graph::EraseLayer(), SubgraphView::ForEachIConnectableLayer(), IInputSlot::GetConnection(), SubgraphView::GetIConnectableLayers(), SubgraphView::GetIInputSlots(), SubgraphView::GetIOutputSlots(), armnn::IgnoreUnused(), armnn::numeric_cast(), and Graph::TopologicalSort().

445 {
446  // Look through each layer in the new subgraph and add any that are not already a member of this graph
447  substituteSubgraph.ForEachIConnectableLayer([this](IConnectableLayer* iConnectableLayer)
448  {
449  if (std::find(std::begin(m_Layers),
450  std::end(m_Layers),
451  iConnectableLayer) == std::end(m_Layers))
452  {
453  auto layer = PolymorphicDowncast<Layer*>(iConnectableLayer);
454  layer->Reparent(*this, m_Layers.end());
455  m_LayersInOrder = false;
456  }
457  });
458 
459  ReplaceSubgraphConnections(subgraph, substituteSubgraph);
460  EraseSubgraphLayers(subgraph);
461  TopologicalSort();
462 }

◆ TopologicalSort() [1/2]

Graph& TopologicalSort ( )
inline

Sorts layers in topological order and returns this graph.

Definition at line 182 of file Graph.hpp.

References Graph::TopologicalSort().

Referenced by CheckOrder(), LoadedNetwork::ImportInputs(), LoadedNetwork::ImportOutputs(), Graph::InferTensorInfos(), LoadedNetwork::MakeLoadedNetwork(), Optimizer::Pass(), Graph::Print(), LoadedNetwork::RegisterDebugCallback(), LoadedNetwork::SendNetworkStructure(), Graph::SubstituteSubgraph(), TEST_SUITE(), Graph::TopologicalSort(), and Graph::VerifyConstantLayerSetTensorInfo().

182 { const_cast<const Graph*>(this)->TopologicalSort(); return *this; }

◆ TopologicalSort() [2/2]

const Graph & TopologicalSort ( ) const

Definition at line 278 of file Graph.cpp.

279 {
280  if (!m_LayersInOrder)
281  {
282  // Resets layer order.
283  for (auto&& it : m_Layers)
284  {
285  it->ResetPriority();
286  }
287 
288  auto compareLayerPriority = [](const LayerList::value_type& layerA, const LayerList::value_type& layerB)
289  {
290  return layerA->GetPriority() < layerB->GetPriority();
291  };
292 
293  m_Layers.sort(compareLayerPriority);
294 
295  m_LayersInOrder = true;
296  }
297 
298  return *this;
299 }

◆ VerifyConstantLayerSetTensorInfo()

void VerifyConstantLayerSetTensorInfo ( ) const

For each ConstantLayer in Graph, ensures TensorInfo is set on all output slots.

LayerValidationException thrown if no TensorInfo is set.

Exceptions
LayerValidationException	if TensorInfo is not set on an output slot of a Constant layer.

Definition at line 537 of file Graph.cpp.

References armnn::Constant, armnn::GetLayerTypeAsCString(), and Graph::TopologicalSort().

Referenced by Graph::GetNumLayers().

538 {
539  for (auto&& layer : TopologicalSort())
540  {
541  if (layer->GetType() == armnn::LayerType::Constant)
542  {
543  for (auto&& output: layer->GetOutputSlots())
544  {
545  if (!output.IsTensorInfoSet())
546  {
547  std::ostringstream message;
548  message << "Output slot TensorInfo not set on "
549  << GetLayerTypeAsCString(layer->GetType())
550  << " layer \""
551  << layer->GetName()
552  << "\"";
553  throw LayerValidationException(message.str());
554  }
555  }
556  }
557  }
558 }

Friends And Related Function Documentation

◆ SubgraphView

friend class SubgraphView
friend

Definition at line 284 of file Graph.hpp.


The documentation for this class was generated from the following files: