ArmNN
 22.02
QuantizedLstmLayer Class Reference

This layer represents a QuantizedLstm operation. More...

#include <QuantizedLstmLayer.hpp>

Inheritance diagram for QuantizedLstmLayer:
QuantizedLstmLayer -> Layer -> IConnectableLayer

Public Member Functions

virtual std::unique_ptr< IWorkload > CreateWorkload (const IWorkloadFactory &factory) const override
 Makes a workload for the QuantizedLstm type. More...
 
QuantizedLstmLayer * Clone (Graph &graph) const override
 Creates a dynamically-allocated copy of this layer. More...
 
void ValidateTensorShapesFromInputs () override
 Check if the input tensor shape(s) will lead to a valid configuration of QuantizedLstmLayer. More...
 
std::vector< TensorShape > InferOutputShapes (const std::vector< TensorShape > &inputShapes) const override
 By default returns inputShapes if the number of inputs is equal to the number of outputs; otherwise infers the output shapes from the given input shapes and layer properties. More...
 
ARMNN_NO_DEPRECATE_WARN_BEGIN void Accept (ILayerVisitor &visitor) const override
 
ARMNN_NO_DEPRECATE_WARN_END void ExecuteStrategy (IStrategy &strategy) const override
 Apply a visitor to this layer. More...
 
- Public Member Functions inherited from Layer
 Layer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, const char *name)
 
 Layer (unsigned int numInputSlots, unsigned int numOutputSlots, LayerType type, DataLayout layout, const char *name)
 
const std::string & GetNameStr () const
 
const OutputHandler & GetOutputHandler (unsigned int i=0) const
 
OutputHandler & GetOutputHandler (unsigned int i=0)
 
ShapeInferenceMethod GetShapeInferenceMethod () const
 
const std::vector< InputSlot > & GetInputSlots () const
 
const std::vector< OutputSlot > & GetOutputSlots () const
 
std::vector< InputSlot >::iterator BeginInputSlots ()
 
std::vector< InputSlot >::iterator EndInputSlots ()
 
std::vector< OutputSlot >::iterator BeginOutputSlots ()
 
std::vector< OutputSlot >::iterator EndOutputSlots ()
 
bool IsOutputUnconnected ()
 
void ResetPriority () const
 
LayerPriority GetPriority () const
 
LayerType GetType () const override
 Returns the armnn::LayerType of this layer. More...
 
DataType GetDataType () const
 
const BackendId & GetBackendId () const
 
void SetBackendId (const BackendId &id)
 
virtual void CreateTensorHandles (const TensorHandleFactoryRegistry &registry, const IWorkloadFactory &factory, const bool IsMemoryManaged=true)
 
void VerifyLayerConnections (unsigned int expectedConnections, const CheckLocation &location) const
 
virtual void SerializeLayerParameters (ParameterStringifyFunction &fn) const
 Helper to serialize the layer parameters to string. More...
 
virtual void ReleaseConstantData ()
 
template<typename Op >
void OperateOnConstantTensors (Op op)
 
const char * GetName () const override
 Returns the name of the layer. More...
 
unsigned int GetNumInputSlots () const override
 Returns the number of connectable input slots. More...
 
unsigned int GetNumOutputSlots () const override
 Returns the number of connectable output slots. More...
 
const InputSlot & GetInputSlot (unsigned int index) const override
 Get a const input slot handle by slot index. More...
 
InputSlot & GetInputSlot (unsigned int index) override
 Get the input slot handle by slot index. More...
 
const OutputSlot & GetOutputSlot (unsigned int index=0) const override
 Get the const output slot handle by slot index. More...
 
OutputSlot & GetOutputSlot (unsigned int index=0) override
 Get the output slot handle by slot index. More...
 
void SetGuid (LayerGuid guid)
 
LayerGuid GetGuid () const final
 Returns the unique id of the layer. More...
 
void AddRelatedLayerName (const std::string layerName)
 
const std::list< std::string > & GetRelatedLayerNames ()
 
virtual void Reparent (Graph &dest, std::list< Layer *>::const_iterator iterator)=0
 
void BackendSelectionHint (Optional< BackendId > backend) final
 Provide a hint for the optimizer as to which backend to prefer for this layer. More...
 
Optional< BackendId > GetBackendHint () const
 
void SetShapeInferenceMethod (ShapeInferenceMethod shapeInferenceMethod)
 
template<typename T >
std::shared_ptr< T > GetAdditionalInformation () const
 
void SetAdditionalInfoForObject (const AdditionalInfoObjectPtr &additionalInfo)
 
virtual const BaseDescriptor & GetParameters () const override
 If the layer has a descriptor return it. More...
 
- Public Member Functions inherited from IConnectableLayer
ARMNN_NO_DEPRECATE_WARN_BEGIN ARMNN_DEPRECATED_MSG_REMOVAL_DATE ("Accept is deprecated. The ILayerVisitor that works in conjunction with this " "Accept function is deprecated. Use IStrategy in combination with " "ExecuteStrategy instead, which is an ABI/API stable version of the " "visitor pattern.", "22.05") virtual void Accept(ILayerVisitor &visitor) const =0
 Apply a visitor to this layer. More...
 

Public Attributes

QuantizedLstmParameters m_QuantizedLstmParameters
 

Protected Member Functions

 QuantizedLstmLayer (const char *name)
 Constructor to create a QuantizedLstmLayer. More...
 
 ~QuantizedLstmLayer ()=default
 Default destructor. More...
 
Layer::ConstantTensors GetConstantTensorsByRef () override
 Retrieve the handles to the constant values stored by the layer. More...
 
- Protected Member Functions inherited from Layer
virtual ~Layer ()=default
 
template<typename QueueDescriptor >
void CollectQueueDescriptorInputs (QueueDescriptor &descriptor, WorkloadInfo &info) const
 
template<typename QueueDescriptor >
void CollectQueueDescriptorOutputs (QueueDescriptor &descriptor, WorkloadInfo &info) const
 
void ValidateAndCopyShape (const TensorShape &outputShape, const TensorShape &inferredShape, const ShapeInferenceMethod shapeInferenceMethod, const std::string &layerName, const unsigned int outputSlotIndex=0)
 
void VerifyShapeInferenceType (const TensorShape &outputShape, ShapeInferenceMethod shapeInferenceMethod)
 
template<typename QueueDescriptor >
WorkloadInfo PrepInfoAndDesc (QueueDescriptor &descriptor) const
 Helper function to reduce duplication in *LayerCreateWorkload. More...
 
template<typename LayerType , typename ... Params>
LayerType * CloneBase (Graph &graph, Params &&... params) const
 
void SetAdditionalInfo (QueueDescriptor &descriptor) const
 
- Protected Member Functions inherited from IConnectableLayer
 ~IConnectableLayer ()
 Objects are not deletable via the handle. More...
 

Additional Inherited Members

- Public Types inherited from IConnectableLayer
using ConstantTensors = std::vector< std::reference_wrapper< std::shared_ptr< ConstTensorHandle > >>
 
- Protected Attributes inherited from Layer
AdditionalInfoObjectPtr m_AdditionalInfoObject
 
std::vector< OutputHandler > m_OutputHandlers
 
ShapeInferenceMethod m_ShapeInferenceMethod
 

Detailed Description

This layer represents a QuantizedLstm operation.

Definition at line 45 of file QuantizedLstmLayer.hpp.
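For orientation, the following is a minimal, non-authoritative sketch of how a network containing this layer is typically assembled through the public API. It assumes INetwork::AddQuantizedLstmLayer together with the armnn/ArmNN.hpp and armnn/QuantizedLstmParams.hpp headers; the sizes, scales and constant values are invented for illustration (a real model supplies distinct trained data for each of the twelve parameters), and the trailing true flag on TensorInfo marks the tensor as constant, which newer releases require for ConstTensor.

    #include <armnn/ArmNN.hpp>
    #include <armnn/QuantizedLstmParams.hpp> // QuantizedLstmInputParams (header path assumed)
    #include <vector>

    int main()
    {
        using namespace armnn;

        const unsigned int inputSize  = 2;
        const unsigned int outputSize = 4;

        // Backing data for the constant tensors. All twelve parameters are
        // mandatory; the same dummy weights/biases are reused here for brevity.
        std::vector<uint8_t> inputWeightsData(outputSize * inputSize, 1);
        std::vector<uint8_t> recurrentWeightsData(outputSize * outputSize, 1);
        std::vector<int32_t> biasData(outputSize, 0);

        TensorInfo inputWeightsInfo(TensorShape({outputSize, inputSize}), DataType::QAsymmU8, 0.01f, 0, true);
        TensorInfo recurrentWeightsInfo(TensorShape({outputSize, outputSize}), DataType::QAsymmU8, 0.01f, 0, true);
        TensorInfo biasInfo(TensorShape({outputSize}), DataType::Signed32, 0.0001f, 0, true);

        ConstTensor inputWeights(inputWeightsInfo, inputWeightsData);
        ConstTensor recurrentWeights(recurrentWeightsInfo, recurrentWeightsData);
        ConstTensor bias(biasInfo, biasData);

        QuantizedLstmInputParams params;
        params.m_InputToInputWeights      = &inputWeights;
        params.m_InputToForgetWeights     = &inputWeights;
        params.m_InputToCellWeights       = &inputWeights;
        params.m_InputToOutputWeights     = &inputWeights;
        params.m_RecurrentToInputWeights  = &recurrentWeights;
        params.m_RecurrentToForgetWeights = &recurrentWeights;
        params.m_RecurrentToCellWeights   = &recurrentWeights;
        params.m_RecurrentToOutputWeights = &recurrentWeights;
        params.m_InputGateBias            = &bias;
        params.m_ForgetGateBias           = &bias;
        params.m_CellBias                 = &bias;
        params.m_OutputGateBias           = &bias;

        INetworkPtr network = INetwork::Create();
        IConnectableLayer* input             = network->AddInputLayer(0, "input");
        IConnectableLayer* previousCellState = network->AddInputLayer(1, "previousCellStateIn");
        IConnectableLayer* previousOutput    = network->AddInputLayer(2, "previousOutputIn");
        IConnectableLayer* qLstm             = network->AddQuantizedLstmLayer(params, "quantizedLstm");
        IConnectableLayer* cellStateOut      = network->AddOutputLayer(0, "cellStateOut");
        IConnectableLayer* output            = network->AddOutputLayer(1, "output");

        // QuantizedLstmLayer has three inputs and two outputs.
        input->GetOutputSlot(0).Connect(qLstm->GetInputSlot(0));
        previousCellState->GetOutputSlot(0).Connect(qLstm->GetInputSlot(1));
        previousOutput->GetOutputSlot(0).Connect(qLstm->GetInputSlot(2));
        qLstm->GetOutputSlot(0).Connect(cellStateOut->GetInputSlot(0));
        qLstm->GetOutputSlot(1).Connect(output->GetInputSlot(0));

        return 0;
    }

The three input connections and two output connections mirror the Layer(3, 2, LayerType::QuantizedLstm, name) call in the constructor documented below.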

Constructor & Destructor Documentation

◆ QuantizedLstmLayer()

QuantizedLstmLayer ( const char *  name)
protected

Constructor to create a QuantizedLstmLayer.

Parameters
[in]  name  Optional name for the layer.

Definition at line 17 of file QuantizedLstmLayer.cpp.

References armnn::QuantizedLstm.

18  : Layer(3, 2, LayerType::QuantizedLstm, name)
19 {
20 }

◆ ~QuantizedLstmLayer()

~QuantizedLstmLayer ( )
protected default

Default destructor.

Member Function Documentation

◆ Accept()

ARMNN_NO_DEPRECATE_WARN_BEGIN void Accept ( ILayerVisitor &  visitor) const
override

Definition at line 174 of file QuantizedLstmLayer.cpp.

References ARMNN_NO_DEPRECATE_WARN_END, Layer::GetName(), ManagedConstTensorHandle::GetTensorInfo(), QuantizedLstmParameters::m_CellBias, QuantizedLstmInputParams::m_CellBias, QuantizedLstmParameters::m_ForgetGateBias, QuantizedLstmInputParams::m_ForgetGateBias, QuantizedLstmParameters::m_InputGateBias, QuantizedLstmInputParams::m_InputGateBias, QuantizedLstmParameters::m_InputToCellWeights, QuantizedLstmInputParams::m_InputToCellWeights, QuantizedLstmParameters::m_InputToForgetWeights, QuantizedLstmInputParams::m_InputToForgetWeights, QuantizedLstmParameters::m_InputToInputWeights, QuantizedLstmInputParams::m_InputToInputWeights, QuantizedLstmParameters::m_InputToOutputWeights, QuantizedLstmInputParams::m_InputToOutputWeights, QuantizedLstmParameters::m_OutputGateBias, QuantizedLstmInputParams::m_OutputGateBias, QuantizedLstmLayer::m_QuantizedLstmParameters, QuantizedLstmParameters::m_RecurrentToCellWeights, QuantizedLstmInputParams::m_RecurrentToCellWeights, QuantizedLstmParameters::m_RecurrentToForgetWeights, QuantizedLstmInputParams::m_RecurrentToForgetWeights, QuantizedLstmParameters::m_RecurrentToInputWeights, QuantizedLstmInputParams::m_RecurrentToInputWeights, QuantizedLstmParameters::m_RecurrentToOutputWeights, QuantizedLstmInputParams::m_RecurrentToOutputWeights, and ManagedConstTensorHandle::Map().

175 {
176  QuantizedLstmInputParams inputParams;
177 
178  ManagedConstTensorHandle managedInputToInputWeights(m_QuantizedLstmParameters.m_InputToInputWeights);
179  ManagedConstTensorHandle managedInputToForgetWeights(m_QuantizedLstmParameters.m_InputToForgetWeights);
180  ManagedConstTensorHandle managedInputToCellWeights(m_QuantizedLstmParameters.m_InputToCellWeights);
181  ManagedConstTensorHandle managedInputToOutputWeights(m_QuantizedLstmParameters.m_InputToOutputWeights);
182 
183  ManagedConstTensorHandle managedRecurrentToInputWeights(m_QuantizedLstmParameters.m_RecurrentToInputWeights);
184  ManagedConstTensorHandle managedRecurrentToForgetWeights(m_QuantizedLstmParameters.m_RecurrentToForgetWeights);
185  ManagedConstTensorHandle managedRecurrentToCellWeights(m_QuantizedLstmParameters.m_RecurrentToCellWeights);
186  ManagedConstTensorHandle managedRecurrentToOutputWeights(m_QuantizedLstmParameters.m_RecurrentToOutputWeights);
187 
188  ManagedConstTensorHandle managedInputGateBias(m_QuantizedLstmParameters.m_InputGateBias);
189  ManagedConstTensorHandle managedForgetGateBias(m_QuantizedLstmParameters.m_ForgetGateBias);
190  ManagedConstTensorHandle managedCellBias(m_QuantizedLstmParameters.m_CellBias);
191  ManagedConstTensorHandle managedOutputGateBias(m_QuantizedLstmParameters.m_OutputGateBias);
192 
193  // InputToX weight tensors
194  ConstTensor inputToInputWeightsTensor;
195  if (m_QuantizedLstmParameters.m_InputToInputWeights != nullptr)
196  {
197  ConstTensor inputToInputWeightsTensorCopy(managedInputToInputWeights.GetTensorInfo(),
198  managedInputToInputWeights.Map());
199  inputToInputWeightsTensor = inputToInputWeightsTensorCopy;
200  inputParams.m_InputToInputWeights = &inputToInputWeightsTensor;
201  }
202 
203  ConstTensor inputToForgetWeightsTensor;
204  if (m_QuantizedLstmParameters.m_InputToForgetWeights != nullptr)
205  {
206  ConstTensor inputToForgetWeightsTensorCopy(managedInputToForgetWeights.GetTensorInfo(),
207  managedInputToForgetWeights.Map());
208  inputToForgetWeightsTensor = inputToForgetWeightsTensorCopy;
209  inputParams.m_InputToForgetWeights = &inputToForgetWeightsTensor;
210  }
211 
212  ConstTensor inputToCellWeightsTensor;
213  if (m_QuantizedLstmParameters.m_InputToCellWeights != nullptr)
214  {
215  ConstTensor inputToCellWeightsTensorCopy(managedInputToCellWeights.GetTensorInfo(),
216  managedInputToCellWeights.Map());
217  inputToCellWeightsTensor = inputToCellWeightsTensorCopy;
218  inputParams.m_InputToCellWeights = &inputToCellWeightsTensor;
219  }
220 
221  ConstTensor inputToOutputWeightsTensor;
222  if (m_QuantizedLstmParameters.m_InputToOutputWeights != nullptr)
223  {
224  ConstTensor inputToOutputWeightsTensorCopy(managedInputToOutputWeights.GetTensorInfo(),
225  managedInputToOutputWeights.Map());
226  inputToOutputWeightsTensor = inputToOutputWeightsTensorCopy;
227  inputParams.m_InputToOutputWeights = &inputToOutputWeightsTensor;
228  }
229 
230  // RecurrentToX weight tensors
231  ConstTensor recurrentToInputWeightsTensor;
232  if (m_QuantizedLstmParameters.m_RecurrentToInputWeights != nullptr)
233  {
234  ConstTensor recurrentToInputWeightsTensorCopy(
235  managedRecurrentToInputWeights.GetTensorInfo(),
236  managedRecurrentToInputWeights.Map());
237  recurrentToInputWeightsTensor = recurrentToInputWeightsTensorCopy;
238  inputParams.m_RecurrentToInputWeights = &recurrentToInputWeightsTensor;
239  }
240 
241  ConstTensor recurrentToForgetWeightsTensor;
242  if (m_QuantizedLstmParameters.m_RecurrentToForgetWeights != nullptr)
243  {
244  ConstTensor recurrentToForgetWeightsTensorCopy(
245  managedRecurrentToForgetWeights.GetTensorInfo(),
246  managedRecurrentToForgetWeights.Map());
247  recurrentToForgetWeightsTensor = recurrentToForgetWeightsTensorCopy;
248  inputParams.m_RecurrentToForgetWeights = &recurrentToForgetWeightsTensor;
249  }
250 
251  ConstTensor recurrentToCellWeightsTensor;
252  if (m_QuantizedLstmParameters.m_RecurrentToCellWeights != nullptr)
253  {
254  ConstTensor recurrentToCellWeightsTensorCopy(
255  managedRecurrentToCellWeights.GetTensorInfo(),
256  managedRecurrentToCellWeights.Map());
257  recurrentToCellWeightsTensor = recurrentToCellWeightsTensorCopy;
258  inputParams.m_RecurrentToCellWeights = &recurrentToCellWeightsTensor;
259  }
260 
261  ConstTensor recurrentToOutputWeightsTensor;
262  if (m_QuantizedLstmParameters.m_RecurrentToOutputWeights != nullptr)
263  {
264  ConstTensor recurrentToOutputWeightsTensorCopy(
265  managedRecurrentToOutputWeights.GetTensorInfo(),
266  managedRecurrentToOutputWeights.Map());
267  recurrentToOutputWeightsTensor = recurrentToOutputWeightsTensorCopy;
268  inputParams.m_RecurrentToOutputWeights = &recurrentToOutputWeightsTensor;
269  }
270 
271  // Bias tensors
272  ConstTensor inputGateBiasTensor;
273  if (m_QuantizedLstmParameters.m_InputGateBias != nullptr)
274  {
275  ConstTensor inputGateBiasTensorCopy(managedInputGateBias.GetTensorInfo(),
276  managedInputGateBias.Map());
277  inputGateBiasTensor = inputGateBiasTensorCopy;
278  inputParams.m_InputGateBias = &inputGateBiasTensor;
279  }
280 
281  ConstTensor forgetGateBiasTensor;
282  if (m_QuantizedLstmParameters.m_ForgetGateBias != nullptr)
283  {
284  ConstTensor forgetGateBiasTensorCopy(managedForgetGateBias.GetTensorInfo(),
285  managedForgetGateBias.Map());
286  forgetGateBiasTensor = forgetGateBiasTensorCopy;
287  inputParams.m_ForgetGateBias = &forgetGateBiasTensor;
288  }
289 
290  ConstTensor cellBiasTensor;
291  if (m_QuantizedLstmParameters.m_CellBias != nullptr)
292  {
293  ConstTensor cellBiasTensorCopy(managedCellBias.GetTensorInfo(),
294  managedCellBias.Map());
295  cellBiasTensor = cellBiasTensorCopy;
296  inputParams.m_CellBias = &cellBiasTensor;
297  }
298 
299  ConstTensor outputGateBiasTensor;
300  if (m_QuantizedLstmParameters.m_OutputGateBias != nullptr)
301  {
302  ConstTensor outputGateBiasCopy(managedOutputGateBias.GetTensorInfo(),
303  managedOutputGateBias.Map());
304  outputGateBiasTensor = outputGateBiasCopy;
305  inputParams.m_OutputGateBias = &outputGateBiasTensor;
306  }
307 
308  visitor.VisitQuantizedLstmLayer(this, inputParams, GetName());
309 }

◆ Clone()

QuantizedLstmLayer * Clone ( Graph &  graph) const
override virtual

Creates a dynamically-allocated copy of this layer.

Parameters
[in]  graph  The graph into which this layer is being cloned.

Implements Layer.

Definition at line 47 of file QuantizedLstmLayer.cpp.

References Layer::GetName(), QuantizedLstmParameters::m_CellBias, QuantizedLstmParameters::m_ForgetGateBias, QuantizedLstmParameters::m_InputGateBias, QuantizedLstmParameters::m_InputToCellWeights, QuantizedLstmParameters::m_InputToForgetWeights, QuantizedLstmParameters::m_InputToInputWeights, QuantizedLstmParameters::m_InputToOutputWeights, QuantizedLstmParameters::m_OutputGateBias, QuantizedLstmLayer::m_QuantizedLstmParameters, QuantizedLstmParameters::m_RecurrentToCellWeights, QuantizedLstmParameters::m_RecurrentToForgetWeights, QuantizedLstmParameters::m_RecurrentToInputWeights, and QuantizedLstmParameters::m_RecurrentToOutputWeights.

48 {
49  auto layer = CloneBase<QuantizedLstmLayer>(graph, GetName());
50 
51  layer->m_QuantizedLstmParameters.m_InputToInputWeights = m_QuantizedLstmParameters.m_InputToInputWeights ?
52  m_QuantizedLstmParameters.m_InputToInputWeights : nullptr;
53  layer->m_QuantizedLstmParameters.m_InputToForgetWeights = m_QuantizedLstmParameters.m_InputToForgetWeights ?
54  m_QuantizedLstmParameters.m_InputToForgetWeights : nullptr;
55  layer->m_QuantizedLstmParameters.m_InputToCellWeights = m_QuantizedLstmParameters.m_InputToCellWeights ?
56  m_QuantizedLstmParameters.m_InputToCellWeights : nullptr;
57  layer->m_QuantizedLstmParameters.m_InputToOutputWeights = m_QuantizedLstmParameters.m_InputToOutputWeights ?
58  m_QuantizedLstmParameters.m_InputToOutputWeights : nullptr;
59 
60  layer->m_QuantizedLstmParameters.m_RecurrentToInputWeights = m_QuantizedLstmParameters.m_RecurrentToInputWeights ?
61  m_QuantizedLstmParameters.m_RecurrentToInputWeights : nullptr;
62  layer->m_QuantizedLstmParameters.m_RecurrentToForgetWeights = m_QuantizedLstmParameters.m_RecurrentToForgetWeights
63  ? m_QuantizedLstmParameters.m_RecurrentToForgetWeights : nullptr;
64  layer->m_QuantizedLstmParameters.m_RecurrentToCellWeights = m_QuantizedLstmParameters.m_RecurrentToCellWeights ?
65  m_QuantizedLstmParameters.m_RecurrentToCellWeights : nullptr;
66  layer->m_QuantizedLstmParameters.m_RecurrentToOutputWeights = m_QuantizedLstmParameters.m_RecurrentToOutputWeights
67  ? m_QuantizedLstmParameters.m_RecurrentToOutputWeights : nullptr;
68 
69  layer->m_QuantizedLstmParameters.m_InputGateBias = m_QuantizedLstmParameters.m_InputGateBias ?
70  m_QuantizedLstmParameters.m_InputGateBias : nullptr;
71  layer->m_QuantizedLstmParameters.m_ForgetGateBias = m_QuantizedLstmParameters.m_ForgetGateBias ?
72  m_QuantizedLstmParameters.m_ForgetGateBias : nullptr;
73  layer->m_QuantizedLstmParameters.m_CellBias = m_QuantizedLstmParameters.m_CellBias ?
74  m_QuantizedLstmParameters.m_CellBias : nullptr;
75  layer->m_QuantizedLstmParameters.m_OutputGateBias = m_QuantizedLstmParameters.m_OutputGateBias ?
76  m_QuantizedLstmParameters.m_OutputGateBias : nullptr;
77 
78  return std::move(layer);
79 }

◆ CreateWorkload()

std::unique_ptr< IWorkload > CreateWorkload ( const IWorkloadFactory &  factory) const
override virtual

Makes a workload for the QuantizedLstm type.

Parameters
[in]  graph  The graph where this layer can be found.
[in]  factory  The workload factory which will create the workload.
Returns
A pointer to the created workload, or nullptr if not created.

Implements Layer.

Definition at line 22 of file QuantizedLstmLayer.cpp.

References IWorkloadFactory::CreateWorkload(), QuantizedLstmParameters::m_CellBias, QuantizedLstmQueueDescriptor::m_CellBias, QuantizedLstmParameters::m_ForgetGateBias, QuantizedLstmQueueDescriptor::m_ForgetGateBias, QuantizedLstmParameters::m_InputGateBias, QuantizedLstmQueueDescriptor::m_InputGateBias, QuantizedLstmParameters::m_InputToCellWeights, QuantizedLstmQueueDescriptor::m_InputToCellWeights, QuantizedLstmParameters::m_InputToForgetWeights, QuantizedLstmQueueDescriptor::m_InputToForgetWeights, QuantizedLstmParameters::m_InputToInputWeights, QuantizedLstmQueueDescriptor::m_InputToInputWeights, QuantizedLstmParameters::m_InputToOutputWeights, QuantizedLstmQueueDescriptor::m_InputToOutputWeights, QuantizedLstmParameters::m_OutputGateBias, QuantizedLstmQueueDescriptor::m_OutputGateBias, QuantizedLstmLayer::m_QuantizedLstmParameters, QuantizedLstmParameters::m_RecurrentToCellWeights, QuantizedLstmQueueDescriptor::m_RecurrentToCellWeights, QuantizedLstmParameters::m_RecurrentToForgetWeights, QuantizedLstmQueueDescriptor::m_RecurrentToForgetWeights, QuantizedLstmParameters::m_RecurrentToInputWeights, QuantizedLstmQueueDescriptor::m_RecurrentToInputWeights, QuantizedLstmParameters::m_RecurrentToOutputWeights, QuantizedLstmQueueDescriptor::m_RecurrentToOutputWeights, Layer::PrepInfoAndDesc(), armnn::QuantizedLstm, and Layer::SetAdditionalInfo().

23 {
24  QuantizedLstmQueueDescriptor descriptor;
25 
26  // QuantizedLstmLayer parameters - there are no optional params
27  descriptor.m_InputToInputWeights = m_QuantizedLstmParameters.m_InputToInputWeights.get();
28  descriptor.m_InputToForgetWeights = m_QuantizedLstmParameters.m_InputToForgetWeights.get();
29  descriptor.m_InputToCellWeights = m_QuantizedLstmParameters.m_InputToCellWeights.get();
30  descriptor.m_InputToOutputWeights = m_QuantizedLstmParameters.m_InputToOutputWeights.get();
31 
32  descriptor.m_RecurrentToInputWeights = m_QuantizedLstmParameters.m_RecurrentToInputWeights.get();
33  descriptor.m_RecurrentToForgetWeights = m_QuantizedLstmParameters.m_RecurrentToForgetWeights.get();
34  descriptor.m_RecurrentToCellWeights = m_QuantizedLstmParameters.m_RecurrentToCellWeights.get();
35  descriptor.m_RecurrentToOutputWeights = m_QuantizedLstmParameters.m_RecurrentToOutputWeights.get();
36 
37  descriptor.m_InputGateBias = m_QuantizedLstmParameters.m_InputGateBias.get();
38  descriptor.m_ForgetGateBias = m_QuantizedLstmParameters.m_ForgetGateBias.get();
39  descriptor.m_CellBias = m_QuantizedLstmParameters.m_CellBias.get();
40  descriptor.m_OutputGateBias = m_QuantizedLstmParameters.m_OutputGateBias.get();
41 
42  SetAdditionalInfo(descriptor);
43 
44  return factory.CreateWorkload(LayerType::QuantizedLstm, descriptor, PrepInfoAndDesc(descriptor));
45 }
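On the backend side, the descriptor filled in above is what the created workload receives. The following is a rough, non-authoritative sketch of how a workload or validation routine might read it back; it assumes the QuantizedLstmQueueDescriptor and WorkloadInfo declarations from armnn/backends/WorkloadData.hpp (the header path is an assumption and has moved between releases), and the helper name is hypothetical.

    #include <armnn/backends/WorkloadData.hpp> // QuantizedLstmQueueDescriptor, WorkloadInfo (path assumed)

    // Hypothetical helper: inspects the descriptor produced by CreateWorkload() above.
    void InspectQuantizedLstmDescriptor(const armnn::QuantizedLstmQueueDescriptor& descriptor,
                                        const armnn::WorkloadInfo& info)
    {
        // Constant weights and biases travel as raw ConstTensorHandle pointers.
        if (descriptor.m_InputToInputWeights != nullptr)
        {
            const armnn::TensorInfo& weightsInfo = descriptor.m_InputToInputWeights->GetTensorInfo();
            // For InputToX weights the shape is [outputSize, inputSize] (QAsymm8).
            (void)weightsInfo;
        }

        // The runtime tensors (input, previousCellStateIn, previousOutputIn on the
        // input side; cellStateOut, output on the output side) are described by the
        // WorkloadInfo that PrepInfoAndDesc() populated from the layer's slots.
        const auto numInputs  = info.m_InputTensorInfos.size();  // expected: 3
        const auto numOutputs = info.m_OutputTensorInfos.size(); // expected: 2
        (void)numInputs;
        (void)numOutputs;
    }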

◆ ExecuteStrategy()

ARMNN_NO_DEPRECATE_WARN_END void ExecuteStrategy ( IStrategy &  strategy) const
override virtual

Apply a visitor to this layer.

Reimplemented from Layer.

Definition at line 312 of file QuantizedLstmLayer.cpp.

References IStrategy::ExecuteStrategy(), Layer::GetName(), ManagedConstTensorHandle::GetTensorInfo(), QuantizedLstmParameters::m_CellBias, QuantizedLstmParameters::m_ForgetGateBias, QuantizedLstmParameters::m_InputGateBias, QuantizedLstmParameters::m_InputToCellWeights, QuantizedLstmParameters::m_InputToForgetWeights, QuantizedLstmParameters::m_InputToInputWeights, QuantizedLstmParameters::m_InputToOutputWeights, QuantizedLstmParameters::m_OutputGateBias, QuantizedLstmLayer::m_QuantizedLstmParameters, QuantizedLstmParameters::m_RecurrentToCellWeights, QuantizedLstmParameters::m_RecurrentToForgetWeights, QuantizedLstmParameters::m_RecurrentToInputWeights, QuantizedLstmParameters::m_RecurrentToOutputWeights, and ManagedConstTensorHandle::Map().

313 {
314  std::vector<ConstTensor> constTensors;
315 
316  ManagedConstTensorHandle managedInputToInputWeights(m_QuantizedLstmParameters.m_InputToInputWeights);
317  ManagedConstTensorHandle managedInputToForgetWeights(m_QuantizedLstmParameters.m_InputToForgetWeights);
318  ManagedConstTensorHandle managedInputToCellWeights(m_QuantizedLstmParameters.m_InputToCellWeights);
319  ManagedConstTensorHandle managedInputToOutputWeights(m_QuantizedLstmParameters.m_InputToOutputWeights);
320 
321  ManagedConstTensorHandle managedRecurrentToInputWeights(m_QuantizedLstmParameters.m_RecurrentToInputWeights);
322  ManagedConstTensorHandle managedRecurrentToForgetWeights(m_QuantizedLstmParameters.m_RecurrentToForgetWeights);
323  ManagedConstTensorHandle managedRecurrentToCellWeights(m_QuantizedLstmParameters.m_RecurrentToCellWeights);
324  ManagedConstTensorHandle managedRecurrentToOutputWeights(m_QuantizedLstmParameters.m_RecurrentToOutputWeights);
325 
326  ManagedConstTensorHandle managedInputGateBias(m_QuantizedLstmParameters.m_InputGateBias);
327  ManagedConstTensorHandle managedForgetGateBias(m_QuantizedLstmParameters.m_ForgetGateBias);
328  ManagedConstTensorHandle managedCellBias(m_QuantizedLstmParameters.m_CellBias);
329  ManagedConstTensorHandle managedOutputGateBias(m_QuantizedLstmParameters.m_OutputGateBias);
330 
331  // InputToX weight tensors
332  if (m_QuantizedLstmParameters.m_InputToInputWeights != nullptr)
333  {
334  constTensors.emplace_back(ConstTensor(managedInputToInputWeights.GetTensorInfo(),
335  managedInputToInputWeights.Map()));
336  }
337 
338  if (m_QuantizedLstmParameters.m_InputToForgetWeights != nullptr)
339  {
340  constTensors.emplace_back(ConstTensor(managedInputToForgetWeights.GetTensorInfo(),
341  managedInputToForgetWeights.Map()));
342  }
343 
344  if (m_QuantizedLstmParameters.m_InputToCellWeights != nullptr)
345  {
346  constTensors.emplace_back(ConstTensor(managedInputToCellWeights.GetTensorInfo(),
347  managedInputToCellWeights.Map()));
348  }
349 
350  if (m_QuantizedLstmParameters.m_InputToOutputWeights != nullptr)
351  {
352  constTensors.emplace_back(ConstTensor(managedInputToOutputWeights.GetTensorInfo(),
353  managedInputToOutputWeights.Map()));
354  }
355 
356  // RecurrentToX weight tensors
357  if (m_QuantizedLstmParameters.m_RecurrentToInputWeights != nullptr)
358  {
359  constTensors.emplace_back(ConstTensor(
360  managedRecurrentToInputWeights.GetTensorInfo(),
361  managedRecurrentToInputWeights.Map()));
362  }
363 
364  if (m_QuantizedLstmParameters.m_RecurrentToForgetWeights != nullptr)
365  {
366  constTensors.emplace_back(ConstTensor(
367  managedRecurrentToForgetWeights.GetTensorInfo(),
368  managedRecurrentToForgetWeights.Map()));
369  }
370 
371  if (m_QuantizedLstmParameters.m_RecurrentToCellWeights != nullptr)
372  {
373  constTensors.emplace_back(ConstTensor(
374  managedRecurrentToCellWeights.GetTensorInfo(),
375  managedRecurrentToCellWeights.Map()));
376  }
377 
378  if (m_QuantizedLstmParameters.m_RecurrentToOutputWeights != nullptr)
379  {
380  constTensors.emplace_back(ConstTensor(
381  managedRecurrentToOutputWeights.GetTensorInfo(),
382  managedRecurrentToOutputWeights.Map()));
383  }
384 
385  // Bias tensors
386  if (m_QuantizedLstmParameters.m_InputGateBias != nullptr)
387  {
388  constTensors.emplace_back(ConstTensor(managedInputGateBias.GetTensorInfo(),
389  managedInputGateBias.Map()));
390  }
391 
392  if (m_QuantizedLstmParameters.m_ForgetGateBias != nullptr)
393  {
394  constTensors.emplace_back(ConstTensor(managedForgetGateBias.GetTensorInfo(),
395  managedForgetGateBias.Map()));
396  }
397 
398  if (m_QuantizedLstmParameters.m_CellBias != nullptr)
399  {
400  constTensors.emplace_back(ConstTensor(managedCellBias.GetTensorInfo(),
401  managedCellBias.Map()));
402  }
403 
404  if (m_QuantizedLstmParameters.m_OutputGateBias != nullptr)
405  {
406  constTensors.emplace_back(ConstTensor(managedOutputGateBias.GetTensorInfo(),
407  managedOutputGateBias.Map()));
408  }
409 
410 
411  strategy.ExecuteStrategy(this, BaseDescriptor(), constTensors, GetName());
412 }
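ExecuteStrategy() is the non-deprecated replacement for Accept(). Below is a minimal sketch of a strategy that simply reports what it is shown; it assumes IStrategy declares the single pure-virtual ExecuteStrategy overload used above (armnn/IStrategy.hpp; the include paths and the integer cast of GetType() are illustrative choices, not the library's prescribed usage).

    #include <armnn/Descriptors.hpp>
    #include <armnn/INetwork.hpp>
    #include <armnn/IStrategy.hpp>
    #include <armnn/Tensor.hpp>
    #include <iostream>
    #include <vector>

    // Prints every layer it is handed plus the number of constant tensors the
    // layer passed along; for QuantizedLstmLayer that is up to twelve, in the
    // order they were pushed into constTensors above.
    class PrintingStrategy : public armnn::IStrategy
    {
    public:
        void ExecuteStrategy(const armnn::IConnectableLayer* layer,
                             const armnn::BaseDescriptor& /*descriptor*/,
                             const std::vector<armnn::ConstTensor>& constants,
                             const char* name,
                             const armnn::LayerBindingId /*id*/) override
        {
            std::cout << "layer type #" << static_cast<int>(layer->GetType())
                      << " \"" << (name != nullptr ? name : "<unnamed>") << "\""
                      << " with " << constants.size() << " constant tensor(s)\n";
        }
    };

A layer's own ExecuteStrategy(strategy) call (or a whole-network walk through INetwork::ExecuteStrategy, where available) then invokes this class once per layer.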

◆ GetConstantTensorsByRef()

Layer::ConstantTensors GetConstantTensorsByRef ( )
override protected virtual

Retrieve the handles to the constant values stored by the layer.

Returns
A vector of the constant tensors stored by this layer.

Reimplemented from Layer.

Definition at line 151 of file QuantizedLstmLayer.cpp.

References ARMNN_NO_DEPRECATE_WARN_BEGIN, QuantizedLstmParameters::m_CellBias, QuantizedLstmParameters::m_ForgetGateBias, QuantizedLstmParameters::m_InputGateBias, QuantizedLstmParameters::m_InputToCellWeights, QuantizedLstmParameters::m_InputToForgetWeights, QuantizedLstmParameters::m_InputToInputWeights, QuantizedLstmParameters::m_InputToOutputWeights, QuantizedLstmParameters::m_OutputGateBias, QuantizedLstmLayer::m_QuantizedLstmParameters, QuantizedLstmParameters::m_RecurrentToCellWeights, QuantizedLstmParameters::m_RecurrentToForgetWeights, QuantizedLstmParameters::m_RecurrentToInputWeights, and QuantizedLstmParameters::m_RecurrentToOutputWeights.

152 {
153  // For API stability DO NOT ALTER order and add new members to the end of vector
154  return
155  {
156  m_QuantizedLstmParameters.m_InputToInputWeights,
157  m_QuantizedLstmParameters.m_InputToForgetWeights,
158  m_QuantizedLstmParameters.m_InputToCellWeights,
159  m_QuantizedLstmParameters.m_InputToOutputWeights,
160 
161  m_QuantizedLstmParameters.m_RecurrentToInputWeights,
162  m_QuantizedLstmParameters.m_RecurrentToForgetWeights,
163  m_QuantizedLstmParameters.m_RecurrentToCellWeights,
164  m_QuantizedLstmParameters.m_RecurrentToOutputWeights,
165 
166  m_QuantizedLstmParameters.m_InputGateBias,
167  m_QuantizedLstmParameters.m_ForgetGateBias,
168  m_QuantizedLstmParameters.m_CellBias,
169  m_QuantizedLstmParameters.m_OutputGateBias
170  };
171 }
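The handles returned above are the same twelve shared pointers that make up the public m_QuantizedLstmParameters member, so code outside the layer can reach them without this protected accessor. A small illustrative sketch follows; the function name is hypothetical and the include path of the internal QuantizedLstmLayer.hpp header (under src/armnn/layers in the source tree) is an assumption.

    #include <QuantizedLstmLayer.hpp> // internal header, include path assumed
    #include <memory>

    // Counts how many of the twelve constant tensors have been attached to the
    // layer, in the same order used by GetConstantTensorsByRef() above.
    unsigned int CountAttachedConstants(const armnn::QuantizedLstmLayer& layer)
    {
        const armnn::QuantizedLstmParameters& p = layer.m_QuantizedLstmParameters;

        const std::shared_ptr<armnn::ConstTensorHandle>* handles[] =
        {
            &p.m_InputToInputWeights,     &p.m_InputToForgetWeights,
            &p.m_InputToCellWeights,      &p.m_InputToOutputWeights,
            &p.m_RecurrentToInputWeights, &p.m_RecurrentToForgetWeights,
            &p.m_RecurrentToCellWeights,  &p.m_RecurrentToOutputWeights,
            &p.m_InputGateBias,           &p.m_ForgetGateBias,
            &p.m_CellBias,                &p.m_OutputGateBias
        };

        unsigned int count = 0;
        for (const auto* handle : handles)
        {
            if (*handle != nullptr)
            {
                ++count;
            }
        }
        return count;
    }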

◆ InferOutputShapes()

std::vector< TensorShape > InferOutputShapes ( const std::vector< TensorShape > &  inputShapes) const
override virtual

By default returns inputShapes if the number of inputs is equal to the number of outputs; otherwise infers the output shapes from the given input shapes and layer properties.

Parameters
[in]  inputShapes  The input shapes the layer has.
Returns
A vector containing the inferred output shapes.

Reimplemented from Layer.

Definition at line 81 of file QuantizedLstmLayer.cpp.

References ARMNN_ASSERT.

Referenced by QuantizedLstmInferOutputShapeImpl(), and QuantizedLstmLayer::ValidateTensorShapesFromInputs().

82 {
83  ARMNN_ASSERT(inputShapes.size() == 3);
84 
85  // Get input values for validation
86  unsigned int numBatches = inputShapes[0][0];
87  unsigned int outputSize = inputShapes[1][1];
88 
89  std::vector<TensorShape> outShapes;
90  outShapes.push_back(TensorShape({numBatches, outputSize})); // cellStateOut
91  outShapes.push_back(TensorShape({numBatches, outputSize})); // output
92 
93  return outShapes;
94 }
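Both inferred shapes therefore depend only on the batch dimension of the first input shape and the second dimension of the previousCellStateIn shape. A small worked example with arbitrary sizes (the function name is hypothetical):

    #include <armnn/Tensor.hpp>

    // Shapes in the order ValidateTensorShapesFromInputs() passes them in.
    void QuantizedLstmShapeExample()
    {
        armnn::TensorShape input({2, 4});               // [numBatches, inputSize]
        armnn::TensorShape previousCellStateIn({2, 3}); // [numBatches, outputSize]
        armnn::TensorShape previousOutputIn({2, 3});    // [numBatches, outputSize]

        // numBatches = inputShapes[0][0] = 2 and outputSize = inputShapes[1][1] = 3,
        // so InferOutputShapes() returns two identical shapes:
        //   outShapes[0] = {2, 3}  // cellStateOut
        //   outShapes[1] = {2, 3}  // output
        (void)input; (void)previousCellStateIn; (void)previousOutputIn;
    }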

◆ ValidateTensorShapesFromInputs()

void ValidateTensorShapesFromInputs ( )
override virtual

Check if the input tensor shape(s) will lead to a valid configuration of QuantizedLstmLayer.

Parameters
[in]  shapeInferenceMethod  Indicates if output shape shall be overwritten or just validated.

Implements Layer.

Definition at line 96 of file QuantizedLstmLayer.cpp.

References ARMNN_ASSERT, ARMNN_ASSERT_MSG, CHECK_LOCATION, InputSlot::GetConnection(), Layer::GetInputSlot(), Layer::GetOutputSlot(), TensorInfo::GetShape(), armnn::GetTensorInfo(), IOutputSlot::GetTensorInfo(), OutputSlot::GetTensorInfo(), QuantizedLstmLayer::InferOutputShapes(), QuantizedLstmParameters::m_CellBias, QuantizedLstmParameters::m_ForgetGateBias, QuantizedLstmParameters::m_InputGateBias, QuantizedLstmParameters::m_InputToCellWeights, QuantizedLstmParameters::m_InputToForgetWeights, QuantizedLstmParameters::m_InputToInputWeights, QuantizedLstmParameters::m_InputToOutputWeights, QuantizedLstmParameters::m_OutputGateBias, QuantizedLstmLayer::m_QuantizedLstmParameters, QuantizedLstmParameters::m_RecurrentToCellWeights, QuantizedLstmParameters::m_RecurrentToForgetWeights, QuantizedLstmParameters::m_RecurrentToInputWeights, QuantizedLstmParameters::m_RecurrentToOutputWeights, Layer::m_ShapeInferenceMethod, Layer::ValidateAndCopyShape(), Layer::VerifyLayerConnections(), and Layer::VerifyShapeInferenceType().

97 {
98  VerifyLayerConnections(3, CHECK_LOCATION());
99 
100  const TensorShape& outputShape = GetOutputSlot(0).GetTensorInfo().GetShape();
101 
102  VerifyShapeInferenceType(outputShape, m_ShapeInferenceMethod);
103 
104  auto inferredShapes = InferOutputShapes(
105  {
106  GetInputSlot(0).GetConnection()->GetTensorInfo().GetShape(), // input
107  GetInputSlot(1).GetConnection()->GetTensorInfo().GetShape(), // previousCellStateIn
108  GetInputSlot(2).GetConnection()->GetTensorInfo().GetShape() // previousOutputIn
109  });
110 
111  ARMNN_ASSERT(inferredShapes.size() == 2);
112 
113  // Check weights and bias for nullptr
114  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_InputToInputWeights != nullptr,
115  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_InputToInputWeights should not be null.");
116  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_InputToForgetWeights != nullptr,
117  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_InputToForgetWeights should not be null.");
118  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_InputToCellWeights != nullptr,
119  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_InputToCellWeights should not be null.");
120  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_InputToOutputWeights != nullptr,
121  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_InputToOutputWeights should not be null.");
122 
123  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_RecurrentToInputWeights != nullptr,
124  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_RecurrentToInputWeights should not be null.");
125  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_RecurrentToForgetWeights != nullptr,
126  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_RecurrentToForgetWeights should not be null.");
127  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_RecurrentToCellWeights != nullptr,
128  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_RecurrentToCellWeights should not be null.");
129  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_RecurrentToOutputWeights != nullptr,
130  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_RecurrentToOutputWeights should not be null.");
131 
132  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_InputGateBias != nullptr,
133  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_InputGateBias should not be null.");
134  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_ForgetGateBias != nullptr,
135  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_ForgetGateBias should not be null.");
136  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_CellBias != nullptr,
137  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_CellBias should not be null.");
138  ARMNN_ASSERT_MSG(m_QuantizedLstmParameters.m_OutputGateBias != nullptr,
139  "QuantizedLstmLayer: m_QuantizedLstmParameters.m_OutputGateBias should not be null.");
140 
141  // Check output TensorShape(s) match inferred shape
142  ValidateAndCopyShape(outputShape, inferredShapes[0], m_ShapeInferenceMethod, "QuantizedLstmLayer");
143 
144  ValidateAndCopyShape(GetOutputSlot(1).GetTensorInfo().GetShape(),
145  inferredShapes[1],
146  m_ShapeInferenceMethod,
147  "QuantizedLstmLayer",
148  1);
149 }

Member Data Documentation

◆ m_QuantizedLstmParameters

QuantizedLstmParameters QuantizedLstmLayer::m_QuantizedLstmParameters

The documentation for this class was generated from the following files:
QuantizedLstmLayer.hpp
QuantizedLstmLayer.cpp