22.05
Copyright (c) 2021 ARM Limited and Contributors. More...
Namespaces | |
experimental | |
gatordmock | |
optimizations | |
profiling | |
stringUtils | |
test | |
timelinedecoder | |
utility | |
Functions | |
LayerSupportHandle | GetILayerSupportByBackendId (const armnn::BackendId &backend) |
Convenience function to retrieve the LayerSupportHandle for a backend. More... | |
bool | HasCapability (const std::string &name, const BackendCapabilities &capabilities) |
Convenience function to check if a capability exists in a BackendCapabilities struct. More... | |
bool | HasCapability (const std::string &name, const armnn::BackendId &backend) |
Convenience function to check if a capability exists in a backend. More... | |
bool | HasCapability (const BackendOptions::BackendOption &capability, const BackendCapabilities &capabilities) |
Convenience function to check if a given capability matches a capability in a BackendCapabilities struct. More... | |
bool | HasCapability (const BackendOptions::BackendOption &backendOption, const armnn::BackendId &backend) |
Convenience function to check if a given capability matches a capability in a backend. More... | |
Optional< const BackendOptions::BackendOption > | GetCapability (const std::string &backendCapabilityName, const BackendCapabilities &capabilities) |
Returns a BackendCapability if the backend lists the capability; the BackendCapability must then be inspected to check whether or not it is supported. Returns an EmptyOptional if the BackendCapability is unlisted. More... | |
Optional< const BackendOptions::BackendOption > | GetCapability (const std::string &backendCapabilityName, const armnn::BackendId &backend) |
Returns a BackendCapability if the backend lists the capability; the BackendCapability must then be inspected to check whether or not it is supported. Returns an EmptyOptional if the BackendCapability is unlisted. More... | |
bool | IsCapabilitySupported (const armnn::BackendId &backend, armnn::BackendCapability capability) |
Convenience function to check a capability on a backend. More... | |
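The capability queries above distinguish between a capability being *listed* and being *supported*: HasCapability only checks presence, while GetCapability hands back the entry for inspection. A minimal standalone sketch of that contract, using a plain map of boolean options as a hypothetical stand-in for BackendCapabilities (the type names and capability strings here are illustrative, not the armnn definitions):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>

// Hypothetical stand-in for a BackendCapabilities lookup: a named set of
// boolean options. The listed-vs-supported distinction works the same way.
using Capabilities = std::map<std::string, bool>;

// Mirrors HasCapability: true if the name is listed at all.
bool HasCapabilitySketch(const std::string& name, const Capabilities& caps)
{
    return caps.find(name) != caps.end();
}

// Mirrors GetCapability: returns the listed entry, or empty if unlisted.
// Note that a listed capability may still carry the value false.
std::optional<bool> GetCapabilitySketch(const std::string& name, const Capabilities& caps)
{
    auto it = caps.find(name);
    if (it == caps.end()) { return std::nullopt; }
    return it->second;
}
```

A caller therefore needs both steps: first check that GetCapability returned a value, then check the value itself.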
unsigned int | GetNumberOfCacheFiles (const armnn::BackendId &backend) |
Returns the number of cached files if the backend supports caching. More... | |
constexpr char const * | GetComputeDeviceAsCString (Compute compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const std::vector< Compute > &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const std::set< Compute > &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const Compute &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const BackendId &id) |
template<template< typename... > class TContainer, typename... TContainerTemplateArgs> | |
std::ostream & | operator<< (std::ostream &os, const TContainer< BackendId, TContainerTemplateArgs... > &ids) |
template<typename F > | |
void | ParseOptions (const std::vector< BackendOptions > &options, BackendId backend, F f) |
bool | ParseBooleanBackendOption (const armnn::BackendOptions::Var &value, bool defaultValue) |
std::string | ParseStringBackendOption (const armnn::BackendOptions::Var &value, std::string defaultValue) |
int | ParseIntBackendOption (const armnn::BackendOptions::Var &value, int defaultValue) |
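ParseOptions walks every BackendOptions group registered for a given backend and invokes a callable per option, while the Parse*BackendOption helpers coerce an option value to a concrete type with a default fallback. The following standalone sketch imitates that pattern; the struct, variant layout, and option names are assumptions for illustration, not the armnn types:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <variant>
#include <vector>

// Hypothetical stand-in for BackendOptions: a backend name plus
// (option name, value) pairs, with the value held in a small variant.
using OptionValue = std::variant<bool, int, std::string>;
struct OptionsSketch
{
    std::string backend;
    std::vector<std::pair<std::string, OptionValue>> options;
};

// Mirrors the ParseOptions pattern: invoke f(name, value) for every
// option registered against the requested backend.
template <typename F>
void ParseOptionsSketch(const std::vector<OptionsSketch>& all,
                        const std::string& backend, F f)
{
    for (const auto& group : all)
    {
        if (group.backend != backend) { continue; }
        for (const auto& [name, value] : group.options)
        {
            f(name, value);
        }
    }
}

// Mirrors ParseBooleanBackendOption: fall back to the default when the
// value does not hold the expected type.
bool ParseBoolSketch(const OptionValue& value, bool defaultValue)
{
    if (auto p = std::get_if<bool>(&value)) { return *p; }
    return defaultValue;
}
```

The callable typically switches on the option name and stores recognised values, ignoring the rest.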
BackendRegistry & | BackendRegistryInstance () |
std::ostream & | operator<< (std::ostream &os, const BackendVersion &backendVersion) |
TensorShape | GetUnpaddedTensorStrides (const TensorInfo &tensorInfo) |
DataType | GetBiasDataType (DataType inputDataType) |
ARMNN_NO_DEPRECATE_WARN_BEGIN struct | ARMNN_DEPRECATED_MSG_REMOVAL_DATE ("ResizeBilinearQueueDescriptor is deprecated use ResizeQueueDescriptor instead", "22.08") ResizeBilinearQueueDescriptor |
template<typename TensorShapeIt > | |
OriginsDescriptor | CreateDescriptorForConcatenation (TensorShapeIt first, TensorShapeIt last, unsigned int concatenationDimension) |
Convenience template to create an OriginsDescriptor to use when creating a ConcatLayer for performing concatenation of a number of input tensors. More... | |
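An OriginsDescriptor for concatenation records, for each input tensor, its offset (origin) inside the merged output: zero in every dimension except the concatenation dimension, where offsets accumulate. A minimal sketch of that computation, using plain vectors as a stand-in for the armnn shape and descriptor types:

```cpp
#include <cassert>
#include <vector>

// Illustrative only: given input shapes and a concatenation dimension,
// compute the origin (offset) of each input inside the merged output,
// which is the information an OriginsDescriptor records for a ConcatLayer.
using Shape = std::vector<unsigned int>;

std::vector<Shape> ConcatOriginsSketch(const std::vector<Shape>& shapes,
                                       unsigned int concatDim)
{
    std::vector<Shape> origins;
    unsigned int runningOffset = 0;
    for (const auto& shape : shapes)
    {
        Shape origin(shape.size(), 0);     // zero offset in every dimension...
        origin[concatDim] = runningOffset; // ...except the concat dimension
        origins.push_back(origin);
        runningOffset += shape[concatDim];
    }
    return origins;
}
```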
template<typename ExceptionType > | |
void | ConditionalThrow (bool condition, const std::string &message) |
template<typename ExceptionType > | |
void | ConditionalThrow (bool condition) |
template<typename ExceptionType , typename ComparedType > | |
void | ConditionalThrowIfNotEqual (const std::string &message, const ComparedType &leftHandSide, const ComparedType &rightHandSide) |
ComparedType must support: operator==(const ComparedType&) and operator<<(ostream&, const ComparedType&). More... | |
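Those two operator requirements exist because the helper both performs the comparison and streams the operands into the exception message. A standalone sketch of that contract (the function name is hypothetical; std::runtime_error stands in for an armnn exception type):

```cpp
#include <cassert>
#include <sstream>
#include <stdexcept>
#include <string>

// Illustrative sketch of the ConditionalThrowIfNotEqual contract: the
// compared type only needs operator== (for the check) and operator<<
// (to build the exception message).
template <typename ExceptionType, typename ComparedType>
void ConditionalThrowIfNotEqualSketch(const std::string& message,
                                      const ComparedType& lhs,
                                      const ComparedType& rhs)
{
    if (!(lhs == rhs))
    {
        std::stringstream ss;
        ss << message << " : " << lhs << " != " << rhs;
        throw ExceptionType(ss.str());
    }
}
```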
class | ARMNN_DEPRECATED_MSG_REMOVAL_DATE ("Use ABI stable IStrategy instead.", "22.05") ILayerVisitor |
IOptimizedNetworkPtr | Optimize (const INetwork &network, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptions &options=OptimizerOptions(), Optional< std::vector< std::string > &> messages=EmptyOptional()) |
Create an optimized version of the network. More... | |
IOptimizedNetworkPtr | Optimize (const Graph &inGraph, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptions &options, Optional< std::vector< std::string > &> messages=EmptyOptional()) |
Create an optimized version of the network. More... | |
bool | IsActivationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsAdditionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsBatchNormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsBatchToSpaceNdSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConcatSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConstantSupported (const BackendId &backend, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvertFp16ToFp32Supported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvertFp32ToFp16Supported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvolution2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDebugSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDepthwiseConvolutionSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDequantizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDivisionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsEqualSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFakeQuantizationSupported (const BackendId &backend, const TensorInfo &input, const FakeQuantizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFloorSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFullyConnectedSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsGreaterSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsInputSupported (const BackendId &backend, const TensorInfo &input, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsL2NormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMaximumSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnSupported=nullptr, size_t reasonIfUnSupportedMaxLength=0) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMeanSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMemCopySupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMergeSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMinimumSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMultiplicationSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsNormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsOutputSupported (const BackendId &backend, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPadSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPermuteSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPreCompiledSupported (const BackendId &backend, const TensorInfo &input, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPreluSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPooling2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsQuantizedLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &previousCellStateIn, const TensorInfo &previousOutputIn, const TensorInfo &cellStateOut, const TensorInfo &output, const QuantizedLstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsReduceSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsReshapeSupported (const BackendId &backend, const TensorInfo &input, const ReshapeDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsResizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsRsqrtSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSoftmaxSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSpaceToBatchNdSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSpaceToDepthSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSplitterSupported (const BackendId &backend, const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, const ViewsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsStackSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const StackDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsStridedSliceSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSubtractionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSwitchSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output0, const TensorInfo &output1, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsTransposeConvolution2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
std::string | LevelToString (LogSeverity level) |
LogSeverity | StringToLogLevel (std::string level) |
void | SetLogFilter (LogSeverity level) |
void | SetAllLoggingSinks (bool standardOut, bool debugOut, bool coloured) |
constexpr LogSeverity | ConvertLogSeverity (BoostLogSeverityMapping severity) |
template<typename Arg , typename std::enable_if< IsMemorySource< Arg >::value >::type * = nullptr> | |
MemorySourceFlags | Combine (Arg sourceA, Arg sourceB) |
template<typename Arg , typename ... Args, typename std::enable_if< IsMemorySource< Arg >::value >::type * = nullptr> | |
MemorySourceFlags | Combine (Arg source, Args... rest) |
bool | CheckFlag (MemorySourceFlags flags, MemorySource source) |
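Combine and CheckFlag follow the usual bit-flag idiom: each MemorySource is a distinct bit, Combine ORs sources into a MemorySourceFlags value, and CheckFlag tests a single bit. A standalone sketch (the enumerator names and values here are assumptions for illustration, not the armnn definitions):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative bit-flag model of MemorySource / MemorySourceFlags.
enum class MemSourceSketch : uint32_t
{
    Malloc          = 1u << 0,
    DmaBuf          = 1u << 1,
    DmaBufProtected = 1u << 2,
};
using FlagsSketch = uint32_t;

// Mirrors Combine: OR the source bits together.
FlagsSketch CombineSketch(MemSourceSketch a, MemSourceSketch b)
{
    return static_cast<FlagsSketch>(a) | static_cast<FlagsSketch>(b);
}

// Mirrors CheckFlag: test whether a single source bit is set.
bool CheckFlagSketch(FlagsSketch flags, MemSourceSketch source)
{
    return (flags & static_cast<FlagsSketch>(source)) != 0;
}
```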
template<typename T , class... Args> | |
Optional< T > | MakeOptional (Args &&... args) |
Utility template that constructs an object of type T in-place and wraps it inside an Optional<T> object. More... | |
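The point of MakeOptional is in-place construction: the arguments are forwarded to T's constructor inside the Optional, so no temporary T is built and copied. The same idea can be sketched with std::optional and std::in_place (this is an analogue, not armnn's Optional implementation):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <utility>

// Illustrative analogue of MakeOptional using std::optional: forward the
// constructor arguments so T is built in place inside the optional.
template <typename T, typename... Args>
std::optional<T> MakeOptionalSketch(Args&&... args)
{
    return std::optional<T>(std::in_place, std::forward<Args>(args)...);
}
```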
const char * | GetLayerTypeAsCString (LayerType type) |
constexpr char const * | GetStatusAsCString (Status status) |
constexpr char const * | GetActivationFunctionAsCString (ActivationFunction activation) |
constexpr char const * | GetArgMinMaxFunctionAsCString (ArgMinMaxFunction function) |
constexpr char const * | GetComparisonOperationAsCString (ComparisonOperation operation) |
constexpr char const * | GetUnaryOperationAsCString (UnaryOperation operation) |
constexpr char const * | GetLogicalBinaryOperationAsCString (LogicalBinaryOperation operation) |
constexpr char const * | GetPoolingAlgorithmAsCString (PoolingAlgorithm pooling) |
constexpr char const * | GetOutputShapeRoundingAsCString (OutputShapeRounding rounding) |
constexpr char const * | GetPaddingMethodAsCString (PaddingMethod method) |
constexpr char const * | GetPaddingModeAsCString (PaddingMode mode) |
constexpr char const * | GetReduceOperationAsCString (ReduceOperation reduce_operation) |
constexpr unsigned int | GetDataTypeSize (DataType dataType) |
template<unsigned N> | |
constexpr bool | StrEqual (const char *strA, const char(&strB)[N]) |
constexpr armnn::Compute | ParseComputeDevice (const char *str) |
Deprecated function that will be removed together with the Compute enum. More... | |
constexpr const char * | GetDataTypeName (DataType dataType) |
constexpr const char * | GetDataLayoutName (DataLayout dataLayout) |
constexpr const char * | GetNormalizationAlgorithmChannelAsCString (NormalizationAlgorithmChannel channel) |
constexpr const char * | GetNormalizationAlgorithmMethodAsCString (NormalizationAlgorithmMethod method) |
constexpr const char * | GetResizeMethodAsCString (ResizeMethod method) |
constexpr const char * | GetMemBlockStrategyTypeName (MemBlockStrategyType memBlockStrategyType) |
template<typename T > | |
constexpr bool | IsQuantizedType () |
constexpr bool | IsQuantized8BitType (DataType dataType) |
constexpr bool | IsQuantizedType (DataType dataType) |
std::ostream & | operator<< (std::ostream &os, Status stat) |
std::ostream & | operator<< (std::ostream &os, const armnn::TensorShape &shape) |
template<typename QuantizedType > | |
QuantizedType | Quantize (float value, float scale, int32_t offset) |
Quantize a floating point data type into an 8-bit data type. More... | |
template<typename QuantizedType > | |
float | Dequantize (QuantizedType value, float scale, int32_t offset) |
Dequantize an 8-bit data type into a floating point data type. More... | |
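Quantize and Dequantize implement the affine quantization scheme, where real = scale * (quantized - offset). The sketch below matches that contract; the round-to-nearest and saturate-to-range behaviour is an assumption of this illustration rather than a statement about the armnn implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <limits>

// Illustrative affine quantization: real = scale * (quantized - offset).
// Rounds to nearest and saturates to the target type's range (assumed).
template <typename QuantizedType>
QuantizedType QuantizeSketch(float value, float scale, int32_t offset)
{
    float q  = std::round(value / scale) + static_cast<float>(offset);
    float lo = static_cast<float>(std::numeric_limits<QuantizedType>::lowest());
    float hi = static_cast<float>(std::numeric_limits<QuantizedType>::max());
    return static_cast<QuantizedType>(std::min(std::max(q, lo), hi));
}

// The inverse mapping back to floating point.
template <typename QuantizedType>
float DequantizeSketch(QuantizedType value, float scale, int32_t offset)
{
    return scale * (static_cast<float>(value) - static_cast<float>(offset));
}
```

For example, with scale 0.1 and offset 128, the real value 1.0 maps to the quantized value 138, and dequantizing 138 recovers 1.0.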
void | VerifyTensorInfoDataType (const armnn::TensorInfo &info, armnn::DataType dataType) |
template<typename ... Ts> | |
void | IgnoreUnused (Ts &&...) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Source >::value &&std::is_unsigned< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Source >::value &&std::is_integral< Source >::value &&std::is_signed< Dest >::value &&std::is_integral< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Source >::value &&std::is_floating_point< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Source >::value &&std::is_signed< Dest >::value &&std::is_integral< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Source >::value &&std::is_integral< Source >::value &&std::is_floating_point< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Dest >::value &&std::is_integral< Dest >::value &&std::is_unsigned< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Dest >::value &&std::is_unsigned< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Dest >::value &&std::is_signed< Source >::value &&std::is_integral< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Dest >::value &&std::is_floating_point< Source >::value, Dest > | numeric_cast (Source sValue) |
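The numeric_cast overloads above exist because each signedness/floatness pairing needs its own range check before the conversion is allowed. A single-template sketch in the same spirit, which rejects values that would overflow or change sign (this round-trip check is stricter than a pure range check, e.g. it also rejects fractional float values, so it is an illustration rather than a drop-in equivalent):

```cpp
#include <cassert>
#include <stdexcept>
#include <type_traits>

// Illustrative checked conversion: throw if the value does not survive a
// round trip (overflow) or flips sign (signed/unsigned mismatch).
template <typename Dest, typename Source>
Dest NumericCastSketch(Source value)
{
    static_assert(std::is_arithmetic<Source>::value && std::is_arithmetic<Dest>::value,
                  "NumericCastSketch handles arithmetic types only");
    Dest converted = static_cast<Dest>(value);
    if (static_cast<Source>(converted) != value ||
        (value < Source{}) != (converted < Dest{}))
    {
        throw std::overflow_error("NumericCastSketch: value out of range");
    }
    return converted;
}
```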
template<typename DestType , typename SourceType > | |
DestType | PolymorphicDowncast (SourceType *value) |
Polymorphic downcast for built-in pointers only. More... | |
template<typename DestType , typename SourceType > | |
auto | PolymorphicPointerDowncast (const SourceType &value) |
Polymorphic downcast for shared pointers and built-in pointers. More... | |
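A polymorphic downcast is the checked-downcast idiom: the cast itself is a cheap static_cast, with the correctness of the conversion verified via dynamic_cast in debug builds. A standalone sketch of the raw-pointer case (the type names are hypothetical; this is the idiom, not the armnn source):

```cpp
#include <cassert>
#include <type_traits>

struct BaseSketch    { virtual ~BaseSketch() = default; };
struct DerivedSketch : BaseSketch { int tag = 7; };

// Illustrative checked downcast: static_cast for speed, with the result
// verified against dynamic_cast when assertions are enabled.
template <typename DestType, typename SourceType>
DestType PolymorphicDowncastSketch(SourceType* value)
{
    static_assert(std::is_pointer<DestType>::value, "DestType must be a pointer type");
    assert(dynamic_cast<DestType>(value) == static_cast<DestType>(value));
    return static_cast<DestType>(value);
}
```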
std::chrono::high_resolution_clock::time_point | GetTimeNow () |
std::chrono::duration< double, std::milli > | GetTimeDuration (std::chrono::high_resolution_clock::time_point start_time) |
template<typename Function , typename Iterator > | |
constexpr TransformIterator< Function, Iterator > | MakeTransformIterator (Iterator i, Function f) |
void | ConfigureLogging (bool printToStandardOutput, bool printToDebugOutput, LogSeverity severity) |
Configures the logging behaviour of the ARMNN library. More... | |
bool | NeonDetected () |
const std::string | GetVersion () |
void | swap (OriginsDescriptor &first, OriginsDescriptor &second) |
void | swap (ViewsDescriptor &first, ViewsDescriptor &second) |
uint32_t | GetNumInputs (bool biasEnabled) |
void | AssertNumberOfInputSlots (Layer &layer) |
template<typename T > | |
constexpr LayerType | LayerEnumOf (const T *=nullptr) |
template<> | |
constexpr LayerType | LayerEnumOf (const ActivationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const AdditionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ArgMinMaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const BatchNormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const BatchToSpaceNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const CastLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ChannelShuffleLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ComparisonLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConcatLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConstantLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertBf16ToFp32Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp16ToFp32Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp32ToBf16Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp32ToFp16Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Convolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Convolution3dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DebugLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DepthToSpaceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DepthwiseConvolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DequantizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DetectionPostProcessLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DivisionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ElementwiseUnaryLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FakeQuantizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FillLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FloorLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FullyConnectedLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const GatherLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const GatherNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const InputLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const InstanceNormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const L2NormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LogicalBinaryLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LogSoftmaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MapLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MaximumLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MeanLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MemCopyLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MemImportLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MergeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MinimumLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MultiplicationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const NormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const OutputLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PadLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PermuteLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Pooling2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Pooling3dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PreCompiledLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PreluLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QuantizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QuantizedLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const RankLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ReduceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ReshapeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ResizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ShapeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SliceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SoftmaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SpaceToBatchNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SpaceToDepthLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SplitterLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StackLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StandInLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StridedSliceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SubtractionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SwitchLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const TransposeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const TransposeConvolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const UnidirectionalSequenceLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const UnmapLayer *) |
template<typename T , typename V > | |
void | SetValueChecked (Optional< T &> optionalRef, V &&val) |
template<typename Float16Func , typename Float32Func , typename Uint8Func , typename Int32Func , typename BooleanFunc , typename ... Params> | |
bool | IsSupportedForDataTypeGeneric (Optional< std::string &> reasonIfUnsupported, DataType dataType, Float16Func float16FuncPtr, Float32Func float32FuncPtr, Uint8Func uint8FuncPtr, Int32Func int32FuncPtr, BooleanFunc booleanFuncPtr, Params &&... params) |
template<typename ... Params> | |
bool | TrueFunc (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFunc (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncU8 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncI32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseInputFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseInputFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseOutputFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseOutputFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
void | CopyToOutputTensor (const Tensor &outputTensor, ITensorHandle *outputTensorHandle) |
const armnn::ConstTensor | GetInputTensor (const LayerBindingId layerId, const InputTensors &inputTensors) |
const armnn::Tensor | GetOutputTensor (const LayerBindingId layerId, const OutputTensors &outputTensors) |
template<LogSeverity Level> | |
void | SetLoggingSinks (bool standardOut, bool debugOut, bool coloured) |
void | ReportError (const std::string &errorMessage, Optional< std::vector< std::string > &> errorMessages) |
void | ReportWarning (const std::string &warningMessage, Optional< std::vector< std::string > &> warningMessages) |
OptimizationResult | ReturnWithError (OptimizationResult res, const Layer *layer, const BackendSettings &backendSettings, Optional< std::vector< std::string > &> errMessages) |
bool | CheckScaleSetOnQuantizedType (Layer *layer, Optional< std::vector< std::string > &> errMessages) |
template<typename LayerT > | |
LayerT * | ConvertBf16ToFp32Weight (Layer *l) |
OptimizationResult | AttemptBackendAssignment (BackendSettings &backendSettings, Graph &graph, Layer *layer, BackendId backend, DataType dataTypeIn, DataType dataTypeOut, const std::vector< BackendId > &availablePreferredBackends, std::string &reasonIfUnsupported, Optional< std::vector< std::string > &> errMessages) |
void | AssignBackendsIConnectable (OptimizedNetworkImpl *optNetObjPtr, IConnectableLayer *it, Optional< std::vector< std::string > &> errMessages, OptimizationResult &result, BackendSettings &backendSettings, std::vector< BackendId > &availablePreferredBackends) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, Graph::Iterator &firstLayer, Graph::Iterator &lastLayer, Optional< std::vector< std::string > &> errMessages) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, SubgraphView::IConnectableLayerIterator &firstLayer, SubgraphView::IConnectableLayerIterator &lastLayer, Optional< std::vector< std::string > &> errMessages) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, SubgraphView &subgraph, Optional< std::vector< std::string > &> errMessages) |
BackendsMap | CreateSupportedBackends (TensorHandleFactoryRegistry &handleFactoryRegistry, BackendSettings &backendSettings) |
OptimizationResult | ApplyBackendOptimizations (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, BackendsMap &backends, const ModelOptions &modelOptions, Optional< std::vector< std::string > &> errMessages) |
bool | RequiresCopy (ITensorHandleFactory::FactoryId src, ITensorHandleFactory::FactoryId dst, TensorHandleFactoryRegistry ®istry) |
ITensorHandleFactory::FactoryId | CalculateSlotOptionForInput (BackendsMap &backends, OutputSlot &slot, TensorHandleFactoryRegistry ®istry, bool importEnabled) |
ITensorHandleFactory::FactoryId | CalculateSlotOptionForOutput (BackendsMap &backends, OutputSlot &slot, TensorHandleFactoryRegistry ®istry) |
ITensorHandleFactory::FactoryId | CalculateSlotOption (BackendsMap &backends, OutputSlot &outputSlot, TensorHandleFactoryRegistry ®istry, bool importEnabled) |
EdgeStrategy | CalculateEdgeStrategy (BackendsMap &backends, ITensorHandleFactory::FactoryId srcFactoryId, const Layer &layer, const Layer &connectedLayer, TensorHandleFactoryRegistry ®istry, bool importEnabled) |
OptimizationResult | SelectTensorHandleStrategy (Graph &optGraph, BackendsMap &backends, TensorHandleFactoryRegistry ®istry, bool importEnabled, Optional< std::vector< std::string > &> errMessages) |
std::vector< ConvertBf16ToFp32Layer * > | InsertConvertBf16ToFp32LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp32ToBf16Layer * > | InsertConvertFp32ToBf16LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp16ToFp32Layer * > | InsertConvertFp16ToFp32LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp32ToBf16Layer * > | InsertConvertFp32ToBf16LayersAfter (Graph &graph, Layer &layer) |
std::vector< ConvertFp32ToFp16Layer * > | InsertConvertFp32ToFp16LayersAfter (Graph &graph, Layer &layer) |
std::vector< DebugLayer * > | InsertDebugLayerAfter (Graph &graph, Layer &layer) |
template<typename T > | |
void | Append (Optimizer::Optimizations &optimizations, T &&optimization) |
template<typename Front , typename... Others> | |
void | Append (Optimizer::Optimizations &optimizations, Front &&front, Others &&... others) |
template<typename... Args> | |
Optimizer::Optimizations | MakeOptimizations (Args &&... args) |
Measurement | FindMeasurement (const std::string &name, const Event *event) |
std::vector< Measurement > | FindKernelMeasurements (const Event *event) |
const Event * | GetEventPtr (const Event *ptr) |
const Event * | GetEventPtr (const std::unique_ptr< Event > &ptr) |
int | CalcLevel (const Event *eventPtr) |
void | ConfigureDetailsObject (JsonChildObject &detailsObject, std::string layerDetailsStr) |
void | ExtractJsonObjects (unsigned int inferenceIndex, const Event *parentEvent, JsonChildObject &parentObject, std::map< const Event *, std::vector< const Event *>> descendantsMap) |
template<typename DescriptorType > | |
void | ProfilingUpdateDescriptions (const std::string &name, const DescriptorType &desc, const WorkloadInfo &infos, const arm::pipe::ProfilingGuid guid) |
template<typename Delegate > | |
void | ForEachLayerInput (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo, Delegate function) |
template<typename Delegate > | |
void | ForEachLayerOutput (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo, Delegate function) |
void | AssignSplitId (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo) |
bool | IsReadyForSplitAssignment (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo) |
TEST_SUITE ("TestConstTensorLayerVisitor") | |
size_t | GetProfilerEventSequenceSize (armnn::IProfiler *profiler) |
void | RuntimeLoadedNetworksReserve (armnn::RuntimeImpl *runtime) |
TEST_SUITE ("TestInputOutputLayerVisitor") | |
void | CheckLayerBindingId (LayerBindingId visitorId, LayerBindingId id) |
bool | IsLayerSupported (const armnn::Layer *layer) |
bool | IsLayerSupported (const armnn::Layer &layer) |
bool | IsLayerOptimizable (const armnn::Layer *layer) |
bool | IsLayerOptimizable (const armnn::Layer &layer) |
constexpr const char * | MockTensorHandleFactoryId () |
Graph & | GetGraphForTesting (IOptimizedNetwork *optNet) |
ModelOptions & | GetModelOptionsForTesting (IOptimizedNetwork *optNet) |
arm::pipe::IProfilingService & | GetProfilingService (armnn::RuntimeImpl *runtime) |
std::ostream & | operator<< (std::ostream &os, const BFloat16 &b) |
void | ReportUntouchedLayers (OptimizationViews &optimizationViews, std::map< LayerGuid, Layer *> untouched) |
template<typename LayerType > | |
LayerType * | FuseLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, LayerType *replacementLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc) |
template<typename LayerType > | |
LayerType * | FuseAdditionLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseSubtractionLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseDivisionLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseMultiplicationLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseBatchNormalizationLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseConvolution2dLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseDepthwiseConvolution2dLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseFullyConnectedLayer (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
std::vector< IConnectableLayer * > | ChainReduceLayers (OptimizationViews &optimizationViews, LayerType *baseLayer, ReduceDescriptor &desc) |
template<typename LayerType > | |
void | ReplaceLayers (OptimizationViews &optimizationViews, LayerType *baseLayer, std::vector< IConnectableLayer *> &layers) |
arm_compute::NormalizationLayerInfo | CreateAclNormalizationLayerInfoForL2Normalization (const armnn::TensorInfo &tensorInfo, armnn::DataLayout dataLayout) |
arm_compute::ActivationLayerInfo::ActivationFunction | ConvertActivationFunctionToAclActivationFunction (ActivationFunction armnnFunction) |
arm_compute::ActivationLayerInfo | ConvertActivationDescriptorToAclActivationLayerInfo (const ActivationDescriptor &actDesc) |
arm_compute::ActivationLayerInfo | ConvertActivationDescriptorToAclActivationLayerInfo (const ActivationDescriptor *activationDescPtr) |
arm_compute::ActivationLayerInfo | ConvertAdditionalInfoToAclActivationLayerInfo (const QueueDescriptor &queueDescriptor) |
arm_compute::ActivationLayerInfo | ConvertLstmActivationFuncToAclLayerInfo (uint32_t activationFunction) |
arm_compute::ComparisonOperation | ConvertComparisonOperationToAcl (const ComparisonDescriptor &descriptor) |
arm_compute::PoolingType | ConvertPoolingAlgorithmToAclPoolingType (PoolingAlgorithm poolingAlgorithm) |
arm_compute::DimensionRoundingType | ConvertOutputShapeRoundingToAclDimensionRoundingType (OutputShapeRounding rounding) |
arm_compute::NormType | ConvertNormalizationAlgorithmChannelToAclNormType (NormalizationAlgorithmChannel channelType) |
arm_compute::FullyConnectedLayerInfo | ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo (const FullyConnectedDescriptor &fullyConnectedDesc, const ActivationDescriptor *activationDesc) |
arm_compute::FullyConnectedLayerInfo | ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo (const FullyConnectedDescriptor &fullyConnectedDesc, arm_compute::ActivationLayerInfo activationLayerInfo) |
arm_compute::InterpolationPolicy | ConvertResizeMethodToAclInterpolationPolicy (ResizeMethod resizeMethod) |
template<typename T > | |
T | ComputeSoftmaxAclAxis (const SoftmaxDescriptor &softmaxDesc, const armnn::TensorInfo &tensor) |
std::set< unsigned int > | ComputeSplitAxis (const armnn::SplitterDescriptor &desc, const TensorShape &input) |
int | ComputeAclAxis (const int &armnnAxis, const armnn::TensorInfo &tensor) |
Function to convert an ArmNN axis (counted left to right) to an ACL axis (counted right to left), ranging over [-rank, rank). More... | |
unsigned int | ComputePositiveAxis (const int &axis, const armnn::TensorInfo &tensor) |
Function to convert axis to its positive equivalent value. More... | |
arm_compute::Conv3dInfo | ComputeConv3DInfo (const armnn::Convolution3dDescriptor descriptor, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
Utility function used to set up an arm_compute::Conv3dInfo object from a Convolution3d descriptor. More... | |
arm_compute::Conv3dInfo | ComputeConv3DInfo (const armnn::Convolution3dQueueDescriptor queueDescriptor, bool isFastMathEnabled) |
arm_compute::PaddingMode | ConvertPaddingModeToAcl (const PaddingMode &paddingMode) |
arm_compute::ReductionOperation | ConvertReductionOperationToAcl (const ReduceDescriptor &descriptor) |
const TensorInfo | ComputeReductionTensorShape (const armnn::TensorInfo &input, const std::vector< uint32_t > &vAxis, const bool keepDims) |
Function to compute the output tensor shape based on the axes and whether keepDims is set. More... | |
armnn::Optional< armnn::DataType > | GetBiasTypeFromWeightsType (armnn::Optional< armnn::DataType > weightsType) |
template<typename F > | |
bool | CheckSupportRule (F rule, Optional< std::string &> reasonIfUnsupported, const char *reason) |
template<typename T > | |
bool | AllTypesAreEqualImpl (T) |
template<typename T , typename... Rest> | |
bool | AllTypesAreEqualImpl (T t1, T t2, Rest... rest) |
std::unique_ptr< IMemoryOptimizerStrategy > | GetMemoryOptimizerStrategy (const std::string &strategyName) |
const std::vector< std::string > | GetMemoryOptimizerStrategyNames () |
TEST_SUITE ("MemoryManagerTests") | |
constexpr const char * | MockImportBackendId () |
constexpr const char * | MockBackendId () |
armnn::ConstTensor | PermuteTensor (const ConstTensorHandle *tensor, const PermutationVector &permutationVector, void *permuteBuffer) |
void | ReshapeWeightsForAcl (TensorInfo &weightInfo, DataLayout dataLayout) |
template<typename DataType > | |
ConstTensor | ReorderWeightChannelsForAcl (const ConstTensor &weightHandle, DataLayout dataLayout, void *permuteBuffer) |
TensorInfo | ConvertWeightTensorInfoFromArmnnToAcl (const TensorInfo &weightInfo, DataLayout dataLayout) |
std::tuple< ConstTensor, unsigned int > | Convert1HWOTensorToAcl (const ConstTensorHandle *weightTensor, const TensorInfo &inputInfo, const DataLayout dataLayout, void *permuteBuffer) |
Weights for depthwise convolution have a data layout of [1,H,W,O] = [1,H,W,I*M]. This function converts a ConstTensorHandle from [1,H,W,I*M] to [1,I*M,H,W] (if NCHW) or keeps it at [1,H,W,I*M] (if NHWC), as required by the compute library. More... | |
std::tuple< TensorInfo, unsigned int > | Convert1HWOTensorInfoToAcl (const TensorInfo &weightInfo, const TensorInfo &inputInfo, const DataLayout dataLayout) |
Weights for depthwise convolution have a data layout of [1,H,W,O] = [1,H,W,I*M]. This function converts a TensorInfo from [1,H,W,I*M] to [1,I*M,H,W] (if NCHW) or keeps it at [1,H,W,I*M] (if NHWC), as required by the compute library. Returns a tuple of the converted weights tensor info and the depth multiplier. More... | |
std::tuple< ConstTensor, unsigned int > | Convert1HWOtoMIHW (const ConstTensorHandle *weightTensor, const TensorInfo &inputInfo, const DataLayout &dataLayout, void *permuteBuffer) |
Converts a (weights) tensor from [1, H, W, I*M] = [1, H, W, O] to [M, I, H, W]. More... | |
armnn::ConstTensor | ConvertWeightTensorFromArmnnToAcl (const ConstTensorHandle *weightTensor, DataLayout dataLayout, void *permuteBuffer) |
int32_t | ConvertMaskToACLFormat (int32_t mask, int32_t numDim) |
std::map< std::string, unsigned int > | CalculateGatherNdKeyIndices (TensorInfo inputInfo0, TensorInfo inputInfo1) |
Calculates the key index values needed for GatherNd: N, ND, K, W, C (N is always 1). More... | |
template<typename CopyFunc > | |
void | CopyTensorContentsGeneric (const ITensorHandle *srcTensor, ITensorHandle *dstTensor, CopyFunc copy) |
template<typename SrcTensorHandleType , typename DstTensorHandleType , typename DescriptorType > | |
void | GatherTensorHandlePairs (const DescriptorType &descriptor, std::vector< std::pair< SrcTensorHandleType *, DstTensorHandleType *>> &tensorHandlePairs) |
std::string | LowerString (std::string value) |
TuningLevel | ParseTuningLevel (const BackendOptions::Var &value, TuningLevel defaultValue) |
bool | ParseBoolean (const BackendOptions::Var &value, bool defaultValue) |
std::string | ParseFile (const BackendOptions::Var &value, std::string defaultValue) |
void | ConfigureTuner (arm_compute::CLTuner &tuner, TuningLevel level) |
constexpr const char * | ClBackendId () |
flatbuffers::Offset< ClContext > | CreateClContext (flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset< flatbuffers::Vector< flatbuffers::Offset< armnn::Program >>> programs=0) |
flatbuffers::Offset< ClContext > | CreateClContextDirect (flatbuffers::FlatBufferBuilder &_fbb, const std::vector< flatbuffers::Offset< armnn::Program >> *programs=nullptr) |
flatbuffers::Offset< Program > | CreateProgram (flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset< flatbuffers::String > name=0, flatbuffers::Offset< flatbuffers::Vector< uint8_t >> binary=0) |
flatbuffers::Offset< Program > | CreateProgramDirect (flatbuffers::FlatBufferBuilder &_fbb, const char *name=nullptr, const std::vector< uint8_t > *binary=nullptr) |
const armnn::ClContext * | GetClContext (const void *buf) |
const armnn::ClContext * | GetSizePrefixedClContext (const void *buf) |
const char * | ClContextIdentifier () |
bool | ClContextBufferHasIdentifier (const void *buf) |
bool | VerifyClContextBuffer (flatbuffers::Verifier &verifier) |
bool | VerifySizePrefixedClContextBuffer (flatbuffers::Verifier &verifier) |
const char * | ClContextExtension () |
void | FinishClContextBuffer (flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset< armnn::ClContext > root) |
void | FinishSizePrefixedClContextBuffer (flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset< armnn::ClContext > root) |
constexpr const char * | ClImportTensorHandleFactoryId () |
constexpr const char * | ClTensorHandleFactoryId () |
arm_compute::Status | ClAbsWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClActivationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor) |
arm_compute::Status | ClAdditionValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClArgMinMaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor) |
arm_compute::Status | ClBatchNormalizationValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClBatchToSpaceNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &descriptor) |
arm_compute::Status | ClCastValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClChannelShuffleValidate (const TensorInfo &input, const TensorInfo &output, const ChannelShuffleDescriptor &descriptor) |
arm_compute::Status | ClComparisonWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ComparisonDescriptor &descriptor) |
arm_compute::Status | ClConcatWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const OriginsDescriptor &descriptor) |
arm_compute::Status | ClConstantWorkloadValidate (const TensorInfo &output) |
arm_compute::Status | ClConvertFp16ToFp32WorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClConvertFp32ToFp16WorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClConvolution3dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution3dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClDepthToSpaceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthToSpaceDescriptor &descriptor) |
arm_compute::Status | ClDepthwiseConvolutionWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClDequantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClDivisionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClExpWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClFloorWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClFullyConnectedWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const Optional< TensorInfo > &biases, const FullyConnectedDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClGatherNdWorkloadValidate (const TensorInfo ¶msInfo, const TensorInfo &indicesInfo, const TensorInfo &outputInfo) |
arm_compute::Status | ClGatherWorkloadValidate (const TensorInfo &input, const TensorInfo &indices, const TensorInfo &output, const GatherDescriptor &descriptor) |
arm_compute::Status | ClInstanceNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const InstanceNormalizationDescriptor &descriptor) |
arm_compute::Status | ClL2NormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor) |
arm_compute::Status | ClLogicalAndWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClLogicalNotWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClLogicalOrWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClLogSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const LogSoftmaxDescriptor &descriptor) |
arm_compute::Status | ClLogWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo ¶msInfo) |
arm_compute::Status | ClMaximumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClMeanValidate (const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &descriptor) |
arm_compute::Status | ClMinimumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClMultiplicationWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClNegWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor) |
arm_compute::Status | ClPadValidate (const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor) |
arm_compute::Status | ClPermuteWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor) |
arm_compute::Status | ClPooling2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor) |
arm_compute::Status | ClPooling3dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling3dDescriptor &descriptor) |
arm_compute::Status | ClPreluWorkloadValidate (const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output) |
arm_compute::Status | ClQLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo ¶msInfo) |
arm_compute::Status | ClQuantizedLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &previousCellStateIn, const TensorInfo &previousOutputIn, const TensorInfo &cellStateOut, const TensorInfo &output, const QuantizedLstmInputParamsInfo ¶msInfo) |
arm_compute::Status | ClQuantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClReduceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &descriptor) |
arm_compute::Status | ClReshapeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClResizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor) |
arm_compute::Status | ClRsqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClSinWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SliceDescriptor &descriptor) |
arm_compute::Status | ClSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor) |
arm_compute::Status | ClSpaceToBatchNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor) |
arm_compute::Status | ClSpaceToDepthWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor) |
arm_compute::Status | ClSplitterWorkloadValidate (const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, unsigned int splitAxis) |
arm_compute::Status | ClSqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClStackWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const StackDescriptor &descriptor) |
arm_compute::Status | ClStridedSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor) |
arm_compute::Status | ClSubtractionValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClTransposeConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases) |
arm_compute::Status | ClTransposeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeDescriptor &descriptor) |
arm_compute::Status | ClUnidirectionalSequenceLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &output, const Optional< TensorInfo > &hiddenStateOutput, const Optional< TensorInfo > &cellStateOutput, const UnidirectionalSequenceLstmDescriptor &descriptor, const LstmInputParamsInfo ¶msInfo) |
std::string | GetConvolutionMethodString (arm_compute::ConvolutionMethod &convolutionMethod) |
template<typename T > | |
void | CopyArmComputeClTensorData (arm_compute::CLTensor &dstTensor, const T *srcData) |
auto | SetClStridedSliceData (const std::vector< int > &m_begin, const std::vector< int > &m_end, const std::vector< int > &m_stride) |
auto | SetClSliceData (const std::vector< unsigned int > &m_begin, const std::vector< unsigned int > &m_size) |
void | InitializeArmComputeClTensorData (arm_compute::CLTensor &clTensor, const ConstTensorHandle *handle) |
RuntimeException | WrapClError (const cl::Error &clError, const CheckLocation &location) |
void | RunClFunction (arm_compute::IFunction &function, const CheckLocation &location) |
template<typename DataType , typename PayloadType > | |
DataType * | GetOutputTensorData (unsigned int idx, const PayloadType &data) |
constexpr const char * | NeonBackendId () |
constexpr const char * | NeonTensorHandleFactoryId () |
arm_compute::Status | NeonAbsWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonActivationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor) |
arm_compute::Status | NeonAdditionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonArgMinMaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor) |
arm_compute::Status | NeonBatchNormalizationValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonBatchToSpaceNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &descriptor) |
arm_compute::Status | NeonCastValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonChannelShuffleValidate (const TensorInfo &input, const TensorInfo &output, const ChannelShuffleDescriptor &descriptor) |
arm_compute::Status | NeonComparisonWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ComparisonDescriptor &descriptor) |
arm_compute::Status | NeonConcatWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const OriginsDescriptor &descriptor) |
arm_compute::Status | NeonConstantWorkloadValidate (const TensorInfo &output) |
arm_compute::Status | NeonConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonConvolution3dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution3dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonDepthToSpaceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthToSpaceDescriptor &descriptor) |
arm_compute::Status | NeonDepthwiseConvolutionWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonDequantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::DetectionPostProcessLayerInfo | MakeInfo (const DetectionPostProcessDescriptor &descriptor) |
arm_compute::Status | NeonDetectionPostProcessValidate (const TensorInfo &boxEncodings, const TensorInfo &scores, const TensorInfo &anchors, const TensorInfo &detectionBoxes, const TensorInfo &detectionClasses, const TensorInfo &detectionScores, const TensorInfo &numDetections, const DetectionPostProcessDescriptor &descriptor) |
arm_compute::Status | NeonDivisionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonExpWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonFullyConnectedWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const Optional< TensorInfo > &biases, const FullyConnectedDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonGatherNdWorkloadValidate (const TensorInfo &paramsInfo, const TensorInfo &indicesInfo, const TensorInfo &outputInfo) |
arm_compute::Status | NeonGatherWorkloadValidate (const TensorInfo &input, const TensorInfo &indices, const TensorInfo &output, const GatherDescriptor &descriptor) |
arm_compute::Status | NeonInstanceNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const InstanceNormalizationDescriptor &descriptor) |
arm_compute::Status | NeonL2NormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor) |
arm_compute::Status | NeonLogicalAndWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonLogicalNotWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonLogicalOrWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonLogSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const LogSoftmaxDescriptor &descriptor) |
arm_compute::Status | NeonLogWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | NeonMaximumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonMeanWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &descriptor) |
arm_compute::Status | NeonMinimumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
Validates the inputs and output. More... | |
arm_compute::Status | NeonMultiplicationWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonNegWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor) |
arm_compute::Status | NeonPadWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor) |
arm_compute::Status | NeonPermuteWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor) |
arm_compute::Status | NeonPooling2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor) |
arm_compute::Status | NeonPooling3dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling3dDescriptor &descriptor) |
arm_compute::Status | NeonPreluWorkloadValidate (const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output) |
arm_compute::Status | NeonQLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | NeonQuantizedLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const QuantizedLstmInputParamsInfo &paramsInfo) |
arm_compute::Status | NeonQuantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonReduceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &descriptor) |
arm_compute::Status | NeonReshapeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonResizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor) |
arm_compute::Status | NeonRsqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonSinWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SliceDescriptor &descriptor) |
arm_compute::Status | NeonSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor) |
arm_compute::Status | NeonSpaceToBatchNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor) |
arm_compute::Status | NeonSpaceToDepthWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor) |
arm_compute::Status | NeonSplitterWorkloadValidate (const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, unsigned int splitAxis) |
arm_compute::Status | NeonSqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonStackWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const StackDescriptor &descriptor) |
arm_compute::Status | NeonStridedSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor) |
arm_compute::Status | NeonSubtractionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonTransposeConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases) |
arm_compute::Status | NeonTransposeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeDescriptor &descriptor) |
arm_compute::Status | NeonUnidirectionalSequenceLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const UnidirectionalSequenceLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | NeonUnidirectionalSequenceLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const UnidirectionalSequenceLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
template<typename T > | |
void | CopyArmComputeTensorData (arm_compute::Tensor &dstTensor, const T *srcData) |
void | InitializeArmComputeTensorData (arm_compute::Tensor &tensor, const ConstTensorHandle *handle) |
auto | SetNeonStridedSliceData (const std::vector< int > &m_begin, const std::vector< int > &m_end, const std::vector< int > &m_stride) |
auto | SetNeonSliceData (const std::vector< unsigned int > &m_begin, const std::vector< unsigned int > &m_size) |
constexpr const char * | RefBackendId () |
constexpr const char * | RefTensorHandleFactoryId () |
template<DataType ArmnnType> | |
bool | IsDataType (const WorkloadInfo &info) |
bool | IsSigned32 (const WorkloadInfo &info) |
bool | IsBFloat16 (const WorkloadInfo &info) |
bool | IsFloat16 (const WorkloadInfo &info) |
bool | IsQSymmS16 (const WorkloadInfo &info) |
bool | IsQSymmS8 (const WorkloadInfo &info) |
bool | IsQAsymmS8 (const WorkloadInfo &info) |
bool | IsQAsymmU8 (const WorkloadInfo &info) |
template<typename QueueDescriptorType > | |
constexpr bool | IsOperationQueueDescriptor (const QueueDescriptorType &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const MemCopyQueueDescriptor &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const ConstantQueueDescriptor &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const PermuteQueueDescriptor &) |
float | Activation (float in, ActivationFunction function, float a, float b) |
void | Activation (Decoder< float > &in, Encoder< float > &out, const TensorInfo &tensorInfo, ActivationFunction function, float a, float b) |
template<typename OUT > | |
void | ArgMinMax (Decoder< float > &in, OUT *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
template void | ArgMinMax (Decoder< float > &in, int32_t *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
template void | ArgMinMax (Decoder< float > &in, int64_t *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
void | BatchNormImpl (const BatchNormalizationQueueDescriptor &data, Decoder< float > &meanDecoder, Decoder< float > &varianceDecoder, Decoder< float > &betaDecoder, Decoder< float > &gammaDecoder, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
unsigned int | Offset (const TensorShape &shape, unsigned int batch, unsigned int height, unsigned int width, unsigned int channels, const DataLayoutIndexed &dataLayout) |
void | BatchToSpaceNd (const DataLayoutIndexed &dataLayout, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, const std::vector< unsigned int > &blockShape, const std::vector< std::pair< unsigned int, unsigned int >> &cropsData, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
void | Concatenate (const ConcatQueueDescriptor &data, std::vector< ITensorHandle *> inputs, std::vector< ITensorHandle *> outputs) |
void | Convolve3d (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rFilterShape, Decoder< float > &rFilterDecoder, bool biasEnabled, Decoder< float > *pBiasDecoder, DataLayout dataLayout, unsigned int paddingTop, unsigned int paddingLeft, unsigned int paddingFront, unsigned int xStride, unsigned int yStride, unsigned int zStride, unsigned int xDilation, unsigned int yDilation, unsigned int zDilation) |
void | Convolve (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rFilterShape, Decoder< float > &rFilterDecoder, bool biasEnabled, Decoder< float > *pBiasDecoder, DataLayout dataLayout, unsigned int paddingTop, unsigned int paddingLeft, unsigned int xStride, unsigned int yStride, unsigned int xDilation, unsigned int yDilation, bool depthwise) |
template<typename T > | |
void | Debug (const TensorInfo &inputInfo, const T *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< BFloat16 > (const TensorInfo &inputInfo, const BFloat16 *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< Half > (const TensorInfo &inputInfo, const Half *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< float > (const TensorInfo &inputInfo, const float *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< uint8_t > (const TensorInfo &inputInfo, const uint8_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int8_t > (const TensorInfo &inputInfo, const int8_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int16_t > (const TensorInfo &inputInfo, const int16_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int32_t > (const TensorInfo &inputInfo, const int32_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template<typename T > | |
std::unique_ptr< Decoder< T > > | MakeDecoder (const TensorInfo &info, const void *data=nullptr) |
template<> | |
std::unique_ptr< Decoder< float > > | MakeDecoder (const TensorInfo &info, const void *data) |
template<> | |
std::unique_ptr< Decoder< bool > > | MakeDecoder (const TensorInfo &info, const void *data) |
template<> | |
std::unique_ptr< Decoder< int32_t > > | MakeDecoder (const TensorInfo &info, const void *data) |
void | DepthToSpace (const TensorInfo &inputInfo, const DepthToSpaceDescriptor &descriptor, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | Dequantize (Decoder< float > &inputDecoder, Encoder< float > &outputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo) |
std::vector< unsigned int > | GenerateRangeK (unsigned int k) |
void | TopKSort (unsigned int k, unsigned int *indices, const float *values, unsigned int numElement) |
float | IntersectionOverUnion (const float *boxI, const float *boxJ) |
std::vector< unsigned int > | NonMaxSuppression (unsigned int numBoxes, const std::vector< float > &boxCorners, const std::vector< float > &scores, float nmsScoreThreshold, unsigned int maxDetection, float nmsIouThreshold) |
void | AllocateOutputData (unsigned int numOutput, unsigned int numSelected, const std::vector< float > &boxCorners, const std::vector< unsigned int > &outputIndices, const std::vector< unsigned int > &selectedBoxes, const std::vector< unsigned int > &selectedClasses, const std::vector< float > &selectedScores, float *detectionBoxes, float *detectionScores, float *detectionClasses, float *numDetections) |
void | DetectionPostProcess (const TensorInfo &boxEncodingsInfo, const TensorInfo &scoresInfo, const TensorInfo &anchorsInfo, const TensorInfo &detectionBoxesInfo, const TensorInfo &detectionClassesInfo, const TensorInfo &detectionScoresInfo, const TensorInfo &numDetectionsInfo, const DetectionPostProcessDescriptor &desc, Decoder< float > &boxEncodings, Decoder< float > &scores, Decoder< float > &anchors, float *detectionBoxes, float *detectionClasses, float *detectionScores, float *numDetections) |
template<typename T > | |
std::unique_ptr< Encoder< T > > | MakeEncoder (const TensorInfo &info, void *data=nullptr) |
template<> | |
std::unique_ptr< Encoder< float > > | MakeEncoder (const TensorInfo &info, void *data) |
template<> | |
std::unique_ptr< Encoder< bool > > | MakeEncoder (const TensorInfo &info, void *data) |
template<> | |
std::unique_ptr< Encoder< int32_t > > | MakeEncoder (const TensorInfo &info, void *data) |
void | Fill (Encoder< float > &output, const TensorShape &desiredOutputShape, const float value) |
Creates a tensor and fills it with a scalar value. More... | |
void | FullyConnected (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rWeightsShape, Decoder< float > &rWeightDecoder, Decoder< float > *rBiasDecoder, bool biasEnabled, unsigned int K, bool transposeWeights) |
Performs a matrix multiplication and optionally adds a bias. More... | |
void | Gather (const TensorInfo &paramsInfo, const TensorInfo &indicesInfo, const TensorInfo &outputInfo, Decoder< float > &params, const int32_t *indices, Encoder< float > &output, const int32_t axis) |
void | InstanceNorm (const InstanceNormalizationQueueDescriptor &data, const TensorInfo &inputInfo, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
void | LogSoftmax (Decoder< float > &input, Encoder< float > &output, const TensorInfo &inputInfo, const LogSoftmaxDescriptor &descriptor) |
void | LstmImpl (const LstmDescriptor &descriptor, const TensorInfo &inputInfo, const TensorInfo &outputInfo, const TensorShape &inputToOutputWeightsShape, const TensorShape &recurrentToOutputWeightsShape, std::unique_ptr< Decoder< float >> &inputData, std::unique_ptr< Decoder< float >> &outputStateIn, std::unique_ptr< Decoder< float >> &cellStateIn, std::unique_ptr< Encoder< float >> &outputStateOut, std::unique_ptr< Encoder< float >> &cellStateOut, std::unique_ptr< Encoder< float >> &output, std::unique_ptr< Decoder< float >> &cellStateOutDecoder, std::unique_ptr< Decoder< float >> &outputDecoder, std::unique_ptr< Decoder< float >> &inputToInputWeightsTensor, std::unique_ptr< Decoder< float >> &inputToForgetWeightsTensor, std::unique_ptr< Decoder< float >> &inputToCellWeightsTensor, std::unique_ptr< Decoder< float >> &inputToOutputWeightsTensor, std::unique_ptr< Decoder< float >> &recurrentToInputWeightsTensor, std::unique_ptr< Decoder< float >> &recurrentToForgetWeightsTensor, std::unique_ptr< Decoder< float >> &recurrentToCellWeightsTensor, std::unique_ptr< Decoder< float >> &recurrentToOutputWeightsTensor, std::unique_ptr< Decoder< float >> &cellToInputWeightsTensor, std::unique_ptr< Decoder< float >> &cellToForgetWeightsTensor, std::unique_ptr< Decoder< float >> &cellToOutputWeightsTensor, std::unique_ptr< Decoder< float >> &inputGateBiasTensor, std::unique_ptr< Decoder< float >> &forgetGateBiasTensor, std::unique_ptr< Decoder< float >> &cellBiasTensor, std::unique_ptr< Decoder< float >> &outputGateBiasTensor, std::unique_ptr< Decoder< float >> &projectionWeightsTensor, std::unique_ptr< Decoder< float >> &projectionBiasTensor, std::unique_ptr< Decoder< float >> &inputLayerNormWeights, std::unique_ptr< Decoder< float >> &forgetLayerNormWeights, std::unique_ptr< Decoder< float >> &cellLayerNormWeights, std::unique_ptr< Decoder< float >> &outputLayerNormWeights, std::unique_ptr< Encoder< float >> &inputGateScratch, std::unique_ptr< Encoder< float >> &cellScratch, std::unique_ptr< Encoder< float >> &forgetGateScratch, std::unique_ptr< Encoder< float >> &outputGateScratch, std::unique_ptr< Decoder< float >> &inputGateScratchDecoder, std::unique_ptr< Decoder< float >> &cellScratchDecoder, std::unique_ptr< Decoder< float >> &forgetGateScratchDecoder, std::unique_ptr< Decoder< float >> &outputGateScratchDecoder, float layerNormEpsilon) |
void | MirrorPad (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const ITensorHandle *inputHandle, ITensorHandle *outputHandle, const PadQueueDescriptor &data) |
void | Pad (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const ITensorHandle *inputHandle, ITensorHandle *outputHandle, const PadQueueDescriptor &data) |
void | Pooling2d (Decoder< float > &rInputDecoder, Encoder< float > &rOutputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo, const Pooling2dDescriptor &params) |
Computes the Pooling2d operation. More... | |
void | Pooling3d (Decoder< float > &rInputDecoder, Encoder< float > &rOutputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo, const Pooling3dDescriptor &params) |
Computes the Pooling3d operation. More... | |
void | PreluImpl (const TensorInfo &inputInfo, const TensorInfo &alphaInfo, const TensorInfo &outputInfo, Decoder< float > &inputData, Decoder< float > &alphaData, Encoder< float > &outputData) |
bool | NextIndex (const unsigned int numDims, const armnn::TensorShape &dims, std::vector< unsigned int > &current) |
unsigned int | ReducedOutputOffset (const unsigned int numDims, const armnn::TensorShape &dims, std::vector< unsigned int > &index, const unsigned int numAxis, const std::vector< unsigned int > &axis) |
void | Reduce (const TensorInfo &inputInfo, const TensorInfo &outputInfo, Decoder< float > &input, Encoder< float > &output, const std::vector< uint32_t > axis, const ReduceOperation reduceOperation) |
void | FakeQuantization (const float *inputData, float *outputData, uint32_t numElements, float min, float max) |
unsigned int | GetNumActivations (const TensorInfo &inputInfo) |
const TensorInfo & | GetTensorInfo (const ITensorHandle *tensorHandle) |
float32 helpers More... | |
template<typename DataType , typename PayloadType > | |
const DataType * | GetInputTensorData (unsigned int idx, const PayloadType &data) |
template<typename DataType > | |
DataType * | GetOutputTensorData (ITensorHandle *tensorHandle) |
template<typename PayloadType > | |
const float * | GetInputTensorDataFloat (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
float * | GetOutputTensorDataFloat (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const Half * | GetInputTensorDataHalf (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
Half * | GetOutputTensorDataHalf (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const BFloat16 * | GetInputTensorDataBFloat16 (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
BFloat16 * | GetOutputTensorDataBFloat16 (unsigned int idx, const PayloadType &data) |
template<typename T > | |
std::vector< float > | Dequantize (const T *quant, const TensorInfo &info) |
u8 helpers More... | |
template<typename T > | |
void | Dequantize (const T *inputData, float *outputData, const TensorInfo &info) |
void | Quantize (uint8_t *quant, const float *dequant, const TensorInfo &info) |
void | Resize (Decoder< float > &in, const TensorInfo &inputInfo, Encoder< float > &out, const TensorInfo &outputInfo, DataLayoutIndexed dataLayout, armnn::ResizeMethod resizeMethod, bool alignCorners, bool halfPixelCenters) |
void | Slice (const TensorInfo &inputInfo, const SliceDescriptor &descriptor, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | Softmax (Decoder< float > &in, Encoder< float > &out, const TensorInfo &inputTensorInfo, float beta, int axis) |
Computes the softmax function on some inputs, into outputs, with a shape given by tensorInfo. More... | |
unsigned int | GetOffset (const TensorShape &shape, unsigned int b, unsigned int h, unsigned int w, unsigned int c, const DataLayoutIndexed &dataLayout) |
void | SpaceToBatchNd (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const SpaceToBatchNdDescriptor &params, Decoder< float > &inputData, Encoder< float > &outputData) |
void | SpaceToDepth (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const SpaceToDepthDescriptor &params, Decoder< float > &inputData, Encoder< float > &outputData) |
void | Split (const SplitterQueueDescriptor &data, std::vector< ITensorHandle *> inputs, std::vector< ITensorHandle *> outputs) |
template<typename DataType > | |
void | Splitter (const SplitterQueueDescriptor &data, std::vector< ITensorHandle *> inputs, std::vector< ITensorHandle *> outputs) |
void | Stack (const StackQueueDescriptor &data, std::vector< std::unique_ptr< Decoder< float >>> &inputs, Encoder< float > &output, const TensorInfo &inputInfo, const TensorInfo &outputInfo) |
void | StridedSlice (const TensorInfo &inputInfo, const StridedSliceDescriptor ¶ms, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | TransposeConvolution2dImpl (const TransposeConvolution2dDescriptor &descriptor, const TensorShape &inputShape, Decoder< float > &inputDecoder, const TensorShape &outputShape, Encoder< float > &outputEncoder, const TensorShape &weightsShape, Decoder< float > &weightsDecoder, Decoder< float > *biasesDecoder) |
std::istream & | operator>> (std::istream &in, armnn::Compute &compute) |
std::istream & | operator>> (std::istream &in, armnn::BackendId &backend) |
Variables | |
constexpr unsigned int | MaxNumOfTensorDimensions = 5U |
constexpr unsigned int | LOWEST_CAPTURE_PERIOD = 10000u |
The lowest performance data capture interval we support is 10 milliseconds. More... | |
constexpr unsigned int | EXPIRE_RATE = 3U |
Variable to control expire rate of priority queue. More... | |
constexpr std::size_t | g_ProfilingEventCountHint = 1024 |
constexpr bool | g_WriteProfilingEventSequence = true |
constexpr bool | g_AggregateProfilingEventsByInference = true |
constexpr bool | g_WriteReportToStdOutOnProfilerDestruction = false |
thread_local IProfiler * | tl_Profiler = nullptr |
constexpr size_t | wordSize = sizeof(size_t) * 8 |
const BackendCapabilities | gpuAccCapabilities ("GpuAcc", { {"NonConstWeights", false}, {"AsyncExecution", false}, {"ProtectedContentAllocation", true}, {"ConstantTensorsAsInputs", true}, {"PreImportIOTensors", false}, {"ExternallyManagedMemory", true}, {"MultiAxisPacking", false}, {"SingleAxisPacking", true} }) |
const BackendCapabilities | cpuAccCapabilities ("CpuAcc", { {"NonConstWeights", false}, {"AsyncExecution", false}, {"ProtectedContentAllocation", false}, {"ConstantTensorsAsInputs", true}, {"PreImportIOTensors", false}, {"ExternallyManagedMemory", true}, {"MultiAxisPacking", false}, {"SingleAxisPacking", true} }) |
const std::set< armnn::LayerType > | paddingRequiredLayers |
const BackendCapabilities | cpuRefCapabilities ("CpuRef", { {"NonConstWeights", true}, {"AsyncExecution", true}, {"ProtectedContentAllocation", false}, {"ConstantTensorsAsInputs", true}, {"PreImportIOTensors", true}, {"ExternallyManagedMemory", true}, {"MultiAxisPacking", false}, {"SingleAxisPacking", true} }) |
const std::set< armnn::BackendCapability > | oldCpuRefCapabilities |
Copyright (c) 2021 ARM Limited and Contributors.
All rights reserved.
SPDX-License-Identifier: MIT
Create pages for each tool so they appear nicely in the doxygen tree-view. Subpages are not listed there. Also we can overwrite the page name this way.
Note: The parser, serializer and deserializer pages are created in 01_parsers.dox or 02_deserializer_serializer.dox.
Optional is a drop-in replacement for std::optional until we migrate to C++17. Only the subset of the optional features that we intend to use in Arm NN is implemented. There are two distinct implementations here: 1, for normal constructible/destructible types; 2, for reference types. The std::optional features we support are:
using ACLMemManagerOnDemand = std::shared_ptr<arm_compute::MemoryManagerOnDemand> |
Definition at line 22 of file NeonFullyConnectedWorkload.cpp.
using AdditionalInfoObjectPtr = std::shared_ptr<void> |
using BackendCapabilities = BackendOptions |
Definition at line 19 of file BackendOptions.hpp.
using BackendIdSet = std::unordered_set<BackendId> |
Definition at line 193 of file BackendId.hpp.
using BackendIdVector = std::vector<BackendId> |
Definition at line 192 of file BackendId.hpp.
using BackendsMap = std::map<BackendId, std::unique_ptr<class IBackendInternal> > |
Definition at line 294 of file Network.hpp.
using BaseFloat32ComparisonWorkload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::Boolean> |
Definition at line 212 of file Workload.hpp.
using BaseUint8ComparisonWorkload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::QAsymmU8, armnn::DataType::Boolean> |
Definition at line 217 of file Workload.hpp.
using BFloat16ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::BFloat16, armnn::DataType::Float32> |
Definition at line 222 of file Workload.hpp.
using BindingPointInfo = std::pair<armnn::LayerBindingId, armnn::TensorInfo> |
Definition at line 274 of file Tensor.hpp.
Definition at line 207 of file Workload.hpp.
typedef std::function< void(const void *)> CompiledBlobDeleter |
Definition at line 244 of file INetwork.hpp.
typedef std::unique_ptr< void, CompiledBlobDeleter > CompiledBlobPtr |
Definition at line 245 of file INetwork.hpp.
using ConcatDescriptor = OriginsDescriptor |
Definition at line 55 of file DescriptorsFwd.hpp.
using Coordinates = std::array<unsigned int, MaxNumOfTensorDimensions> |
Definition at line 15 of file InternalTypes.hpp.
using CopyAndImportFactoryPairs = std::map<ITensorHandleFactory::FactoryId, ITensorHandleFactory::FactoryId> |
Definition at line 19 of file TensorHandleFactoryRegistry.hpp.
using DebugCallbackFunction = std::function<void(LayerGuid guid, unsigned int slotIndex, ITensorHandle* tensorHandle)> |
Define the type of callback for the Debug layer to call.
guid | - guid of layer connected to the input of the Debug layer |
slotIndex | - index of the output slot connected to the input of the Debug layer |
tensorHandle | - TensorHandle for the input tensor to the Debug layer |
A DepthToSpaceDescriptor for the DepthToSpaceLayer.
Definition at line 1080 of file Descriptors.hpp.
using Dimensions = std::array<unsigned int, MaxNumOfTensorDimensions> |
Definition at line 16 of file InternalTypes.hpp.
using DynamicBackendPtr = std::unique_ptr<DynamicBackend> |
Definition at line 52 of file DynamicBackend.hpp.
Definition at line 12 of file MockTensorHandleFactory.cpp.
using Float16ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float16, armnn::DataType::Float32> |
Definition at line 232 of file Workload.hpp.
using Float32ToBFloat16Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::BFloat16> |
Definition at line 227 of file Workload.hpp.
using Float32ToFloat16Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::Float16> |
Definition at line 237 of file Workload.hpp.
Definition at line 198 of file Workload.hpp.
using FloatWorkload = TypedWorkload<QueueDescriptor, armnn::DataType::Float16, armnn::DataType::Float32> |
Definition at line 195 of file Workload.hpp.
using HighResolutionClock = std::chrono::high_resolution_clock::time_point |
using IBackendContextUniquePtr = std::unique_ptr<IBackendContext> |
Definition at line 34 of file IBackendContext.hpp.
typedef std::unique_ptr< IBackendInternal > IBackendInternalUniquePtr |
Definition at line 32 of file BackendRegistry.hpp.
using IBackendSharedPtr = std::shared_ptr<IBackend> |
using IBackendUniquePtr = std::unique_ptr<IBackend, void(*)(IBackend* backend)> |
using IGpuAccTunedParametersPtr = std::shared_ptr<IGpuAccTunedParameters> |
The following API is replaced by the backend options API.
Definition at line 295 of file IRuntime.hpp.
using IInitialiseProfilingService = arm::pipe::IInitialiseProfilingService |
Definition at line 28 of file Runtime.hpp.
using ILayerSupportSharedPtr = std::shared_ptr<ILayerSupport> |
Definition at line 572 of file ILayerSupport.hpp.
using IMemoryManagerUniquePtr = std::unique_ptr<IMemoryManager> |
Definition at line 24 of file IMemoryManager.hpp.
using ImportedInputId = unsigned int |
using ImportedOutputId = unsigned int |
using INetworkPtr = std::unique_ptr<INetwork, void(*)(INetwork* network)> |
Definition at line 241 of file INetwork.hpp.
using InferenceTimingPair = std::pair<HighResolutionClock, HighResolutionClock> |
Definition at line 91 of file WorkloadData.hpp.
using InputTensors = std::vector<std::pair<LayerBindingId, class ConstTensor> > |
Definition at line 392 of file Tensor.hpp.
Deprecated: use ConstPassthroughTensorHandle instead.
Definition at line 255 of file TensorHandle.hpp.
Definition at line 204 of file Workload.hpp.
using IOptimizedNetworkPtr = std::unique_ptr<IOptimizedNetwork, void(*)(IOptimizedNetwork* network)> |
Definition at line 242 of file INetwork.hpp.
using IReportStructure = arm::pipe::IReportStructure |
Definition at line 27 of file Runtime.hpp.
using IRuntimePtr = std::unique_ptr<IRuntime, void(*)(IRuntime* runtime)> |
Definition at line 33 of file IRuntime.hpp.
using LayerBindingId = int |
using LayerPriority = unsigned int |
using LayerTypeOf = typename LayerTypeOfImpl<Type>::Type |
Definition at line 90 of file LayersFwd.hpp.
using LoadedNetworks = std::unordered_map<NetworkId, std::unique_ptr<LoadedNetwork> > |
Definition at line 26 of file Runtime.hpp.
A LogSoftmaxDescriptor for the LogSoftmaxLayer.
Definition at line 169 of file Descriptors.hpp.
using MemoryOptimizerStrategiesMapRef = std::unordered_map<BackendId, std::shared_ptr<IMemoryOptimizerStrategy> > |
Definition at line 33 of file BackendRegistry.hpp.
using MemorySourceFlags = unsigned int |
Definition at line 15 of file MemorySources.hpp.
using MergerDescriptor = OriginsDescriptor |
MergerDescriptor is deprecated, use ConcatDescriptor instead.
Definition at line 59 of file DescriptorsFwd.hpp.
Definition at line 149 of file WorkloadData.hpp.
using ModelOptions = std::vector<BackendOptions> |
Definition at line 18 of file BackendOptions.hpp.
typedef int NetworkId |
Definition at line 27 of file IRuntime.hpp.
using NetworkImplPtr = std::unique_ptr<NetworkImpl, void (*)(NetworkImpl* network)> |
Definition at line 28 of file Network.hpp.
using NetworkOptions = std::vector<BackendOptions> |
Definition at line 16 of file BackendOptions.hpp.
Definition at line 92 of file WorkloadData.hpp.
using OutputTensors = std::vector<std::pair<LayerBindingId, class Tensor> > |
Definition at line 393 of file Tensor.hpp.
using ParameterStringifyFunction = std::function<void(const std::string& name, const std::string& value)> |
Definition at line 14 of file SerializeLayerParameters.hpp.
using PreCompiledObjectDeleter = std::function<void(const void*)> |
Definition at line 19 of file PreCompiledLayer.hpp.
using PreCompiledObjectPtr = std::unique_ptr<void, PreCompiledObjectDeleter> |
Definition at line 20 of file PreCompiledLayer.hpp.
using RefAdditionWorkload = RefElementwiseWorkload<std::plus<DataType>, AdditionQueueDescriptor, StringMapping::RefAdditionWorkload_Execute> |
Definition at line 40 of file RefElementwiseWorkload.hpp.
Definition at line 42 of file RefDebugWorkload.hpp.
Definition at line 43 of file RefDebugWorkload.hpp.
Definition at line 44 of file RefDebugWorkload.hpp.
Definition at line 46 of file RefDebugWorkload.hpp.
Definition at line 45 of file RefDebugWorkload.hpp.
Definition at line 47 of file RefDebugWorkload.hpp.
Definition at line 48 of file RefDebugWorkload.hpp.
Definition at line 49 of file RefDebugWorkload.hpp.
using RefDivisionWorkload = RefElementwiseWorkload<std::divides<DataType>, DivisionQueueDescriptor, StringMapping::RefDivisionWorkload_Execute> |
Definition at line 58 of file RefElementwiseWorkload.hpp.
using RefMaximumWorkload = RefElementwiseWorkload<armnn::maximum<DataType>, MaximumQueueDescriptor, StringMapping::RefMaximumWorkload_Execute> |
Definition at line 64 of file RefElementwiseWorkload.hpp.
using RefMinimumWorkload = RefElementwiseWorkload<armnn::minimum<DataType>, MinimumQueueDescriptor, StringMapping::RefMinimumWorkload_Execute> |
Definition at line 70 of file RefElementwiseWorkload.hpp.
using RefMultiplicationWorkload = RefElementwiseWorkload<std::multiplies<DataType>, MultiplicationQueueDescriptor, StringMapping::RefMultiplicationWorkload_Execute> |
Definition at line 52 of file RefElementwiseWorkload.hpp.
Definition at line 33 of file RefPermuteWorkload.hpp.
Definition at line 34 of file RefPermuteWorkload.hpp.
Definition at line 35 of file RefPermuteWorkload.hpp.
Definition at line 37 of file RefPermuteWorkload.hpp.
Definition at line 36 of file RefPermuteWorkload.hpp.
Definition at line 38 of file RefPermuteWorkload.hpp.
using RefSubtractionWorkload = RefElementwiseWorkload<std::minus<DataType>, SubtractionQueueDescriptor, StringMapping::RefSubtractionWorkload_Execute> |
Definition at line 46 of file RefElementwiseWorkload.hpp.
Definition at line 33 of file RefTransposeWorkload.hpp.
Definition at line 34 of file RefTransposeWorkload.hpp.
Definition at line 35 of file RefTransposeWorkload.hpp.
Definition at line 37 of file RefTransposeWorkload.hpp.
Definition at line 36 of file RefTransposeWorkload.hpp.
Definition at line 38 of file RefTransposeWorkload.hpp.
using ResolveType = typename ResolveTypeImpl<DT>::Type |
Definition at line 79 of file ResolveType.hpp.
using SplitterDescriptor = ViewsDescriptor |
Definition at line 60 of file DescriptorsFwd.hpp.
using TensorInfos = std::vector<TensorInfo> |
Definition at line 151 of file BackendHelper.cpp.
using Uint8ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::QAsymmU8, armnn::DataType::Float32> |
Definition at line 242 of file Workload.hpp.
Definition at line 201 of file Workload.hpp.
Definition at line 1150 of file Descriptors.hpp.
using WorkloadQueue = std::vector< std::unique_ptr<IWorkload> > |
Definition at line 13 of file ExecutionFrame.hpp.
BackendCapability class.
Enumerator | |
---|---|
NonConstWeights | Constant weights can be accessed through the descriptors; non-const weights, on the other hand, can be accessed through inputs. |
AsyncExecution | Asynchronous Execution. |
Definition at line 267 of file Types.hpp.
Capability class used in the GetCapabilities function, so that only capabilities within the given scope are chosen for calculation.
Enumerator | |
---|---|
PaddingRequired | |
FallbackImportDisabled | |
CapabilityClassMax |
Definition at line 20 of file ITensorHandleFactory.hpp.
The Compute enum is deprecated and is being replaced by BackendId.
Enumerator | |
---|---|
Undefined | |
CpuRef | CPU Execution: Reference C++ kernels. |
CpuAcc | CPU Execution: NEON: ArmCompute. |
GpuAcc | GPU Execution: OpenCL: ArmCompute. |
Definition at line 21 of file BackendId.hpp.
Definition at line 100 of file ITensorHandleFactory.hpp.
Enumerator | |
---|---|
Measurement | |
Event | |
ExecObjectDesc |
Definition at line 20 of file JsonPrinter.hpp.
When adding a new layer, also adapt the LastLayer enum value in the enum class LayerType below.
Definition at line 467 of file Types.hpp.
Enumerator | |
---|---|
Trace | |
Debug | |
Info | |
Warning | |
Error | |
Fatal |
Definition at line 14 of file Utils.hpp.
The padding method modifies the output of pooling layers.
In both supported methods the padded values are ignored (they are not treated as zeroes, which would make a difference when max pooling a tensor with negative values). The difference between IgnoreValue and Exclude is that the former counts the padding fields in the divisor of Average and L2 pooling, while Exclude does not.
Enumerator | |
---|---|
IgnoreValue | The padding fields count, but are ignored. |
Exclude | The padding fields don't count and are ignored. |
Definition at line 174 of file Types.hpp.
The ShapeInferenceMethod modifies how the output shapes are treated.
When ValidateOnly is selected, the output shapes are inferred from the input parameters of the layer and any mismatch is reported. When InferAndValidate is selected, two actions are performed: (1) infer the output shape from the inputs and (2) validate the shapes as in ValidateOnly. This option was added to support tensors whose rank or dimension sizes are not specified explicitly but can be calculated from the inputs.
Enumerator | |
---|---|
ValidateOnly | Validate all output shapes. |
InferAndValidate | Infer missing output shapes and validate all output shapes. |
Definition at line 221 of file Types.hpp.
Enumerator | |
---|---|
None | |
Rapid | |
Normal | |
Exhaustive |
Definition at line 70 of file ClBackendContext.cpp.
float Activation | ( | float | in, |
ActivationFunction | function, | ||
float | a, | ||
float | b | ||
) |
Definition at line 13 of file Activation.cpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by Activation(), LstmImpl(), and TEST_SUITE().
void Activation | ( | Decoder< float > & | in, |
Encoder< float > & | out, | ||
const TensorInfo & | tensorInfo, | ||
ActivationFunction | function, | ||
float | a, | ||
float | b | ||
) |
Definition at line 95 of file Activation.cpp.
References Activation(), Decoder< IType >::Get(), TensorInfo::GetNumElements(), and Encoder< IType >::Set().
void armnn::AllocateOutputData | ( | unsigned int | numOutput, |
unsigned int | numSelected, | ||
const std::vector< float > & | boxCorners, | ||
const std::vector< unsigned int > & | outputIndices, | ||
const std::vector< unsigned int > & | selectedBoxes, | ||
const std::vector< unsigned int > & | selectedClasses, | ||
const std::vector< float > & | selectedScores, | ||
float * | detectionBoxes, | ||
float * | detectionScores, | ||
float * | detectionClasses, | ||
float * | numDetections | ||
) |
Definition at line 102 of file DetectionPostProcess.cpp.
References numeric_cast().
Referenced by DetectionPostProcess().
bool armnn::AllTypesAreEqualImpl | ( | T | ) |
Definition at line 59 of file LayerSupportRules.hpp.
Referenced by AllTypesAreEqualImpl(), and TypesAreEqual::TypesAreEqual().
bool armnn::AllTypesAreEqualImpl | ( | T | t1, |
T | t2, | ||
Rest... | rest | ||
) |
Definition at line 65 of file LayerSupportRules.hpp.
References AllTypesAreEqualImpl().
void armnn::Append | ( | Optimizer::Optimizations & | optimizations, |
T && | optimization | ||
) |
Definition at line 30 of file Optimizer.hpp.
Referenced by Append(), and MakeOptimizations().
void armnn::Append | ( | Optimizer::Optimizations & | optimizations, |
Front && | front, | ||
Others &&... | others | ||
) |
Definition at line 36 of file Optimizer.hpp.
References Append().
OptimizationResult armnn::ApplyBackendOptimizations | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
BackendsMap & | backends, | ||
const ModelOptions & | modelOptions, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 1139 of file Network.cpp.
References ARMNN_ASSERT, ARMNN_SCOPED_PROFILING_EVENT, AssignBackends(), CpuAcc, Layer::GetBackendId(), OptimizedNetworkImpl::GetGraph(), SubgraphView::GetIConnectableLayers(), Layer::GetType(), GpuAcc, Input, OptimizationResult::m_Error, BackendSettings::m_SelectedBackends, MakeOptimizations(), Output, Optimizer::Pass(), ReportWarning(), SubgraphViewSelector::SelectSubgraphs(), Graph::SubstituteSubgraph(), and Undefined.
Referenced by Optimize().
void ArgMinMax | ( | Decoder< float > & | in, |
OUT * | out, | ||
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
ArgMinMaxFunction | function, | ||
int | axis | ||
) |
Definition at line 16 of file ArgMinMax.cpp.
References Decoder< IType >::Get(), TensorInfo::GetNumDimensions(), armnnUtils::GetNumElementsBetween(), TensorInfo::GetShape(), armnnUtils::GetUnsignedAxis(), IgnoreUnused(), Max, Min, and numeric_cast().
Referenced by TEST_SUITE().
template void armnn::ArgMinMax | ( | Decoder< float > & | in, |
int32_t * | out, | ||
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
ArgMinMaxFunction | function, | ||
int | axis | ||
) |
template void armnn::ArgMinMax | ( | Decoder< float > & | in, |
int64_t * | out, | ||
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
ArgMinMaxFunction | function, | ||
int | axis | ||
) |
class armnn::ARMNN_DEPRECATED_MSG_REMOVAL_DATE | ( | "Use ABI stable IStrategy instead." | , |
"22.05" | |||
) |
Function that an activation layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
activationDescriptor | - ActivationDescriptor to configure the activation. |
name | - Optional name for the layer. |
Function that an addition layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that an arg min max layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
argMinMaxDescriptor | - ArgMinMaxDescriptor to configure the activation. |
name | - Optional name for the layer. |
Function that a batch normalization layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
mean | - Pre-calculated mean for each channel. |
variance | - Pre-calculated variance for each channel. |
beta | - Per-channel additive factor. |
gamma | - Per-channel multiplicative factor. |
name | - Optional name for the layer. |
Function that a batch to space ND layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
batchToSpaceNdDescriptor | - Description of the layer. |
name | - Optional name for the layer. |
Function a Comparison layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
comparisonDescriptor | - Description of the layer. |
name | - Optional name for the layer. |
Function that a concat layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
concatDescriptor | - ConcatDescriptor (synonym for OriginsDescriptor) to configure the concatenation process. Number of Views must be equal to the number of inputs, and their order must match - e.g. first view corresponds to the first input, second view to the second input, etc.... |
name | - Optional name for the layer. |
Function a layer with no inputs and a single output, which always corresponds to the passed in constant tensor should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
input | - Tensor to be provided as the only output of the layer. The layer will maintain its own copy of the tensor data, meaning the memory referenced by input can be freed or reused after this function is called. |
name | - Optional name for the layer. |
Function that a 2D convolution layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
convolution2dDescriptor | - Description of the 2D convolution layer. |
name | - Optional name for the layer. |
Function that a 2D convolution layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
convolution2dDescriptor | - Description of the 2D convolution layer. |
weights | - Tensor for the weights data. |
biases | - Optional tensor for the bias data. If specified, must match the output tensor shape. |
name | - Optional name for the layer. |
Function a depth to space layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
depthToSpaceDescriptor | - Parameters for the depth to space operation. |
name | - Optional name for the layer. |
Function that a 2D depthwise convolution layer with biases should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
convolution2dDescriptor | - Description of the 2D depthwise convolution layer. |
name | - Optional name for the layer. |
Function that a 2D depthwise convolution layer with biases should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
convolution2dDescriptor | - Description of the 2D depthwise convolution layer. |
weights | - Tensor for the weights. Expected format: [channelMultiplier, inputChannels, height, width]. |
biases | - Optional tensor for the bias data. If specified, must match the output tensor shape. |
name | - Optional name for the layer. |
Function that a Dequantize layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that a Detection PostProcess layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
descriptor | - Description of the Detection PostProcess layer. |
anchors | - Tensor for the anchors. |
name | - Optional name for the layer. |
Function a division layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function a ElementwiseUnary layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
elementwiseUnaryDescriptor | - Description of the layer. |
name | - Optional name for the layer. |
Function a fill layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
fillDescriptor | - Description of the layer |
name | - Optional name for the layer. |
Function a floor layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that a fully connected layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
fullyConnectedDescriptor | - Description of the fully connected layer. |
name | - Optional name for the layer. |
Function that a fully connected layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
fullyConnectedDescriptor | - Description of the fully connected layer. |
weights | - Tensor for the weights data. |
biases | - Optional tensor for the bias data. |
name | - Optional name for the layer. |
Function a Gather layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
gatherDescriptor | - Parameters for the gather operation. |
name | - Optional name for the layer. |
Function that an InputLayer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
id | - User generated id to uniquely identify a particular input. The same id needs to be specified when passing the inputs to the IRuntime::EnqueueWorkload() function. |
name | - Optional name for the layer. |
Function that an instance normalization layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
desc | - Parameters for the instance normalization operation. |
name | - Optional name for the layer. |
Function that an L2 normalization layer should call back to when its Accept(ILayerVisitor&) function is invoked. Normalization is performed along dimension 1, but requires a 4d input.
layer | - pointer to the layer which is calling back to this visit function. |
desc | - Parameters for the L2 normalization operation. |
name | - Optional name for the layer. |
Function that a log softmax layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
logSoftmaxDescriptor | - LogSoftmaxDescriptor to configure the log softmax. |
name | - Optional name for the layer. |
Function that a logical binary layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
logicalBinaryDescriptor | - LogicalBinaryDescriptor to configure the logical unary layer. |
name | - Optional name for the layer. |
Function an Lstm layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
descriptor | - Parameters controlling the operation of the Lstm operation. |
params | - The weights and biases for the LSTM cell. |
name | - Optional name for the layer. |
Function a Maximum layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function a Mean layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
meanDescriptor | - Parameters for the mean operation. |
name | - Optional name for the layer. |
Function that a merge layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function a Minimum layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that a multiplication layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that a normalization layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
normalizationDescriptor | - NormalizationDescriptor to configure the normalization. |
name | - Optional name for the layer. |
Function an output layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
id | - User generated id to uniquely identify a particular output. The same id needs to be specified when passing the outputs to the IRuntime::EnqueueWorkload() function. |
name | - Optional name for the layer. |
Function a pad layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
paddings | - n by 2 tensor, where n is the rank of the input tensor, such that paddings[i,0] indicates the amount of padding to add in front of dimension i, and paddings[i,1] indicates the amount of padding to add after the end of dimension i |
name | - Optional name for the layer. |
Function that a permute layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
permuteDescriptor | - PermuteDescriptor to configure the permute. |
name | - Optional name for the layer. |
Function that a pooling layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
pooling2dDescriptor | - Pooling2dDescriptor to configure the pooling. |
name | - Optional name for the layer. |
Function that a pooling layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
pooling3dDescriptor | - Pooling3dDescriptor to configure the pooling. |
name | - Optional name for the layer. |
Function that a PReLU activation layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function a quantize layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function a QLstm layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
descriptor | - Parameters controlling the operation of the QLstm operation. |
params | - The weights and biases for the layer |
name | - Optional name for the layer. |
Function a QuantizedLstm layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
params | - The weights and biases for the Quantized LSTM cell |
name | - Optional name for the layer. |
Function a rank layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that a reduce layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
ReduceDescriptor | - Parameters for the reduce max operation. |
name | - Optional name for the layer. |
Function a reshape layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
reshapeDescriptor | - Parameters for the reshape operation. |
name | - Optional name for the layer. |
Function that a resize layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
resizeDescriptor | - Parameters for the resize operation. |
name | - Optional name for the layer. |
Function that a slice layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
sliceDescriptor | - SliceDescriptor to configure the slice operation. |
name | - Optional name for the layer. |
Function that a softmax layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
softmaxDescriptor | - SoftmaxDescriptor to configure the softmax. |
name | - Optional name for the layer. |
Function a space to batch layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
spaceToBatchNdDescriptor | - Parameters for the space to batch operation. |
name | - Optional name for the layer. |
Function a space to depth layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
spaceToDepthDescriptor | - Parameters for the space to depth operation. |
name | - Optional name for the layer. |
Function that a splitter layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
splitterDescriptor | - ViewsDescriptor to configure the splitting process. Number of Views must be equal to the number of outputs, and their order must match - e.g. first view corresponds to the first output, second view to the second output, etc.... |
name | - Optional name for the layer. |
Function a stack layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
stackDescriptor | - Parameters for the stack operation. |
name | - Optional name for the layer. |
Function a StandInLayer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
standInDescriptor | - Parameters for the stand-in layer. |
name | - Optional name for the layer. |
Function a strided slice layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
stridedSliceDescriptor | - Parameters for the strided slice operation. |
name | - Optional name for the layer. |
Function a subtraction layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function a switch layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
name | - Optional name for the layer. |
Function that a 2D transpose convolution layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
descriptor | - Description of the 2D transpose convolution layer. |
weights | - Tensor for the weights data. |
biases | - Optional tensor for the bias data. |
name | - Optional name for the layer. |
Function that a transpose layer should call back to when its Accept(ILayerVisitor&) function is invoked.
layer | - pointer to the layer which is calling back to this visit function. |
transposeDescriptor | - TransposeDescriptor to configure the transpose. |
name | - Optional name for the layer. |
Definition at line 16 of file ILayerVisitor.hpp.
References ARMNN_DEPRECATED_MSG, and ARMNN_DEPRECATED_MSG_REMOVAL_DATE().
ARMNN_NO_DEPRECATE_WARN_BEGIN struct armnn::ARMNN_DEPRECATED_MSG_REMOVAL_DATE | ( | "ResizeBilinearQueueDescriptor is deprecated use ResizeQueueDescriptor instead" | , |
"22.08" | |||
) |
Definition at line 358 of file WorkloadData.hpp.
References ARMNN_NO_DEPRECATE_WARN_END.
Referenced by IWorkloadFactory::AfterWorkloadsCreated(), ARMNN_DEPRECATED_MSG_REMOVAL_DATE(), RefWorkloadFactory::CreateSubTensorHandle(), MockWorkloadFactory::CreateTensorHandle(), IBackendInternal::GetCapabilities(), NetworkImpl::GetGraph(), OptimizationViews::GetUntouchedSubgraphs(), main(), FullyConnectedDescriptor::operator==(), NeonWorkloadFactory::SupportsSubTensors(), and ClWorkloadFactory::SupportsSubTensors().
void armnn::AssertNumberOfInputSlots | ( | Layer & | layer | ) |
Definition at line 28 of file Layer.cpp.
References ARMNN_ASSERT, Convolution2d, DepthwiseConvolution2d, FullyConnected, Layer::GetNumInputSlots(), and Layer::GetType().
Referenced by InputSlot::Insert().
OptimizationResult AssignBackends | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
Graph::Iterator & | firstLayer, | ||
Graph::Iterator & | lastLayer, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 1018 of file Network.cpp.
References ARMNN_SCOPED_PROFILING_EVENT, AssignBackendsIConnectable(), BackendSettings::GetAvailablePreferredBackends(), Input, OptimizationResult::m_Error, ReportError(), and Undefined.
Referenced by ApplyBackendOptimizations(), AssignBackends(), Optimize(), and TEST_SUITE().
OptimizationResult AssignBackends | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
SubgraphView::IConnectableLayerIterator & | firstLayer, | ||
SubgraphView::IConnectableLayerIterator & | lastLayer, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 1062 of file Network.cpp.
References ARMNN_SCOPED_PROFILING_EVENT, AssignBackendsIConnectable(), BackendSettings::GetAvailablePreferredBackends(), Input, OptimizationResult::m_Error, ReportError(), and Undefined.
OptimizationResult armnn::AssignBackends | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
SubgraphView & | subgraph, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 1106 of file Network.cpp.
References AssignBackends(), SubgraphView::beginIConnectable(), and SubgraphView::endIConnectable().
void armnn::AssignBackendsIConnectable | ( | OptimizedNetworkImpl * | optNetObjPtr, |
IConnectableLayer * | it, | ||
Optional< std::vector< std::string > &> | errMessages, | ||
OptimizationResult & | result, | ||
BackendSettings & | backendSettings, | ||
std::vector< BackendId > & | availablePreferredBackends | ||
) |
Definition at line 905 of file Network.cpp.
References ARMNN_ASSERT_MSG, AttemptBackendAssignment(), CheckScaleSetOnQuantizedType(), Constant, CpuRef, Float32, OptimizedNetworkImpl::GetGraph(), Input, BackendSettings::IsBackendSupported(), BackendSettings::IsCpuRefUsed(), OptimizationResult::IsError(), OptimizationResult::IsOk(), OptimizationResult::IsWarningOnly(), OptimizationResult::m_Error, BackendSettings::m_SelectedBackends, MemCopy, Permute, and ReturnWithError().
Referenced by AssignBackends().
void armnn::AssignSplitId | ( | LayerSelectionInfo::LayerInfoContainer & | layerInfos, |
LayerSelectionInfo & | layerInfo | ||
) |
Definition at line 309 of file SubgraphViewSelector.cpp.
References ForEachLayerInput().
Referenced by SubgraphViewSelector::SelectSubgraphs().
OptimizationResult armnn::AttemptBackendAssignment | ( | BackendSettings & | backendSettings, |
Graph & | graph, | ||
Layer * | layer, | ||
BackendId | backend, | ||
DataType | dataTypeIn, | ||
DataType | dataTypeOut, | ||
const std::vector< BackendId > & | availablePreferredBackends, | ||
std::string & | reasonIfUnsupported, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 654 of file Network.cpp.
References BFloat16, Constant, ConvertBf16ToFp32, FloatingPointConverter::ConvertFloat16To32(), ConvertFp16ToFp32, ConvertFp32ToBf16, ConvertFp32ToFp16, Convolution2d, Float16, Float32, FullyConnected, BackendId::Get(), Layer::GetBackendId(), GetDataTypeName(), Layer::GetInputSlots(), GetLayerTypeAsCString(), Layer::GetOutputSlot(), Layer::GetType(), info, InsertConvertBf16ToFp32LayersBefore(), InsertConvertFp16ToFp32LayersBefore(), InsertConvertFp32ToBf16LayersAfter(), InsertConvertFp32ToFp16LayersAfter(), IWorkloadFactory::IsLayerSupported(), ConstantLayer::m_LayerOutput, ReportWarning(), ReturnWithError(), Layer::SetBackendId(), and OutputSlot::SetTensorInfo().
Referenced by AssignBackendsIConnectable().
BackendRegistry & BackendRegistryInstance | ( | ) |
Definition at line 15 of file BackendRegistry.cpp.
Referenced by InferenceModel< IParser, TDataType >::AddCommandLineOptions(), CreateBackendObject(), CreateSupportedBackends(), DynamicBackendUtils::DeregisterDynamicBackends(), GetCapability(), GetILayerSupportByBackendId(), GetNumberOfCacheFiles(), GetSuitableBackendRegistered(), HasCapability(), ArmNNProfilingServiceInitialiser::InitialiseProfilingService(), IsCapabilitySupported(), main(), LoadedNetwork::MakeLoadedNetwork(), MockBackendInitialiser::MockBackendInitialiser(), MockImportBackendInitialiser::MockImportBackendInitialiser(), ProgramOptions::ProgramOptions(), LoadedNetwork::RegisterDebugCallback(), DynamicBackendUtils::RegisterDynamicBackends(), RuntimeEmptyTestImpl(), RuntimeImpl::RuntimeImpl(), RuntimeInvalidOverridePathTestImpl(), TEST_SUITE(), TestBackendRegistry::TestBackendRegistry(), MockBackendInitialiser::~MockBackendInitialiser(), MockImportBackendInitialiser::~MockImportBackendInitialiser(), RuntimeImpl::~RuntimeImpl(), and TestBackendRegistry::~TestBackendRegistry().
void BatchNormImpl | ( | const BatchNormalizationQueueDescriptor & | data, |
Decoder< float > & | meanDecoder, | ||
Decoder< float > & | varianceDecoder, | ||
Decoder< float > & | betaDecoder, | ||
Decoder< float > & | gammaDecoder, | ||
Decoder< float > & | inputDecoder, | ||
Encoder< float > & | outputEncoder | ||
) |
Definition at line 18 of file BatchNormImpl.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorInfo::GetShape(), GetTensorInfo(), DataLayoutIndexed::GetWidthIndex(), BatchNormalizationDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_Eps, QueueDescriptor::m_Inputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
Referenced by RefBatchNormalizationWorkload::ExecuteAsync().
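For reference, the per-element computation the decoders above feed is the standard inference-time batch normalization, with BatchNormalizationDescriptor::m_Eps as the stabilizing constant:

```latex
\mathrm{out} = \gamma \cdot \frac{\mathrm{in} - \mu}{\sqrt{\sigma^{2} + \epsilon}} + \beta
```

Here \(\mu\) and \(\sigma^{2}\) are the per-channel values read through meanDecoder and varianceDecoder, and \(\beta\), \(\gamma\) are the learned offset and scale read through betaDecoder and gammaDecoder.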
void BatchToSpaceNd | ( | const DataLayoutIndexed & | dataLayout, |
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
const std::vector< unsigned int > & | blockShape, | ||
const std::vector< std::pair< unsigned int, unsigned int >> & | cropsData, | ||
Decoder< float > & | inputDecoder, | ||
Encoder< float > & | outputEncoder | ||
) |
Definition at line 35 of file BatchToSpaceNd.cpp.
References ARMNN_ASSERT_MSG, BatchToSpaceNd(), Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), TensorShape::GetNumDimensions(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), Offset(), and Encoder< IType >::Set().
Referenced by BatchToSpaceNd(), BatchToSpaceNdLayer::BatchToSpaceNdLayer(), and TEST_SUITE().
int armnn::CalcLevel | ( | const Event * | eventPtr | ) |
Definition at line 246 of file Profiling.cpp.
References Event::GetParentEvent().
Referenced by ProfilerImpl::AnalyzeEventsAndWriteResults(), and ProfilerImpl::PopulateParent().
EdgeStrategy armnn::CalculateEdgeStrategy | ( | BackendsMap & | backends, |
ITensorHandleFactory::FactoryId | srcFactoryId, | ||
const Layer & | layer, | ||
const Layer & | connectedLayer, | ||
TensorHandleFactoryRegistry & | registry, | ||
bool | importEnabled | ||
) |
Definition at line 1512 of file Network.cpp.
References ARMNN_ASSERT_MSG, CopyToTarget, DirectCompatibility, ExportToTarget, FallbackImportDisabled, Layer::GetBackendId(), ITensorHandleFactory::GetCapabilities(), ITensorHandleFactory::GetExportFlags(), TensorHandleFactoryRegistry::GetFactory(), ITensorHandleFactory::GetImportFlags(), Layer::GetType(), ITensorHandleFactory::LegacyFactoryId, Output, PaddingRequired, ITensorHandleFactory::SupportsMapUnmap(), and Undefined.
Referenced by SelectTensorHandleStrategy().
std::map< std::string, unsigned int > CalculateGatherNdKeyIndices | ( | TensorInfo | inputInfo0, |
TensorInfo | inputInfo1 | ||
) |
Calculates the key index values needed for GatherNd: N, ND, K, W, C (N is always 1).
inputInfo0 | - TensorInfo of the corresponding input tensor: params |
inputInfo1 | - TensorInfo of the corresponding input tensor: indices |
Definition at line 300 of file WorkloadUtils.cpp.
References TensorInfo::GetNumDimensions(), and TensorInfo::GetShape().
Referenced by ClGatherNdWorkload::ClGatherNdWorkload(), ClGatherNdWorkloadValidate(), RefGatherNdWorkload::ExecuteAsync(), GatherTensorHandlePairs(), NeonGatherNdWorkload::NeonGatherNdWorkload(), and NeonGatherNdWorkloadValidate().
ITensorHandleFactory::FactoryId armnn::CalculateSlotOption | ( | BackendsMap & | backends, |
OutputSlot & | outputSlot, | ||
TensorHandleFactoryRegistry & | registry, | ||
bool | importEnabled | ||
) |
Definition at line 1362 of file Network.cpp.
References ARMNN_ASSERT_MSG, FallbackImportDisabled, Layer::GetBackendId(), ITensorHandleFactory::GetCapabilities(), OutputSlot::GetConnections(), ITensorHandleFactory::GetExportFlags(), TensorHandleFactoryRegistry::GetFactory(), IBackendInternal::GetHandleFactoryPreferences(), Layer::GetInputSlots(), OutputSlot::GetOwningLayer(), Layer::GetType(), ITensorHandleFactory::LegacyFactoryId, Output, RequiresCopy(), and ITensorHandleFactory::SupportsMapUnmap().
Referenced by SelectTensorHandleStrategy().
ITensorHandleFactory::FactoryId armnn::CalculateSlotOptionForInput | ( | BackendsMap & | backends, |
OutputSlot & | slot, | ||
TensorHandleFactoryRegistry & | registry, | ||
bool | importEnabled | ||
) |
Definition at line 1267 of file Network.cpp.
References ARMNN_ASSERT, ARMNN_ASSERT_MSG, Layer::GetBackendId(), OutputSlot::GetConnections(), TensorHandleFactoryRegistry::GetFactory(), ITensorHandleFactory::GetImportFlags(), OutputSlot::GetOwningLayer(), Layer::GetType(), Input, ITensorHandleFactory::LegacyFactoryId, and ITensorHandleFactory::SupportsMapUnmap().
Referenced by SelectTensorHandleStrategy().
ITensorHandleFactory::FactoryId armnn::CalculateSlotOptionForOutput | ( | BackendsMap & | backends, |
OutputSlot & | slot, | ||
TensorHandleFactoryRegistry & | registry | ||
) |
Definition at line 1352 of file Network.cpp.
References ITensorHandleFactory::DeferredFactoryId, and IgnoreUnused().
Referenced by SelectTensorHandleStrategy().
std::vector<IConnectableLayer*> armnn::ChainReduceLayers | ( | OptimizationViews & | optimizationViews, |
LayerType * | baseLayer, | ||
ReduceDescriptor & | desc | ||
) |
Definition at line 298 of file ArmComputeSubgraphUtils.hpp.
References ARMNN_ASSERT, ComputeReductionTensorShape(), OptimizationViews::GetINetwork(), Layer::GetInputSlot(), Layer::GetOutputSlot(), ReduceDescriptor::m_KeepDims, ReduceDescriptor::m_vAxis, and OutputSlot::SetTensorInfo().
inline
Definition at line 41 of file MemorySources.hpp.
Referenced by LoadedNetwork::FreeWorkingMemory(), LoadedNetwork::ImportInputs(), and LoadedNetwork::ImportOutputs().
void armnn::CheckLayerBindingId | ( | LayerBindingId | visitorId, |
LayerBindingId | id | ||
) |
Definition at line 13 of file TestInputOutputLayerVisitor.hpp.
Referenced by TestInputLayerVisitor::ExecuteStrategy(), and TestOutputLayerVisitor::ExecuteStrategy().
bool armnn::CheckScaleSetOnQuantizedType | ( | Layer * | layer, |
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 595 of file Network.cpp.
References ARMNN_LOG, TensorInfo::GetDataType(), GetLayerTypeAsCString(), Layer::GetNameStr(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), OutputSlot::GetTensorInfo(), Layer::GetType(), info, QAsymmU8, ReportError(), TensorInfo::SetQuantizationOffset(), TensorInfo::SetQuantizationScale(), OutputSlot::SetTensorInfo(), Softmax, and warning.
Referenced by AssignBackendsIConnectable().
bool armnn::CheckSupportRule | ( | F | rule, |
Optional< std::string &> | reasonIfUnsupported, | ||
const char * | reason | ||
) |
Definition at line 38 of file LayerSupportRules.hpp.
References OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by RefLayerSupport::IsActivationSupported(), RefLayerSupport::IsAdditionSupported(), RefLayerSupport::IsArgMinMaxSupported(), RefLayerSupport::IsBatchNormalizationSupported(), RefLayerSupport::IsBatchToSpaceNdSupported(), RefLayerSupport::IsCastSupported(), RefLayerSupport::IsChannelShuffleSupported(), RefLayerSupport::IsComparisonSupported(), RefLayerSupport::IsConcatSupported(), RefLayerSupport::IsConstantSupported(), RefLayerSupport::IsConvertBf16ToFp32Supported(), RefLayerSupport::IsConvertFp32ToBf16Supported(), RefLayerSupport::IsConvolution2dSupported(), RefLayerSupport::IsConvolution3dSupported(), RefLayerSupport::IsDebugSupported(), RefLayerSupport::IsDepthToSpaceSupported(), RefLayerSupport::IsDepthwiseConvolutionSupported(), RefLayerSupport::IsDequantizeSupported(), RefLayerSupport::IsDetectionPostProcessSupported(), RefLayerSupport::IsDivisionSupported(), RefLayerSupport::IsElementwiseUnarySupported(), RefLayerSupport::IsFakeQuantizationSupported(), RefLayerSupport::IsFillSupported(), RefLayerSupport::IsFloorSupported(), RefLayerSupport::IsFullyConnectedSupported(), RefLayerSupport::IsGatherNdSupported(), RefLayerSupport::IsGatherSupported(), RefLayerSupport::IsInstanceNormalizationSupported(), RefLayerSupport::IsL2NormalizationSupported(), RefLayerSupport::IsLogicalBinarySupported(), RefLayerSupport::IsLogSoftmaxSupported(), RefLayerSupport::IsLstmSupported(), RefLayerSupport::IsMaximumSupported(), RefLayerSupport::IsMeanSupported(), RefLayerSupport::IsMemCopySupported(), RefLayerSupport::IsMinimumSupported(), RefLayerSupport::IsMultiplicationSupported(), RefLayerSupport::IsNormalizationSupported(), RefLayerSupport::IsPadSupported(), RefLayerSupport::IsPermuteSupported(), RefLayerSupport::IsPooling2dSupported(), RefLayerSupport::IsPooling3dSupported(), RefLayerSupport::IsPreluSupported(), RefLayerSupport::IsQuantizeSupported(), RefLayerSupport::IsRankSupported(), RefLayerSupport::IsReduceSupported(), RefLayerSupport::IsReshapeSupported(), 
RefLayerSupport::IsResizeSupported(), RefLayerSupport::IsShapeSupported(), RefLayerSupport::IsSliceSupported(), RefLayerSupport::IsSoftmaxSupported(), RefLayerSupport::IsSpaceToBatchNdSupported(), RefLayerSupport::IsSpaceToDepthSupported(), RefLayerSupport::IsSplitterSupported(), RefLayerSupport::IsStackSupported(), RefLayerSupport::IsStridedSliceSupported(), RefLayerSupport::IsSubtractionSupported(), RefLayerSupport::IsTransposeConvolution2dSupported(), RefLayerSupport::IsTransposeSupported(), and RefLayerSupport::IsUnidirectionalSequenceLstmSupported().
arm_compute::Status ClAbsWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 19 of file ClAbsWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClActivationWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ActivationDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClActivationWorkload.cpp.
Referenced by ClLayerSupport::IsActivationSupported().
arm_compute::Status ClAdditionValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 45 of file ClAdditionWorkload.cpp.
Referenced by ClLayerSupport::IsAdditionSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClArgMinMaxWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ArgMinMaxDescriptor & | descriptor | ||
) |
Definition at line 31 of file ClArgMinMaxWorkload.cpp.
Referenced by ClLayerSupport::IsArgMinMaxSupported().
constexpr const char* armnn::ClBackendId | ( | ) |
Definition at line 10 of file ClBackendId.hpp.
Referenced by ClBackend::GetIdStatic().
arm_compute::Status ClBatchNormalizationValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TensorInfo & | mean, | ||
const TensorInfo & | var, | ||
const TensorInfo & | beta, | ||
const TensorInfo & | gamma, | ||
const BatchNormalizationDescriptor & | descriptor, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 19 of file ClBatchNormalizationFloatWorkload.cpp.
Referenced by ClLayerSupport::IsBatchNormalizationSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClBatchToSpaceNdWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const BatchToSpaceNdDescriptor & | descriptor | ||
) |
Definition at line 57 of file ClBatchToSpaceNdWorkload.cpp.
References BatchToSpaceNdDescriptor::m_DataLayout.
Referenced by ClLayerSupport::IsBatchToSpaceNdSupported().
arm_compute::Status ClCastValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 20 of file ClCastWorkload.cpp.
Referenced by ClLayerSupport::IsCastSupported().
arm_compute::Status ClChannelShuffleValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ChannelShuffleDescriptor & | descriptor | ||
) |
Definition at line 20 of file ClChannelShuffleWorkload.cpp.
Referenced by ClLayerSupport::IsChannelShuffleSupported().
arm_compute::Status ClComparisonWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ComparisonDescriptor & | descriptor | ||
) |
Definition at line 24 of file ClComparisonWorkload.cpp.
Referenced by ClLayerSupport::IsComparisonSupported().
arm_compute::Status ClConcatWorkloadValidate | ( | const std::vector< const TensorInfo *> & | inputs, |
const TensorInfo & | output, | ||
const OriginsDescriptor & | descriptor | ||
) |
Definition at line 27 of file ClConcatWorkload.cpp.
Referenced by ClLayerSupport::IsConcatSupported().
arm_compute::Status ClConstantWorkloadValidate | ( | const TensorInfo & | output | ) |
Definition at line 18 of file ClConstantWorkload.cpp.
Referenced by ClLayerSupport::IsConstantSupported().
inline
Definition at line 152 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline
Definition at line 167 of file ClContextSchema_generated.h.
inline
Definition at line 148 of file ClContextSchema_generated.h.
Referenced by ClContextBufferHasIdentifier(), FinishClContextBuffer(), FinishSizePrefixedClContextBuffer(), VerifyClContextBuffer(), and VerifySizePrefixedClContextBuffer().
arm_compute::Status ClConvertFp16ToFp32WorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 44 of file ClConvertFp16ToFp32Workload.cpp.
References Float16, Float32, and TensorInfo::GetDataType().
Referenced by ClLayerSupport::IsConvertFp16ToFp32Supported(), and ClConvertFp16ToFp32Workload::SupportsTensorHandleReplacement().
arm_compute::Status ClConvertFp32ToFp16WorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 44 of file ClConvertFp32ToFp16Workload.cpp.
References Float16, Float32, and TensorInfo::GetDataType().
Referenced by ClLayerSupport::IsConvertFp32ToFp16Supported(), and ClConvertFp32ToFp16Workload::SupportsTensorHandleReplacement().
arm_compute::Status ClConvolution2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Convolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases, | ||
bool | isFastMathEnabled, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 23 of file ClConvolution2dWorkload.cpp.
References TensorInfo::IsConstant().
Referenced by ClLayerSupport::IsConvolution2dSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClConvolution3dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Convolution3dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases, | ||
bool | isFastMathEnabled, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 23 of file ClConvolution3dWorkload.cpp.
Referenced by ClLayerSupport::IsConvolution3dSupported().
arm_compute::Status ClDepthToSpaceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const DepthToSpaceDescriptor & | descriptor | ||
) |
Definition at line 22 of file ClDepthToSpaceWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by ClLayerSupport::IsDepthToSpaceSupported().
arm_compute::Status ClDepthwiseConvolutionWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const DepthwiseConvolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 26 of file ClDepthwiseConvolutionWorkload.cpp.
References TensorInfo::IsConstant().
Referenced by ClLayerSupport::IsDepthwiseConvolutionSupported(), ClLayerSupport::IsDilatedDepthwiseConvolutionSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClDequantizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 22 of file ClDequantizeWorkload.cpp.
Referenced by ClLayerSupport::IsDequantizeSupported().
arm_compute::Status ClDivisionWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 18 of file ClDivisionWorkload.cpp.
Referenced by ClLayerSupport::IsDivisionSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClExpWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClExpWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClFloorWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 14 of file ClFloorFloatWorkload.cpp.
Referenced by ClLayerSupport::IsFloorSupported().
arm_compute::Status ClFullyConnectedWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases, | ||
const FullyConnectedDescriptor & | descriptor, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 19 of file ClFullyConnectedWorkload.cpp.
References TensorInfo::IsConstant().
Referenced by ClLayerSupport::IsFullyConnectedSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClGatherNdWorkloadValidate | ( | const TensorInfo & | paramsInfo, |
const TensorInfo & | indicesInfo, | ||
const TensorInfo & | outputInfo | ||
) |
Validate Mul
Validate ReduceSum
Validate Gather
Validate Reshape
Return OK if all the layers are valid
Definition at line 16 of file ClGatherNdWorkload.cpp.
References CalculateGatherNdKeyIndices(), and TensorInfo::SetShape().
Referenced by ClLayerSupport::IsGatherNdSupported().
arm_compute::Status ClGatherWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | indices, | ||
const TensorInfo & | output, | ||
const GatherDescriptor & | descriptor | ||
) |
Definition at line 15 of file ClGatherWorkload.cpp.
Referenced by ClLayerSupport::IsGatherSupported().
constexpr const char* armnn::ClImportTensorHandleFactoryId | ( | ) |
Definition at line 15 of file ClImportTensorHandleFactory.hpp.
Referenced by ClImportTensorHandleFactory::GetIdStatic().
arm_compute::Status ClInstanceNormalizationWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const InstanceNormalizationDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClInstanceNormalizationWorkload.cpp.
Referenced by ClLayerSupport::IsInstanceNormalizationSupported().
arm_compute::Status ClL2NormalizationWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const L2NormalizationDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClL2NormalizationFloatWorkload.cpp.
Referenced by ClLayerSupport::IsL2NormalizationSupported().
arm_compute::Status ClLogicalAndWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 20 of file ClLogicalAndWorkload.cpp.
Referenced by ClLayerSupport::IsLogicalBinarySupported().
arm_compute::Status ClLogicalNotWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 20 of file ClLogicalNotWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClLogicalOrWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 20 of file ClLogicalOrWorkload.cpp.
Referenced by ClLayerSupport::IsLogicalBinarySupported().
arm_compute::Status ClLogSoftmaxWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const LogSoftmaxDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClLogSoftmaxWorkload.cpp.
Referenced by ClLayerSupport::IsLogSoftmaxSupported().
arm_compute::Status ClLogWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClLogWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClLstmFloatWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | scratchBuffer, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const LstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 244 of file ClLstmFloatWorkload.cpp.
Referenced by ClLayerSupport::IsLstmSupported().
arm_compute::Status ClMaximumWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 24 of file ClMaximumWorkload.cpp.
Referenced by ClLayerSupport::IsMaximumSupported().
arm_compute::Status ClMeanValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const MeanDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClMeanWorkload.cpp.
Referenced by ClLayerSupport::IsMeanSupported().
arm_compute::Status ClMinimumWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 24 of file ClMinimumWorkload.cpp.
Referenced by ClLayerSupport::IsMinimumSupported().
arm_compute::Status ClMultiplicationWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 18 of file ClMultiplicationWorkload.cpp.
Referenced by ClLayerSupport::IsMultiplicationSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClNegWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClNegWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClNormalizationWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const NormalizationDescriptor & | descriptor | ||
) |
Definition at line 19 of file ClNormalizationFloatWorkload.cpp.
Referenced by ClLayerSupport::IsNormalizationSupported().
arm_compute::Status ClPadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const PadDescriptor & | descriptor | ||
) |
Definition at line 62 of file ClPadWorkload.cpp.
Referenced by ClLayerSupport::IsPadSupported().
arm_compute::Status ClPermuteWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const PermuteDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClPermuteWorkload.cpp.
Referenced by ClLayerSupport::IsPermuteSupported().
arm_compute::Status ClPooling2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Pooling2dDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClPooling2dWorkload.cpp.
Referenced by ClLayerSupport::IsPooling2dSupported().
arm_compute::Status ClPooling3dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Pooling3dDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClPooling3dWorkload.cpp.
Referenced by ClLayerSupport::IsPooling3dSupported().
arm_compute::Status ClPreluWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | alpha, | ||
const TensorInfo & | output | ||
) |
Definition at line 16 of file ClPreluWorkload.cpp.
Referenced by ClLayerSupport::IsPreluSupported().
arm_compute::Status ClQLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | output, | ||
const QLstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 247 of file ClQLstmWorkload.cpp.
Referenced by ClLayerSupport::IsQLstmSupported().
arm_compute::Status ClQuantizedLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | previousCellStateIn, | ||
const TensorInfo & | previousOutputIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const QuantizedLstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 18 of file ClQuantizedLstmWorkload.cpp.
Referenced by ClLayerSupport::IsQuantizedLstmSupported().
arm_compute::Status ClQuantizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 22 of file ClQuantizeWorkload.cpp.
Referenced by ClLayerSupport::IsQuantizeSupported().
arm_compute::Status ClReduceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ReduceDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClReduceWorkload.cpp.
References ReduceDescriptor::m_vAxis.
Referenced by ClLayerSupport::IsReduceSupported().
arm_compute::Status ClReshapeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 15 of file ClReshapeWorkload.cpp.
Referenced by ClLayerSupport::IsReshapeSupported().
arm_compute::Status ClResizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ResizeDescriptor & | descriptor | ||
) |
Definition at line 22 of file ClResizeWorkload.cpp.
Referenced by ClLayerSupport::IsResizeSupported().
arm_compute::Status ClRsqrtWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClRsqrtWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClSinWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClSinWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClSliceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SliceDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClSliceWorkload.cpp.
Referenced by ClLayerSupport::IsSliceSupported().
arm_compute::Status ClSoftmaxWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SoftmaxDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClSoftmaxWorkload.cpp.
Referenced by ClLayerSupport::IsSoftmaxSupported().
arm_compute::Status ClSpaceToBatchNdWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SpaceToBatchNdDescriptor & | descriptor | ||
) |
Definition at line 23 of file ClSpaceToBatchNdWorkload.cpp.
Referenced by ClLayerSupport::IsSpaceToBatchNdSupported().
arm_compute::Status ClSpaceToDepthWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SpaceToDepthDescriptor & | descriptor | ||
) |
Definition at line 54 of file ClSpaceToDepthWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by ClLayerSupport::IsSpaceToDepthSupported().
arm_compute::Status ClSplitterWorkloadValidate | ( | const TensorInfo & | input, |
const std::vector< std::reference_wrapper< TensorInfo >> & | outputs, | ||
unsigned int | splitAxis | ||
) |
Definition at line 31 of file ClSplitterWorkload.cpp.
Referenced by ClLayerSupport::IsSplitterSupported().
arm_compute::Status ClSqrtWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 19 of file ClSqrtWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClStackWorkloadValidate | ( | const std::vector< const TensorInfo *> & | inputs, |
const TensorInfo & | output, | ||
const StackDescriptor & | descriptor | ||
) |
Definition at line 29 of file ClStackWorkload.cpp.
Referenced by ClLayerSupport::IsStackSupported().
arm_compute::Status ClStridedSliceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const StridedSliceDescriptor & | descriptor | ||
) |
Definition at line 27 of file ClStridedSliceWorkload.cpp.
Referenced by ClLayerSupport::IsStridedSliceSupported().
arm_compute::Status ClSubtractionValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 46 of file ClSubtractionWorkload.cpp.
Referenced by ClLayerSupport::IsSubtractionSupported(), and ClBackend::OptimizeSubgraphView().
constexpr const char* armnn::ClTensorHandleFactoryId | ( | ) |
Definition at line 15 of file ClTensorHandleFactory.hpp.
Referenced by ClTensorHandleFactory::GetIdStatic().
arm_compute::Status ClTransposeConvolution2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TransposeConvolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases | ||
) |
Definition at line 26 of file ClTransposeConvolution2dWorkload.cpp.
Referenced by ClLayerSupport::IsTransposeConvolution2dSupported().
arm_compute::Status ClTransposeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TransposeDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClTransposeWorkload.cpp.
Referenced by ClLayerSupport::IsTransposeSupported().
arm_compute::Status ClUnidirectionalSequenceLstmFloatWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | output, | ||
const Optional< TensorInfo > & | hiddenStateOutput, | ||
const Optional< TensorInfo > & | cellStateOutput, | ||
const UnidirectionalSequenceLstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 508 of file ClUnidirectionalSequenceLstmFloatWorkload.cpp.
References TensorInfo::GetShape(), IgnoreUnused(), and LstmDescriptor::m_TimeMajor.
Referenced by ClLayerSupport::IsUnidirectionalSequenceLstmSupported().
MemorySourceFlags armnn::Combine | ( | Arg | sourceA, |
Arg | sourceB | ||
) |
MemorySourceFlags armnn::Combine | ( | Arg | source, |
Args... | rest | ||
) |
inline
Function to convert an ArmNN axis (counted left to right) to an ACL axis (counted right to left), ranging over [-rank, rank).
Definition at line 264 of file ArmComputeUtils.hpp.
References ARMNN_ASSERT, and TensorInfo::GetNumDimensions().
Referenced by ClGatherWorkload::ClGatherWorkload(), ClLogSoftmaxWorkload::ClLogSoftmaxWorkload(), ClSoftmaxWorkload::ClSoftmaxWorkload(), NeonGatherWorkload::NeonGatherWorkload(), NeonLogSoftmaxWorkload::NeonLogSoftmaxWorkload(), and NeonSoftmaxWorkload::NeonSoftmaxWorkload().
inline
Utility function used to set up an arm_compute::Conv3dInfo object from a Convolution3dDescriptor.
Definition at line 293 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo(), Convolution3dDescriptor::m_DilationX, Convolution3dDescriptor::m_DilationY, Convolution3dDescriptor::m_DilationZ, Convolution3dDescriptor::m_PadBack, Convolution3dDescriptor::m_PadBottom, Convolution3dDescriptor::m_PadFront, Convolution3dDescriptor::m_PadLeft, Convolution3dDescriptor::m_PadRight, Convolution3dDescriptor::m_PadTop, Convolution3dDescriptor::m_StrideX, Convolution3dDescriptor::m_StrideY, and Convolution3dDescriptor::m_StrideZ.
inline
Definition at line 310 of file ArmComputeUtils.hpp.
References ConvertAdditionalInfoToAclActivationLayerInfo(), QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Convolution3dDescriptor::m_StrideX.
inline
Function to convert an axis to its positive equivalent value: [-rank, rank) -> [0, rank).
Definition at line 280 of file ArmComputeUtils.hpp.
References ARMNN_ASSERT, and TensorInfo::GetNumDimensions().
inline
Function to compute the output tensor shape based on the reduction axes and whether keepDims is set.
Definition at line 352 of file ArmComputeUtils.hpp.
References TensorInfo::GetNumDimensions(), and numeric_cast().
Referenced by ChainReduceLayers().
inline
Definition at line 225 of file ArmComputeUtils.hpp.
References ARMNN_ASSERT, TensorInfo::GetNumDimensions(), and SoftmaxDescriptor::m_Axis.
inline
Definition at line 244 of file ArmComputeUtils.hpp.
References ViewsDescriptor::GetNumDimensions(), ViewsDescriptor::GetNumViews(), and ViewsDescriptor::GetViewSizes().
Referenced by ClSplitterWorkload::ClSplitterWorkload(), SplitterLayer::CreateWorkload(), ClLayerSupport::IsSplitterSupported(), NeonLayerSupport::IsSplitterSupported(), and NeonSplitterWorkload::NeonSplitterWorkload().
void Concatenate | ( | const ConcatQueueDescriptor & | data, |
std::vector< ITensorHandle *> | inputs, | ||
std::vector< ITensorHandle *> | outputs | ||
) |
Definition at line 14 of file Concatenate.cpp.
References ARMNN_ASSERT, TensorInfo::GetNumDimensions(), TensorInfo::GetShape(), GetTensorInfo(), ConcatQueueDescriptor::ViewOrigin::m_Origin, ConcatQueueDescriptor::m_ViewOrigins, and MaxNumOfTensorDimensions.
Referenced by RefConcatWorkload::ExecuteAsync().
void armnn::ConditionalThrow | ( | bool | condition, |
const std::string & | message | ||
) |
Definition at line 165 of file Exceptions.hpp.
void armnn::ConditionalThrow | ( | bool | condition | ) |
Definition at line 174 of file Exceptions.hpp.
void armnn::ConditionalThrowIfNotEqual | ( | const std::string & | message, |
const ComparedType & | leftHandSide, | ||
const ComparedType & | rightHandSide | ||
) |
ComparedType must support operator==(const ComparedType&) and operator<<(std::ostream&, const ComparedType&).
Definition at line 189 of file Exceptions.hpp.
void armnn::ConfigureDetailsObject | ( | JsonChildObject & | detailsObject, |
std::string | layerDetailsStr | ||
) |
Definition at line 295 of file Profiling.cpp.
References ExecObjectDesc, JsonChildObject::SetAndParseDetails(), and JsonChildObject::SetType().
void ConfigureLogging | ( | bool | printToStandardOutput, |
bool | printToDebugOutput, | ||
LogSeverity | severity | ||
) |
Configures the logging behaviour of the ARMNN library.
printToStandardOutput | - Set to true if log messages should be printed to the standard output. |
printToDebugOutput | - Set to true if log messages should be printed to a platform-specific debug output (where supported). |
severity | - All log messages at this severity level or higher will be printed; others will be ignored. |
Definition at line 18 of file Utils.cpp.
References SetAllLoggingSinks(), SetLogFilter(), and Trace.
Referenced by ConfigureLoggingTest(), ProfilingServiceRuntimeHelper::ForceTransitionToState(), armnn::test::InferenceTestMain(), and main().
void armnn::ConfigureTuner | ( | arm_compute::CLTuner & | tuner, |
TuningLevel | level | ||
) |
Definition at line 115 of file ClBackendContext.cpp.
References ARMNN_LOG, Exhaustive, info, None, Normal, and Rapid.
Referenced by ClBackendContext::ClBackendContext().
std::tuple< TensorInfo, unsigned int > Convert1HWOTensorInfoToAcl | ( | const TensorInfo & | weightInfo, |
const TensorInfo & | inputInfo, | ||
const DataLayout | dataLayout | ||
) |
Weights for depthwise convolution have a data layout of [1,H,W,O] = [1,H,W,I*M]. This function converts a TensorInfo from [1,H,W,I*M] to [1,I*M,H,W] (if NCHW) or keeps it at [1,H,W,I*M] (if NHWC), as required by the Compute Library. Returns a tuple of the converted weights tensor info and the depth multiplier.
Definition at line 170 of file WorkloadUtils.cpp.
References GetDataLayoutName(), TensorInfo::GetShape(), NCHW, NHWC, and armnnUtils::Permuted().
Referenced by GatherTensorHandlePairs().
std::tuple< ConstTensor, unsigned int > Convert1HWOTensorToAcl | ( | const ConstTensorHandle * | weightTensor, |
const TensorInfo & | inputInfo, | ||
const DataLayout | dataLayout, | ||
void * | permuteBuffer | ||
) |
Weights for depthwise convolution have a data layout of [1,H,W,O] = [1,H,W,I*M]. This function converts a ConstCpuTensorHandle from [1,H,W,I*M] to [1,I*M,H,W] (if NCHW) or keeps it at [1,H,W,I*M] (if NHWC), as required by the Compute Library.
weightTensor | - ConstTensorHandle of weights tensor |
inputInfo | - TensorInfo of input tensor |
dataLayout | - DataLayout of the input tensor |
permuteBuffer | - Pointer to memory with the size of the tensor, used for the permutation |
Definition at line 139 of file WorkloadUtils.cpp.
References GetDataLayoutName(), TensorInfo::GetShape(), ConstTensorHandle::GetTensorInfo(), NCHW, NHWC, and PermuteTensor().
Referenced by GatherTensorHandlePairs().
std::tuple< ConstTensor, unsigned int > Convert1HWOtoMIHW | ( | const ConstTensorHandle * | weightTensor, |
const TensorInfo & | inputInfo, | ||
const DataLayout & | dataLayout, | ||
void * | permuteBuffer | ||
) |
Converts a (weights) tensor from [1, H, W, I*M] = [1, H, W, O] to [M, I, H, W].
weightTensor | - ConstTensorHandle of the weight tensor that should be converted |
inputInfo | - TensorInfo of the corresponding input tensor |
dataLayout | - DataLayout of the input tensor e.g. NHWC or NCHW |
permuteBuffer | - Memory location with the same size as the weight tensor to write converted data to |
Definition at line 201 of file WorkloadUtils.cpp.
References DataLayoutIndexed::GetChannelsIndex(), TensorInfo::GetShape(), ConstTensorHandle::GetTensorInfo(), TensorInfo::HasPerAxisQuantization(), PermuteTensor(), and TensorInfo::SetShape().
Referenced by GatherTensorHandlePairs().
inline
Definition at line 85 of file ArmComputeUtils.hpp.
References ConvertActivationFunctionToAclActivationFunction(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, and ActivationDescriptor::m_Function.
Referenced by ClActivationWorkload::ClActivationWorkload(), ClSqrtWorkload::ClSqrtWorkload(), ComputeConv3DInfo(), ConvertActivationDescriptorToAclActivationLayerInfo(), ConvertAdditionalInfoToAclActivationLayerInfo(), ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo(), NeonActivationWorkload::NeonActivationWorkload(), and NeonSqrtWorkload::NeonSqrtWorkload().
inline
Definition at line 92 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo().
inline
Definition at line 61 of file ArmComputeUtils.hpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by ConvertActivationDescriptorToAclActivationLayerInfo().
inline
Definition at line 103 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo(), and QueueDescriptor::GetAdditionalInformation().
Referenced by ClAdditionWorkload::ClAdditionWorkload(), ClDivisionWorkload::ClDivisionWorkload(), ClFullyConnectedWorkload::ClFullyConnectedWorkload(), ClMultiplicationWorkload::ClMultiplicationWorkload(), ClSubtractionWorkload::ClSubtractionWorkload(), ComputeConv3DInfo(), NeonAdditionWorkload::NeonAdditionWorkload(), NeonDivisionWorkload::NeonDivisionWorkload(), NeonMultiplicationWorkload::NeonMultiplicationWorkload(), and NeonSubtractionWorkload::NeonSubtractionWorkload().
LayerT* armnn::ConvertBf16ToFp32Weight | ( | Layer * | l | ) |
Definition at line 631 of file Network.cpp.
References BFloat16, FloatingPointConverter::ConvertBFloat16ToFloat32(), Convolution2d, Float32, FullyConnected, TensorInfo::GetDataType(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), and info.
inline
Definition at line 139 of file ArmComputeUtils.hpp.
References Equal, Greater, GreaterOrEqual, Less, LessOrEqual, ComparisonDescriptor::m_Operation, and NotEqual.
Referenced by ClComparisonWorkload::ClComparisonWorkload(), and NeonComparisonWorkload::NeonComparisonWorkload().
inline
Definition at line 192 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo(), and FullyConnectedDescriptor::m_TransposeWeightMatrix.
Referenced by ClFullyConnectedWorkload::ClFullyConnectedWorkload().
inline
Definition at line 202 of file ArmComputeUtils.hpp.
References FullyConnectedDescriptor::m_TransposeWeightMatrix.
constexpr LogSeverity armnn::ConvertLogSeverity | ( | BoostLogSeverityMapping | severity | ) |
Definition at line 199 of file Logging.hpp.
inline
Definition at line 116 of file ArmComputeUtils.hpp.
int32_t ConvertMaskToACLFormat | ( | int32_t | mask, |
int32_t | numDim | ||
) |
Definition at line 286 of file WorkloadUtils.cpp.
Referenced by ClStridedSliceWorkload::ClStridedSliceWorkload(), GatherTensorHandlePairs(), and NeonStridedSliceWorkload::NeonStridedSliceWorkload().
inline
Definition at line 338 of file ArmComputeUtils.hpp.
References ReduceDescriptor::m_ReduceOperation, Max, Mean, Min, Prod, and Sum.
armnn::ConstTensor ConvertWeightTensorFromArmnnToAcl | ( | const ConstTensorHandle * | weightTensor, |
DataLayout | dataLayout, | ||
void * | permuteBuffer | ||
) |
Definition at line 230 of file WorkloadUtils.cpp.
References ARMNN_ASSERT_MSG, Float16, Float32, BaseTensor< MemoryType >::GetDataType(), BaseTensor< MemoryType >::GetInfo(), TensorInfo::GetShape(), ConstTensorHandle::GetTensorInfo(), NCHW, NHWC, PermuteTensor(), QAsymmS8, QAsymmU8, QSymmS8, and ReshapeWeightsForAcl().
Referenced by GatherTensorHandlePairs().
TensorInfo ConvertWeightTensorInfoFromArmnnToAcl | ( | const TensorInfo & | weightInfo, |
DataLayout | dataLayout | ||
) |
Definition at line 115 of file WorkloadUtils.cpp.
References NHWC, armnnUtils::Permuted(), and ReshapeWeightsForAcl().
Referenced by GatherTensorHandlePairs().
void Convolve | ( | const TensorShape & | rInputShape, |
Decoder< float > & | rInputDecoder, | ||
const TensorShape & | rOutputShape, | ||
Encoder< float > & | rOutputEncoder, | ||
const TensorShape & | rFilterShape, | ||
Decoder< float > & | rFilterDecoder, | ||
bool | biasEnabled, | ||
Decoder< float > * | pBiasDecoder, | ||
DataLayout | dataLayout, | ||
unsigned int | paddingTop, | ||
unsigned int | paddingLeft, | ||
unsigned int | xStride, | ||
unsigned int | yStride, | ||
unsigned int | xDilation, | ||
unsigned int | yDilation, | ||
bool | depthwise | ||
) |
Definition at line 71 of file ConvImpl.cpp.
References Decoder< IType >::DecodeTensor(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), NHWC, and Encoder< IType >::Set().
Referenced by RefDepthwiseConvolution2dWorkload::ExecuteAsync(), and RefConvolution2dWorkload::ExecuteAsync().
void Convolve3d | ( | const TensorShape & | rInputShape, |
Decoder< float > & | rInputDecoder, | ||
const TensorShape & | rOutputShape, | ||
Encoder< float > & | rOutputEncoder, | ||
const TensorShape & | rFilterShape, | ||
Decoder< float > & | rFilterDecoder, | ||
bool | biasEnabled, | ||
Decoder< float > * | pBiasDecoder, | ||
DataLayout | dataLayout, | ||
unsigned int | paddingTop, | ||
unsigned int | paddingLeft, | ||
unsigned int | paddingFront, | ||
unsigned int | xStride, | ||
unsigned int | yStride, | ||
unsigned int | zStride, | ||
unsigned int | xDilation, | ||
unsigned int | yDilation, | ||
unsigned int | zDilation | ||
) |
Definition at line 11 of file Conv3dImpl.cpp.
References Decoder< IType >::DecodeTensor(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetDepthIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), NDHWC, and Encoder< IType >::Set().
Referenced by RefConvolution3dWorkload::ExecuteAsync().
void armnn::CopyArmComputeClTensorData | ( | arm_compute::CLTensor & | dstTensor, |
const T * | srcData | ||
) |
Definition at line 55 of file ClWorkloadUtils.hpp.
References ARMNN_SCOPED_PROFILING_EVENT_CL.
Referenced by ClConstantWorkload::Execute().
void armnn::CopyArmComputeTensorData | ( | arm_compute::Tensor & | dstTensor, |
const T * | srcData | ||
) |
Definition at line 54 of file NeonWorkloadUtils.hpp.
Referenced by InitializeArmComputeTensorData().
void armnn::CopyTensorContentsGeneric | ( | const ITensorHandle * | srcTensor, |
ITensorHandle * | dstTensor, | ||
CopyFunc | copy | ||
) |
Definition at line 46 of file WorkloadUtils.hpp.
References ARMNN_ASSERT, ARMNN_SCOPED_PROFILING_EVENT, TensorShape::GetNumDimensions(), ITensorHandle::GetShape(), ITensorHandle::GetStrides(), IgnoreUnused(), ITensorHandle::Map(), MaxNumOfTensorDimensions, Undefined, and ITensorHandle::Unmap().
Referenced by CopyToOutputTensor(), NeonConvertBf16ToFp32Workload::Execute(), NeonConvertFp32ToBf16Workload::Execute(), NeonConvertFp16ToFp32Workload::Execute(), NeonConvertFp32ToFp16Workload::Execute(), CopyMemGenericWorkload::Execute(), CopyMemGenericWorkload::ExecuteAsync(), and LoadedNetwork::FreeWorkingMemory().
void armnn::CopyToOutputTensor | ( | const Tensor & | outputTensor, |
ITensorHandle * | outputTensorHandle | ||
) |
Definition at line 1294 of file LoadedNetwork.cpp.
References CopyTensorContentsGeneric(), BaseTensor< MemoryType >::GetInfo(), and BaseTensor< MemoryType >::GetMemoryArea().
Referenced by LoadedNetwork::Execute().
inline
Definition at line 28 of file ArmComputeUtils.hpp.
References TensorInfo::GetShape(), and NCHW.
inline
Definition at line 57 of file ClContextSchema_generated.h.
References ClContextBuilder::add_programs(), and ClContextBuilder::Finish().
Referenced by CreateClContextDirect(), and ClContextSerializer::Serialize().
inline
Definition at line 65 of file ClContextSchema_generated.h.
References CreateClContext().
OriginsDescriptor armnn::CreateDescriptorForConcatenation | ( | TensorShapeIt | first, |
TensorShapeIt | last, | ||
unsigned int | concatenationDimension | ||
) |
Convenience template to create an OriginsDescriptor to use when creating a ConcatLayer for performing concatenation of a number of input tensors.
Definition at line 261 of file Descriptors.hpp.
References OriginsDescriptor::SetConcatAxis(), and OriginsDescriptor::SetViewOriginCoord().
Referenced by ConcatDifferentInputOutputQParamTest(), CreateDescriptorForConcat(), and TEST_SUITE().
inline
Definition at line 118 of file ClContextSchema_generated.h.
References ProgramBuilder::add_binary(), ProgramBuilder::add_name(), and ProgramBuilder::Finish().
Referenced by CreateProgramDirect(), and ClContextSerializer::Serialize().
inline
Definition at line 128 of file ClContextSchema_generated.h.
References CreateProgram().
BackendsMap CreateSupportedBackends | ( | TensorHandleFactoryRegistry & | handleFactoryRegistry, |
BackendSettings & | backendSettings | ||
) |
Definition at line 1120 of file Network.cpp.
References ARMNN_ASSERT, BackendRegistryInstance(), and BackendSettings::m_SupportedBackends.
Referenced by Optimize().
void Debug | ( | const TensorInfo & | inputInfo, |
const T * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Definition at line 19 of file Debug.cpp.
References Debug< BFloat16 >(), Debug< float >(), Debug< Half >(), Debug< int16_t >(), Debug< int32_t >(), Debug< int8_t >(), Debug< uint8_t >(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), and TensorInfo::GetShape().
Referenced by RefDebugWorkload< DataType >::ExecuteAsync().
template void armnn::Debug< BFloat16 > | ( | const TensorInfo & | inputInfo, |
const BFloat16 * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< float > | ( | const TensorInfo & | inputInfo, |
const float * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< Half > | ( | const TensorInfo & | inputInfo, |
const Half * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< int16_t > | ( | const TensorInfo & | inputInfo, |
const int16_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< int32_t > | ( | const TensorInfo & | inputInfo, |
const int32_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< int8_t > | ( | const TensorInfo & | inputInfo, |
const int8_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< uint8_t > | ( | const TensorInfo & | inputInfo, |
const uint8_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
void DepthToSpace | ( | const TensorInfo & | inputInfo, |
const DepthToSpaceDescriptor & | descriptor, | ||
const void * | inputData, | ||
void * | outputData, | ||
unsigned int | dataTypeSize | ||
) |
Definition at line 18 of file DepthToSpace.cpp.
References ARMNN_ASSERT, DepthToSpace(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), TensorShape::GetNumElements(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), SpaceToDepthDescriptor::m_BlockSize, SpaceToDepthDescriptor::m_DataLayout, NCHW, and armnnUtils::Permute().
Referenced by DepthToSpace(), and TEST_SUITE().
void Dequantize | ( | Decoder< float > & | inputDecoder, |
Encoder< float > & | outputEncoder, | ||
const TensorInfo & | inputInfo, | ||
const TensorInfo & | outputInfo | ||
) |
Definition at line 13 of file Dequantize.cpp.
References ARMNN_ASSERT, Decoder< IType >::Get(), TensorInfo::GetNumElements(), IgnoreUnused(), and Encoder< IType >::Set().
std::vector<float> armnn::Dequantize | ( | const T * | quant, |
const TensorInfo & | info | ||
) |
u8 helpers
Definition at line 95 of file RefWorkloadUtils.hpp.
References Dequantize(), TensorInfo::GetNumElements(), TensorInfo::GetQuantizationOffset(), and TensorInfo::GetQuantizationScale().
inline
Definition at line 106 of file RefWorkloadUtils.hpp.
References TensorInfo::GetNumElements(), TensorInfo::GetQuantizationOffset(), and TensorInfo::GetQuantizationScale().
float Dequantize | ( | QuantizedType | value, |
float | scale, | ||
int32_t | offset | ||
) |
Dequantize an 8-bit data type into a floating point data type.
value | - The value to dequantize. |
scale | - The scale (must be non-zero). |
offset | - The offset. |
Definition at line 46 of file TypesUtils.cpp.
References ARMNN_ASSERT.
Referenced by SelectiveQuantizer< T, DoQuantize >::Dequantize(), Dequantize(), TensorPrinter::operator()(), and TEST_SUITE().
void DetectionPostProcess | ( | const TensorInfo & | boxEncodingsInfo, |
const TensorInfo & | scoresInfo, | ||
const TensorInfo & | anchorsInfo, | ||
const TensorInfo & | detectionBoxesInfo, | ||
const TensorInfo & | detectionClassesInfo, | ||
const TensorInfo & | detectionScoresInfo, | ||
const TensorInfo & | numDetectionsInfo, | ||
const DetectionPostProcessDescriptor & | desc, | ||
Decoder< float > & | boxEncodings, | ||
Decoder< float > & | scores, | ||
Decoder< float > & | anchors, | ||
float * | detectionBoxes, | ||
float * | detectionClasses, | ||
float * | detectionScores, | ||
float * | numDetections | ||
) |
Definition at line 140 of file DetectionPostProcess.cpp.
References AllocateOutputData(), ARMNN_ASSERT, GenerateRangeK(), Decoder< IType >::Get(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), IgnoreUnused(), DetectionPostProcessDescriptor::m_DetectionsPerClass, DetectionPostProcessDescriptor::m_MaxClassesPerDetection, DetectionPostProcessDescriptor::m_MaxDetections, DetectionPostProcessDescriptor::m_NmsIouThreshold, DetectionPostProcessDescriptor::m_NmsScoreThreshold, DetectionPostProcessDescriptor::m_NumClasses, DetectionPostProcessDescriptor::m_ScaleH, DetectionPostProcessDescriptor::m_ScaleW, DetectionPostProcessDescriptor::m_ScaleX, DetectionPostProcessDescriptor::m_ScaleY, DetectionPostProcessDescriptor::m_UseRegularNms, NonMaxSuppression(), numeric_cast(), and TopKSort().
Referenced by TEST_SUITE().
void armnn::ExtractJsonObjects | ( | unsigned int | inferenceIndex, |
const Event * | parentEvent, | ||
JsonChildObject & | parentObject, | ||
std::map< const Event *, std::vector< const Event *>> | descendantsMap | ||
) |
Definition at line 303 of file Profiling.cpp.
References JsonChildObject::AddChild(), JsonChildObject::AddMeasurement(), ARMNN_ASSERT, Event, JsonChildObject::GetChild(), Event::GetMeasurements(), Event::GetProfilingGuid(), OptionalBase::has_value(), Measurement, JsonChildObject::NumChildren(), JsonChildObject::SetGuid(), JsonChildObject::SetType(), JsonChildObject::SetUnit(), and OptionalReferenceSwitch< IsReference, T >::value().
Referenced by ProfilerImpl::Print().
void armnn::FakeQuantization | ( | const float * | inputData, |
float * | outputData, | ||
uint32_t | numElements, | ||
float | min, | ||
float | max | ||
) |
Definition at line 17 of file RefFakeQuantizationFloat32Workload.cpp.
References numeric_cast().
Referenced by TEST_SUITE().
bool armnn::FalseFunc | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
bool armnn::FalseFuncF16 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 70 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseFuncF32 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 78 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseFuncI32 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 94 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseFuncU8 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 86 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseInputFuncF16 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 110 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseInputFuncF32 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 102 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseOutputFuncF16 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 126 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseOutputFuncF32 | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
Definition at line 118 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
void Fill | ( | Encoder< float > & | output, |
const TensorShape & | desiredOutputShape, | ||
const float | value | ||
) |
Creates a tensor and fills it with a scalar value.
Definition at line 13 of file Fill.cpp.
References TensorShape::GetNumElements(), and Encoder< IType >::Set().
Referenced by TEST_SUITE().
std::vector<Measurement> armnn::FindKernelMeasurements | ( | const Event * | event | ) |
Measurement armnn::FindMeasurement | ( | const std::string & | name, |
const Event * | event | ||
) |
Definition at line 43 of file Profiling.cpp.
References ARMNN_ASSERT, and Event::GetMeasurements().
Referenced by ProfilerImpl::AnalyzeEventSequenceAndWriteResults(), and ProfilerImpl::CalculateProfilingEventStats().
inline
Definition at line 171 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline
Definition at line 177 of file ClContextSchema_generated.h.
References ClContextIdentifier().
void armnn::ForEachLayerInput | ( | LayerSelectionInfo::LayerInfoContainer & | layerInfos, |
LayerSelectionInfo & | layerInfo, | ||
Delegate | function | ||
) |
Definition at line 267 of file SubgraphViewSelector.cpp.
References ARMNN_ASSERT_MSG, and Layer::GetInputSlots().
Referenced by AssignSplitId(), and IsReadyForSplitAssignment().
void armnn::ForEachLayerOutput | ( | LayerSelectionInfo::LayerInfoContainer & | layerInfos, |
LayerSelectionInfo & | layerInfo, | ||
Delegate | function | ||
) |
Definition at line 288 of file SubgraphViewSelector.cpp.
References Layer::GetOutputSlots().
Referenced by SubgraphViewSelector::SelectSubgraphs().
void FullyConnected | ( | const TensorShape & | rInputShape, |
Decoder< float > & | rInputDecoder, | ||
const TensorShape & | rOutputShape, | ||
Encoder< float > & | rOutputEncoder, | ||
const TensorShape & | rWeightsShape, | ||
Decoder< float > & | rWeightDecoder, | ||
Decoder< float > * | pBiasDecoder, | ||
const bool | biasEnabled, | ||
const unsigned int | K, | ||
const bool | transposeWeights | ||
) |
Performs a matrix multiplication and optionally adds a bias.
Definition at line 15 of file FullyConnected.cpp.
References ARMNN_ASSERT, Decoder< IType >::DecodeTensor(), and Encoder< IType >::Set().
LayerType* armnn::FuseAdditionLayer | ( | OptimizationViews & | optimizationViews, |
LayerType * | baseLayer, | ||
ActivationLayer * | activationLayer, | ||
ActivationDescriptor & | activationDesc, | ||
std::string | name | ||
) |
Definition at line 116 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseBatchNormalizationLayer | ( | OptimizationViews & | optimizationViews, |
LayerType * | baseLayer, | ||
ActivationLayer * | activationLayer, | ||
ActivationDescriptor & | activationDesc, | ||
std::string | name | ||
) |
Definition at line 192 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseConvolution2dLayer | ( | OptimizationViews & | optimizationViews, |
LayerType * | baseLayer, | ||
ActivationLayer * | activationLayer, | ||
ActivationDescriptor & | activationDesc, | ||
std::string | name | ||
) |
Definition at line 222 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseDepthwiseConvolution2dLayer | ( | OptimizationViews & | optimizationViews, |
LayerType * | baseLayer, | ||
ActivationLayer * | activationLayer, | ||
ActivationDescriptor & | activationDesc, | ||
std::string | name | ||
) |
Definition at line 246 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseDivisionLayer(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 154 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseFullyConnectedLayer(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 270 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseLayer(OptimizationViews& optimizationViews, LayerType* baseLayer, LayerType* replacementLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc)
Definition at line 96 of file ArmComputeSubgraphUtils.hpp.
References OptimizationViews::AddSubstitution().
Referenced by FuseAdditionLayer(), FuseBatchNormalizationLayer(), FuseConvolution2dLayer(), FuseDepthwiseConvolution2dLayer(), FuseDivisionLayer(), FuseFullyConnectedLayer(), FuseMultiplicationLayer(), and FuseSubtractionLayer().
LayerType* armnn::FuseMultiplicationLayer(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 173 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
LayerType* armnn::FuseSubtractionLayer(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 135 of file ArmComputeSubgraphUtils.hpp.
References FuseLayer(), and OptimizationViews::GetINetwork().
void Gather(const TensorInfo& paramsInfo,
            const TensorInfo& indicesInfo,
            const TensorInfo& outputInfo,
            Decoder<float>& params,
            const int32_t* indices,
            Encoder<float>& output,
            const int32_t axis)
Definition at line 17 of file Gather.cpp.
References ARMNN_ASSERT, Decoder< IType >::Get(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), IgnoreUnused(), numeric_cast(), and Encoder< IType >::Set().
Referenced by TEST_SUITE().
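Conceptually, Gather copies whole slices of the params tensor along the given axis, selected by the entries of the indices tensor. A minimal sketch of the axis-0 case over a flat buffer (illustrative only; the name `GatherAxis0` is an assumption, and the real workload goes through Decoder/Encoder and supports arbitrary axes):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Axis-0 gather: output row i is params row indices[i].
// sliceSize is the number of elements in one row of params.
std::vector<float> GatherAxis0(const std::vector<float>& params,
                               const std::vector<int32_t>& indices,
                               std::size_t sliceSize)
{
    std::vector<float> output;
    output.reserve(indices.size() * sliceSize);
    for (int32_t index : indices)
    {
        for (std::size_t j = 0; j < sliceSize; ++j)
        {
            output.push_back(params[static_cast<std::size_t>(index) * sliceSize + j]);
        }
    }
    return output;
}
```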
void armnn::GatherTensorHandlePairs(const DescriptorType& descriptor, std::vector<std::pair<SrcTensorHandleType*, DstTensorHandleType*>>& tensorHandlePairs)
Definition at line 189 of file WorkloadUtils.hpp.
References CalculateGatherNdKeyIndices(), Convert1HWOTensorInfoToAcl(), Convert1HWOTensorToAcl(), Convert1HWOtoMIHW(), ConvertMaskToACLFormat(), ConvertWeightTensorFromArmnnToAcl(), ConvertWeightTensorInfoFromArmnnToAcl(), PermuteTensor(), and ReshapeWeightsForAcl().
Referenced by CopyMemGenericWorkload::CopyMemGenericWorkload(), CopyMemGenericWorkload::ExecuteAsync(), NeonConvertBf16ToFp32Workload::NeonConvertBf16ToFp32Workload(), NeonConvertFp16ToFp32Workload::NeonConvertFp16ToFp32Workload(), NeonConvertFp32ToBf16Workload::NeonConvertFp32ToBf16Workload(), and NeonConvertFp32ToFp16Workload::NeonConvertFp32ToFp16Workload().
std::vector<unsigned int> armnn::GenerateRangeK(unsigned int k)
Definition at line 17 of file DetectionPostProcess.cpp.
Referenced by DetectionPostProcess(), and NonMaxSuppression().
constexpr char const* armnn::GetActivationFunctionAsCString(ActivationFunction activation)
Definition at line 27 of file TypesUtils.hpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by StringifyLayerParameters< ActivationDescriptor >::Serialize().
constexpr char const* armnn::GetArgMinMaxFunctionAsCString(ArgMinMaxFunction function)
Definition at line 47 of file TypesUtils.hpp.
Definition at line 27 of file WorkloadData.cpp.
References ARMNN_ASSERT_MSG, ARMNN_LOG, BFloat16, CHECK_LOCATION, TensorInfo::GetDataType(), GetDataTypeName(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetQuantizationDim(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::GetQuantizationScales(), TensorInfo::GetShape(), OptionalBase::has_value(), TensorInfo::HasMultipleQuantizationScales(), TensorInfo::HasPerAxisQuantization(), info, TensorInfo::IsQuantized(), IsQuantized8BitType(), TensorInfo::IsTypeSpaceMatch(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, OptionalReferenceSwitch< std::is_reference< T >::value, T >::value(), and warning.
Referenced by CompareDepthwiseConvolution2dTestImpl(), TEST_SUITE(), FullyConnectedQueueDescriptor::Validate(), Convolution2dQueueDescriptor::Validate(), Convolution3dQueueDescriptor::Validate(), DepthwiseConvolution2dQueueDescriptor::Validate(), and TransposeConvolution2dQueueDescriptor::Validate().
inline
Definition at line 14 of file LayerSupportRules.hpp.
References ARMNN_ASSERT_MSG, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, Signed32, and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by BiasAndWeightsTypesCompatible::BiasAndWeightsTypesCompatible(), BiasAndWeightsTypesMatch::BiasAndWeightsTypesMatch(), and FullyConnectedTest().
Optional<const BackendOptions::BackendOption> GetCapability(const std::string& backendCapabilityName, const BackendCapabilities& capabilities)
Returns a BackendCapability if the backend lists the capability. The returned BackendCapability must then be inspected to check whether that capability is supported; otherwise returns an EmptyOptional if the capability is unlisted.
Definition at line 30 of file BackendHelper.cpp.
References BackendOptions::GetOption(), and BackendOptions::GetOptionCount().
Referenced by GetCapability(), HasCapability(), LayerSupportHandle::IsConvolution2dSupported(), LayerSupportHandle::IsDepthwiseConvolutionSupported(), LayerSupportHandle::IsDilatedDepthwiseConvolutionSupported(), LayerSupportHandle::IsFullyConnectedSupported(), and TEST_SUITE().
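Callers check the returned Optional before inspecting the option's value. The lookup pattern can be sketched with std::optional standing in for armnn::Optional; the helper name `GetCapabilitySketch` and the capability name "NonConstWeights" used below are illustrative assumptions:

```cpp
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Stand-in for a BackendCapabilities option list: (name, bool value) pairs.
using OptionList = std::vector<std::pair<std::string, bool>>;

// Sketch of the GetCapability pattern: scan the listed options and return the
// matching one, or an empty optional when the capability is unlisted.
std::optional<bool> GetCapabilitySketch(const std::string& name, const OptionList& options)
{
    for (const auto& option : options)
    {
        if (option.first == name)
        {
            return option.second;
        }
    }
    return std::nullopt; // plays the role of EmptyOptional
}
```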
Optional<const BackendOptions::BackendOption> GetCapability(const std::string& backendCapabilityName, const armnn::BackendId& backend)
Returns a BackendCapability if the backend lists the capability. The returned BackendCapability must then be inspected to check whether that capability is supported; otherwise returns an EmptyOptional if the capability is unlisted.
Definition at line 44 of file BackendHelper.cpp.
References BackendRegistryInstance(), and GetCapability().
inline
Definition at line 140 of file ClContextSchema_generated.h.
Referenced by ClContextDeserializer::DeserializeFromBinary().
constexpr char const* armnn::GetComparisonOperationAsCString(ComparisonOperation operation)
Definition at line 57 of file TypesUtils.hpp.
References Equal, Greater, GreaterOrEqual, Less, LessOrEqual, and NotEqual.
Referenced by armnnTfLiteParser::ComputeWrappedIndex(), RefComparisonWorkload::ExecuteAsync(), and StringifyLayerParameters< ComparisonDescriptor >::Serialize().
constexpr char const* armnn::GetComputeDeviceAsCString(Compute compute)
Deprecated function that will be removed together with the Compute enum.
Definition at line 34 of file BackendId.hpp.
References CpuAcc, CpuRef, and GpuAcc.
Referenced by GetSuitableBackendRegistered(), operator<<(), and TEST_SUITE().
inline
Definition at line 37 of file ClWorkloadUtils.hpp.
constexpr const char* armnn::GetDataLayoutName(DataLayout dataLayout)
Definition at line 222 of file TypesUtils.hpp.
References NCDHW, NCHW, NDHWC, and NHWC.
Referenced by Convert1HWOTensorInfoToAcl(), Convert1HWOTensorToAcl(), MakeTensorShape(), PermuteDepthwiseConv2dWeightsImpl::Run(), StringifyLayerParameters< BatchNormalizationDescriptor >::Serialize(), StringifyLayerParameters< BatchToSpaceNdDescriptor >::Serialize(), StringifyLayerParameters< Convolution2dDescriptor >::Serialize(), StringifyLayerParameters< Convolution3dDescriptor >::Serialize(), StringifyLayerParameters< DepthwiseConvolution2dDescriptor >::Serialize(), StringifyLayerParameters< L2NormalizationDescriptor >::Serialize(), StringifyLayerParameters< NormalizationDescriptor >::Serialize(), StringifyLayerParameters< Pooling2dDescriptor >::Serialize(), StringifyLayerParameters< Pooling3dDescriptor >::Serialize(), StringifyLayerParameters< ResizeDescriptor >::Serialize(), StringifyLayerParameters< SpaceToBatchNdDescriptor >::Serialize(), StringifyLayerParameters< SpaceToDepthDescriptor >::Serialize(), StringifyLayerParameters< StridedSliceDescriptor >::Serialize(), and StringifyLayerParameters< TransposeConvolution2dDescriptor >::Serialize().
constexpr const char* armnn::GetDataTypeName(DataType dataType)
Definition at line 202 of file TypesUtils.hpp.
References BFloat16, Boolean, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, Signed32, and Signed64.
Referenced by armnnTfLiteParser::AsFloatArray(), AttemptBackendAssignment(), CompareConstTensor(), ProfilingDetails::DetailsExist(), GetBiasDataType(), TfLiteParserImpl::GetBuffer(), RefTransposeWorkload< DataType >::GetName(), RefPermuteWorkload< DataType >::GetName(), RefDebugWorkload< DataType >::GetName(), armnnUtils::GetPerAxisParams(), TEST_SUITE(), LayerVerifierBase::VerifyConstTensors(), LayerVerifierBase::VerifyNameAndConnections(), and VerifyTensorInfoDataType().
constexpr unsigned int armnn::GetDataTypeSize(DataType dataType)
Definition at line 151 of file TypesUtils.hpp.
References BFloat16, Boolean, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, Signed32, and Signed64.
Referenced by MockTensorHandle::CanBeImported(), RefTensorHandle::CanBeImported(), DepthwiseConvolution2dDepthMul64Test(), RefDepthToSpaceWorkload::ExecuteAsync(), RefStridedSliceWorkload::ExecuteAsync(), RefSliceWorkload::ExecuteAsync(), RefShapeWorkload::ExecuteAsync(), IDeserializer::DeserializerImpl::GetNetworkOutputBindingInfo(), TensorInfo::GetNumBytes(), GetUnpaddedTensorStrides(), PermuteTensor(), ConvertConstPermuteLayersToConstLayers::Run(), and TEST_SUITE().
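The mapping is a switch over the DataType enum returning the per-element byte count, which callers such as TensorInfo::GetNumBytes() multiply by the element count. A self-contained sketch using a local enum rather than armnn's (the subset of types and the helper names `DataTypeSize`/`TensorBytes` are assumptions for illustration):

```cpp
#include <cstddef>

enum class DataType { Boolean, QAsymmU8, QSymmS16, Float16, Float32, Signed32, Signed64 };

// Sketch of a data-type size lookup: bytes occupied by one element of each type.
constexpr unsigned int DataTypeSize(DataType type)
{
    switch (type)
    {
        case DataType::Boolean:
        case DataType::QAsymmU8:  return 1;
        case DataType::QSymmS16:
        case DataType::Float16:   return 2;
        case DataType::Float32:
        case DataType::Signed32:  return 4;
        case DataType::Signed64:  return 8;
    }
    return 0;
}

// Total dense-buffer size: number of elements times element size.
constexpr std::size_t TensorBytes(std::size_t numElements, DataType type)
{
    return numElements * DataTypeSize(type);
}
```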
Definition at line 109 of file Profiling.cpp.
Referenced by ProfilerImpl::AnalyzeEventSequenceAndWriteResults().
Definition at line 110 of file Profiling.cpp.
Graph& GetGraphForTesting(IOptimizedNetwork* optNet)
Definition at line 49 of file TestUtils.cpp.
References IOptimizedNetwork::pOptimizedNetworkImpl.
Referenced by CheckRelatedLayers(), and TEST_SUITE().
LayerSupportHandle GetILayerSupportByBackendId(const armnn::BackendId& backend)
Convenience function to retrieve the ILayerSupportHandle for a backend.
Definition at line 16 of file BackendHelper.cpp.
References BackendRegistryInstance(), BackendRegistry::GetFactory(), and BackendRegistry::IsBackendRegistered().
Referenced by TEST_SUITE().
const armnn::ConstTensor armnn::GetInputTensor(const LayerBindingId layerId, const InputTensors& inputTensors)
Definition at line 1309 of file LoadedNetwork.cpp.
const DataType* armnn::GetInputTensorData(unsigned int idx, const PayloadType& data)
Definition at line 35 of file RefWorkloadUtils.hpp.
References GetOutputTensorData(), and ITensorHandle::Map().
const BFloat16* armnn::GetInputTensorDataBFloat16(unsigned int idx, const PayloadType& data)
Definition at line 79 of file RefWorkloadUtils.hpp.
const float* armnn::GetInputTensorDataFloat(unsigned int idx, const PayloadType& data)
Definition at line 55 of file RefWorkloadUtils.hpp.
const Half* armnn::GetInputTensorDataHalf(unsigned int idx, const PayloadType& data)
Definition at line 67 of file RefWorkloadUtils.hpp.
char const* GetLayerTypeAsCString(LayerType type)
Definition at line 13 of file InternalTypes.cpp.
References ARMNN_ASSERT_MSG, and LIST_OF_LAYER_TYPE.
Referenced by AttemptBackendAssignment(), CheckScaleSetOnQuantizedType(), Connect(), TestInputLayerVisitor::ExecuteStrategy(), TestConvolution2dLayerVisitor::ExecuteStrategy(), StrategyBase< NoThrowStrategy >::ExecuteStrategy(), TestOutputLayerVisitor::ExecuteStrategy(), TestDepthwiseConvolution2dLayerVisitor::ExecuteStrategy(), TestFullyConnectedLayerVistor::ExecuteStrategy(), TestBatchNormalizationLayerVisitor::ExecuteStrategy(), TestConstantLayerVisitor::ExecuteStrategy(), TestLstmLayerVisitor::ExecuteStrategy(), TestQLstmLayerVisitor::ExecuteStrategy(), TestQuantizedLstmLayerVisitor::ExecuteStrategy(), ElementwiseBaseLayer::InferOutputShapes(), Layer::InferOutputShapes(), Graph::InferTensorInfos(), Graph::Print(), ReturnWithError(), Layer::SerializeLayerParameters(), Graph::SerializeToDot(), TEST_SUITE(), ElementwiseBaseLayer::ValidateTensorShapesFromInputs(), ElementwiseUnaryLayer::ValidateTensorShapesFromInputs(), Graph::VerifyConstantLayerSetTensorInfo(), and Layer::VerifyLayerConnections().
constexpr char const* armnn::GetLogicalBinaryOperationAsCString(LogicalBinaryOperation operation)
Definition at line 87 of file TypesUtils.hpp.
References LogicalAnd, and LogicalOr.
Referenced by RefLogicalBinaryWorkload::ExecuteAsync().
constexpr const char* armnn::GetMemBlockStrategyTypeName(MemBlockStrategyType memBlockStrategyType)
Definition at line 264 of file TypesUtils.hpp.
References MultiAxisPacking, and SingleAxisPacking.
Referenced by RuntimeImpl::RuntimeImpl().
std::unique_ptr<IMemoryOptimizerStrategy> armnn::GetMemoryOptimizerStrategy(const std::string& strategyName)
Definition at line 36 of file MemoryOptimizerStrategyLibrary.hpp.
Referenced by main(), RuntimeImpl::RuntimeImpl(), and TEST_SUITE().
const std::vector<std::string> armnn::GetMemoryOptimizerStrategyNames()
Definition at line 47 of file MemoryOptimizerStrategyLibrary.hpp.
Referenced by ParseOptions(), and TEST_SUITE().
ModelOptions& GetModelOptionsForTesting(IOptimizedNetwork* optNet)
Definition at line 54 of file TestUtils.cpp.
References IOptimizedNetwork::pOptimizedNetworkImpl.
Referenced by CheckRelatedLayers(), and TEST_SUITE().
constexpr const char* armnn::GetNormalizationAlgorithmChannelAsCString(NormalizationAlgorithmChannel channel)
Definition at line 234 of file TypesUtils.hpp.
References Across, and Within.
Referenced by StringifyLayerParameters< NormalizationDescriptor >::Serialize().
constexpr const char* armnn::GetNormalizationAlgorithmMethodAsCString(NormalizationAlgorithmMethod method)
Definition at line 244 of file TypesUtils.hpp.
References LocalBrightness, and LocalContrast.
Referenced by StringifyLayerParameters< NormalizationDescriptor >::Serialize().
unsigned int armnn::GetNumActivations(const TensorInfo& inputInfo)
Definition at line 16 of file RefFullyConnectedWorkload.cpp.
References TensorInfo::GetNumDimensions(), and TensorInfo::GetShape().
unsigned int GetNumberOfCacheFiles(const armnn::BackendId& backend)
Returns the number of cached files if backend supports caching.
Definition at line 129 of file BackendHelper.cpp.
References BackendRegistryInstance().
uint32_t armnn::GetNumInputs(bool biasEnabled)
Definition at line 428 of file Descriptors.cpp.
Referenced by FullyConnectedDescriptor::GetNumInputs(), Convolution2dDescriptor::GetNumInputs(), Convolution3dDescriptor::GetNumInputs(), DepthwiseConvolution2dDescriptor::GetNumInputs(), FullyConnectedDescriptor::GetNumViews(), FullyConnectedDescriptor::operator==(), Convolution2dDescriptor::operator==(), Convolution3dDescriptor::operator==(), and DepthwiseConvolution2dDescriptor::operator==().
unsigned int armnn::GetOffset(const TensorShape& shape, unsigned int b, unsigned int h, unsigned int w, unsigned int c, const DataLayoutIndexed& dataLayout)
Definition at line 15 of file SpaceToBatchNd.cpp.
References DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), and NHWC.
Referenced by SpaceToBatchNd(), and SpaceToDepth().
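For NHWC the linearised index of element (b, h, w, c) is ((b*H + h)*W + w)*C + c, while for NCHW it is ((b*C + c)*H + h)*W + w. A minimal sketch with explicit dimensions (the real function reads H, W and C out of the TensorShape via DataLayoutIndexed; the `Offset` helper below is an illustrative stand-in):

```cpp
// Linear offset of element (b, h, w, c) in a dense 4D tensor in either layout.
enum class Layout { NCHW, NHWC };

unsigned int Offset(Layout layout,
                    unsigned int b, unsigned int h, unsigned int w, unsigned int c,
                    unsigned int H, unsigned int W, unsigned int C)
{
    if (layout == Layout::NHWC)
    {
        return ((b * H + h) * W + w) * C + c; // channels vary fastest
    }
    return ((b * C + c) * H + h) * W + w;     // width varies fastest
}
```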
constexpr char const* armnn::GetOutputShapeRoundingAsCString(OutputShapeRounding rounding)
Definition at line 108 of file TypesUtils.hpp.
References Ceiling, and Floor.
Referenced by StringifyLayerParameters< Pooling2dDescriptor >::Serialize(), and StringifyLayerParameters< Pooling3dDescriptor >::Serialize().
const armnn::Tensor armnn::GetOutputTensor(const LayerBindingId layerId, const OutputTensors& outputTensors)
Definition at line 1322 of file LoadedNetwork.cpp.
DataType* armnn::GetOutputTensorData(ITensorHandle* tensorHandle)
DataType* GetOutputTensorData(unsigned int idx, const PayloadType& data)
Definition at line 168 of file ClWorkloadUtils.hpp.
References ITensorHandle::Map().
Referenced by GetInputTensorData(), and SetNeonSliceData().
BFloat16* armnn::GetOutputTensorDataBFloat16(unsigned int idx, const PayloadType& data)
Definition at line 85 of file RefWorkloadUtils.hpp.
float* armnn::GetOutputTensorDataFloat(unsigned int idx, const PayloadType& data)
Definition at line 61 of file RefWorkloadUtils.hpp.
Half* armnn::GetOutputTensorDataHalf(unsigned int idx, const PayloadType& data)
Definition at line 73 of file RefWorkloadUtils.hpp.
constexpr char const* armnn::GetPaddingMethodAsCString(PaddingMethod method)
Definition at line 118 of file TypesUtils.hpp.
References Exclude, and IgnoreValue.
Referenced by StringifyLayerParameters< Pooling2dDescriptor >::Serialize(), and StringifyLayerParameters< Pooling3dDescriptor >::Serialize().
constexpr char const* armnn::GetPaddingModeAsCString(PaddingMode mode)
Definition at line 128 of file TypesUtils.hpp.
References Constant, Reflect, and Symmetric.
Referenced by StringifyLayerParameters< PadDescriptor >::Serialize().
constexpr char const* armnn::GetPoolingAlgorithmAsCString(PoolingAlgorithm pooling)
Definition at line 97 of file TypesUtils.hpp.
References Average, L2, and Max.
Referenced by StringifyLayerParameters< Pooling2dDescriptor >::Serialize(), and StringifyLayerParameters< Pooling3dDescriptor >::Serialize().
size_t armnn::GetProfilerEventSequenceSize(armnn::IProfiler* profiler)
Definition at line 19 of file ProfilerTests.cpp.
References ProfilerManager::GetInstance(), ProfilerManager::GetProfiler(), and ProfilerManager::RegisterProfiler().
Referenced by TEST_SUITE().
arm::pipe::IProfilingService& GetProfilingService(armnn::RuntimeImpl* runtime)
Definition at line 59 of file TestUtils.cpp.
Referenced by CheckRelatedLayers(), TEST_SUITE(), and VerifyPostOptimisationStructureTestImpl().
constexpr char const* armnn::GetReduceOperationAsCString(ReduceOperation reduce_operation)
Definition at line 139 of file TypesUtils.hpp.
References Max, Mean, Min, Prod, and Sum.
Referenced by StringifyLayerParameters< ReduceDescriptor >::Serialize().
constexpr const char* armnn::GetResizeMethodAsCString(ResizeMethod method)
Definition at line 254 of file TypesUtils.hpp.
References Bilinear, and NearestNeighbor.
Referenced by StringifyLayerParameters< ResizeDescriptor >::Serialize().
inline
Definition at line 144 of file ClContextSchema_generated.h.
constexpr char const* armnn::GetStatusAsCString(Status status)
Definition at line 17 of file TypesUtils.hpp.
References Failure, and Success.
Referenced by operator<<().
inline
Definition at line 26 of file RefWorkloadUtils.hpp.
References RefTensorHandle::GetTensorInfo().
Referenced by BatchNormImpl(), Concatenate(), RefGatherNdWorkload::ExecuteAsync(), RefStridedSliceWorkload::ExecuteAsync(), RefDepthToSpaceWorkload::ExecuteAsync(), RefFakeQuantizationFloat32Workload::ExecuteAsync(), RefFillWorkload::ExecuteAsync(), RefChannelShuffleWorkload::ExecuteAsync(), RefSpaceToDepthWorkload::ExecuteAsync(), RefFloorWorkload::ExecuteAsync(), RefConvertBf16ToFp32Workload::ExecuteAsync(), RefConvertFp16ToFp32Workload::ExecuteAsync(), RefLogSoftmaxWorkload::ExecuteAsync(), RefConvertFp32ToBf16Workload::ExecuteAsync(), RefConvertFp32ToFp16Workload::ExecuteAsync(), RefPadWorkload::ExecuteAsync(), RefActivationWorkload::ExecuteAsync(), RefReshapeWorkload::ExecuteAsync(), RefResizeWorkload::ExecuteAsync(), RefSoftmaxWorkload::ExecuteAsync(), RefSpaceToBatchNdWorkload::ExecuteAsync(), RefDepthwiseConvolution2dWorkload::ExecuteAsync(), RefStackWorkload::ExecuteAsync(), RefInstanceNormalizationWorkload::ExecuteAsync(), RefSliceWorkload::ExecuteAsync(), RefDetectionPostProcessWorkload::ExecuteAsync(), RefDequantizeWorkload::ExecuteAsync(), RefArgMinMaxWorkload::ExecuteAsync(), RefPreluWorkload::ExecuteAsync(), RefQuantizeWorkload::ExecuteAsync(), RefBatchNormalizationWorkload::ExecuteAsync(), RefBatchToSpaceNdWorkload::ExecuteAsync(), RefCastWorkload::ExecuteAsync(), RefL2NormalizationWorkload::ExecuteAsync(), RefNormalizationWorkload::ExecuteAsync(), RefReduceWorkload::ExecuteAsync(), RefLstmWorkload::ExecuteAsync(), RefMeanWorkload::ExecuteAsync(), RefPooling2dWorkload::ExecuteAsync(), RefQLstmWorkload::ExecuteAsync(), RefPooling3dWorkload::ExecuteAsync(), RefConvolution2dWorkload::ExecuteAsync(), RefElementwiseUnaryWorkload::ExecuteAsync(), RefConstantWorkload::ExecuteAsync(), RefLogicalBinaryWorkload::ExecuteAsync(), RefLogicalUnaryWorkload::ExecuteAsync(), RefConvolution3dWorkload::ExecuteAsync(), RefComparisonWorkload::ExecuteAsync(), RefGatherWorkload::ExecuteAsync(), RefShapeWorkload::ExecuteAsync(), 
RefTransposeConvolution2dWorkload::ExecuteAsync(), RefFullyConnectedWorkload::ExecuteAsync(), RefRankWorkload::ExecuteAsync(), RefUnidirectionalSequenceLstmWorkload::ExecuteAsync(), RefPermuteWorkload< DataType >::ExecuteAsync(), RefTransposeWorkload< DataType >::ExecuteAsync(), RefElementwiseWorkload< Functor, ParentDescriptor, DebugString >::ExecuteAsync(), RefDebugWorkload< DataType >::ExecuteAsync(), OutputSlot::GetNumConnections(), OutputSlot::MoveAllConnections(), RefComparisonWorkload::PostAllocationConfigure(), Split(), Splitter(), SwitchLayer::ValidateTensorShapesFromInputs(), DetectionPostProcessLayer::ValidateTensorShapesFromInputs(), SplitterLayer::ValidateTensorShapesFromInputs(), LstmLayer::ValidateTensorShapesFromInputs(), ConcatLayer::ValidateTensorShapesFromInputs(), QuantizedLstmLayer::ValidateTensorShapesFromInputs(), and QLstmLayer::ValidateTensorShapesFromInputs().
inline
Definition at line 19 of file Timer.hpp.
References GetTimeNow().
Referenced by CheckInferenceTimeThreshold(), RuntimeImpl::EnqueueWorkload(), RuntimeImpl::Execute(), InferenceModel< IParser, TDataType >::InferenceModel(), MainImpl(), InferenceModel< IParser, TDataType >::Run(), InferenceModel< IParser, TDataType >::RunAsync(), RuntimeImpl::RuntimeImpl(), and RuntimeImpl::~RuntimeImpl().
inline
Definition at line 14 of file Timer.hpp.
Referenced by CheckInferenceTimeThreshold(), RuntimeImpl::EnqueueWorkload(), RuntimeImpl::Execute(), GetTimeDuration(), InferenceModel< IParser, TDataType >::InferenceModel(), MainImpl(), InferenceModel< IParser, TDataType >::Run(), InferenceModel< IParser, TDataType >::RunAsync(), RuntimeImpl::RuntimeImpl(), Threadpool::TerminateThreadPool(), and RuntimeImpl::~RuntimeImpl().
constexpr char const* armnn::GetUnaryOperationAsCString(UnaryOperation operation)
Definition at line 71 of file TypesUtils.hpp.
References Abs, Exp, Log, LogicalNot, Neg, Rsqrt, Sin, and Sqrt.
Referenced by armnnTfLiteParser::ComputeWrappedIndex(), RefLogicalUnaryWorkload::ExecuteAsync(), RefElementwiseUnaryWorkload::ExecuteAsync(), StringifyLayerParameters< ElementwiseUnaryDescriptor >::Serialize(), and TEST_SUITE().
TensorShape GetUnpaddedTensorStrides(const TensorInfo& tensorInfo)
Definition at line 15 of file TensorHandle.cpp.
References TensorInfo::GetDataType(), GetDataTypeSize(), and TensorInfo::GetShape().
Referenced by MockTensorHandle::GetStrides(), SampleTensorHandle::GetStrides(), RefTensorHandle::GetStrides(), and ConstTensorHandle::GetStrides().
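For a dense, unpadded tensor, the stride of each dimension is the element size times the product of all later dimensions, consistent with the function's use of GetDataTypeSize and the shape. A sketch over plain vectors returning byte strides (the helper name `UnpaddedStrides` is an assumption for illustration):

```cpp
#include <cstddef>
#include <vector>

// Byte stride of each dimension in a dense, unpadded tensor:
// strides[i] = elementSize * product(shape[i+1 ..]).
std::vector<std::size_t> UnpaddedStrides(const std::vector<std::size_t>& shape,
                                         std::size_t elementSize)
{
    std::vector<std::size_t> strides(shape.size());
    std::size_t running = elementSize;
    for (std::size_t i = shape.size(); i-- > 0; )
    {
        strides[i] = running;
        running *= shape[i];
    }
    return strides;
}
```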
const std::string GetVersion()
Definition at line 77 of file Utils.cpp.
References ARMNN_VERSION.
bool HasCapability(const std::string& name, const BackendCapabilities& capabilities)
Convenience function to check if a capability exists in a BackendCapabilities struct.
Definition at line 58 of file BackendHelper.cpp.
References GetCapability().
Referenced by HasCapability(), LoadedNetwork::ImportInputs(), LoadedNetwork::ImportOutputs(), LoadedNetwork::MakeLoadedNetwork(), RuntimeImpl::RuntimeImpl(), and TEST_SUITE().
bool HasCapability(const std::string& name, const armnn::BackendId& backend)
Convenience function to check if a capability exists in a backend.
Definition at line 63 of file BackendHelper.cpp.
References GetCapability().
bool HasCapability(const BackendOptions::BackendOption& capability, const BackendCapabilities& capabilities)
Convenience function to check if a given capability matches a capability in a BackendCapabilities struct.
Definition at line 68 of file BackendHelper.cpp.
References BackendOptions::Var::AsBool(), BackendOptions::Var::AsFloat(), BackendOptions::Var::AsInt(), BackendOptions::Var::AsString(), BackendOptions::Var::AsUnsignedInt(), BackendOptions::BackendOption::GetName(), BackendOptions::GetOption(), BackendOptions::GetOptionCount(), BackendOptions::BackendOption::GetValue(), BackendOptions::Var::IsBool(), BackendOptions::Var::IsFloat(), BackendOptions::Var::IsInt(), BackendOptions::Var::IsString(), and BackendOptions::Var::IsUnsignedInt().
bool HasCapability(const BackendOptions::BackendOption& backendOption, const armnn::BackendId& backend)
Convenience function to check if a given capability matches a capability in a backend.
Definition at line 100 of file BackendHelper.cpp.
References BackendRegistryInstance(), and HasCapability().
inline
Definition at line 14 of file IgnoreUnused.hpp.
Referenced by ChannelShuffleLayer::Accept(), ConvertFp32ToFp16Layer::Accept(), MapLayer::Accept(), MemCopyLayer::Accept(), MemImportLayer::Accept(), ConvertBf16ToFp32Layer::Accept(), ConvertFp16ToFp32Layer::Accept(), ConvertFp32ToBf16Layer::Accept(), CastLayer::Accept(), DebugLayer::Accept(), UnmapLayer::Accept(), FakeQuantizationLayer::Accept(), GatherNdLayer::Accept(), PreCompiledLayer::Accept(), ShapeLayer::Accept(), Convolution3dLayer::Accept(), UnidirectionalSequenceLstmLayer::Accept(), IInferenceTestCaseProvider::AddCommandLineOptions(), AdditionAfterMaxPoolTest(), AdditionBroadcast1ElementTestImpl(), AdditionBroadcastTestImpl(), ClBackendDefaultAllocator::allocate(), DefaultAllocator::allocate(), ArgMinMax(), BoundedReLuTestCommon(), BoundedReLuUint8UpperAndLowerBoundTest(), CalculateSlotOptionForOutput(), ITensorHandle::CanBeImported(), ClTensorHandle::CanBeImported(), CastTest(), ParserFlatbuffersSerializeFixture::CheckTensors(), ClassifierTestCase< TTestCaseDatabase, TModel >::ClassifierTestCase(), ClContextControl::ClContextControl(), ClConvolution3dWorkload::ClConvolution3dWorkload(), SpaceToBatchNdLayer::Clone(), SpaceToDepthLayer::Clone(), DynamicBackendUtils::CloseHandle(), ClUnidirectionalSequenceLstmFloatWorkloadValidate(), CompareActivationTestImpl(), CompareAdditionTest(), CompareBatchNormTest(), CompareMultiplicationTest(), CompareVector(), ConcatDifferentInputOutputQParamTest(), ConcatTestImpl(), ConcatUint16Test(), ConcatUint8DifferentQParamsTest(), ConcatUint8Test(), ConstantLinearActivationTestCommon(), ConvertBf16ToFp32Test(), ConvertFp32ToBf16Test(), Convolution2d3x3Stride2x2BFloat16SmallValueTest(), Convolution2d3x3Stride2x2BFloat16Test(), CopyTensorContentsGeneric(), MockBackend::CreateBackendProfilingContext(), RefTensorHandleFactory::CreateSubTensorHandle(), SampleDynamicTensorHandleFactory::CreateSubTensorHandle(), SampleDynamicWorkloadFactory::CreateSubTensorHandle(), RefWorkloadFactory::CreateSubTensorHandle(), 
RefTensorHandleFactory::CreateTensorHandle(), SampleDynamicTensorHandleFactory::CreateTensorHandle(), MockTensorHandleFactory::CreateTensorHandle(), ClWorkloadFactory::CreateTensorHandle(), ITensorHandleFactory::CreateTensorHandle(), RefWorkloadFactory::CreateTensorHandle(), MockWorkloadFactory::CreateTensorHandle(), OutputLayer::CreateTensorHandles(), InputLayer::CreateWorkload(), MemCopyLayer::CreateWorkload(), MemImportLayer::CreateWorkload(), MergeLayer::CreateWorkload(), OutputLayer::CreateWorkload(), UnmapLayer::CreateWorkload(), MapLayer::CreateWorkload(), StandInLayer::CreateWorkload(), IBackendInternal::CreateWorkloadFactory(), QASymm8Decoder::DecodeTensor(), QASymmS8Decoder::DecodeTensor(), QSymmS8Decoder::DecodeTensor(), QSymm16Decoder::DecodeTensor(), BFloat16Decoder::DecodeTensor(), Float16Decoder::DecodeTensor(), Float32Decoder::DecodeTensor(), ScaledInt32Decoder::DecodeTensor(), Int32Decoder::DecodeTensor(), Int32ToInt32tDecoder::DecodeTensor(), BooleanDecoder::DecodeTensor(), BooleanDecoderBool::DecodeTensor(), QSymm8PerAxisDecoder::DecodeTensor(), Dequantize(), SelectiveQuantizer< T, false >::Dequantize(), SelectiveQuantizer< armnn::Half, false >::Dequantize(), SelectiveQuantizer< armnn::BFloat16, false >::Dequantize(), DetectionPostProcess(), DivisionByZeroTest(), ProfilerImpl::EndEvent(), RefStridedSliceWorkload::ExecuteAsync(), SerializerStrategy::ExecuteStrategy(), TestInputLayerVisitor::ExecuteStrategy(), TestConvolution2dLayerVisitor::ExecuteStrategy(), LayerVerifierBase::ExecuteStrategy(), StrategyBase< NoThrowStrategy >::ExecuteStrategy(), MemCopyLayer::ExecuteStrategy(), MemImportLayer::ExecuteStrategy(), FakeQuantizationLayer::ExecuteStrategy(), PreCompiledLayer::ExecuteStrategy(), LayerVerifierBaseWithDescriptor< Descriptor >::ExecuteStrategy(), TestOutputLayerVisitor::ExecuteStrategy(), TestDepthwiseConvolution2dLayerVisitor::ExecuteStrategy(), TestFullyConnectedLayerVistor::ExecuteStrategy(), 
LayerVerifierBaseWithDescriptorAndConstants< Descriptor >::ExecuteStrategy(), TestBatchNormalizationLayerVisitor::ExecuteStrategy(), TestConstantLayerVisitor::ExecuteStrategy(), TestLstmLayerVisitor::ExecuteStrategy(), TestQLstmLayerVisitor::ExecuteStrategy(), TestQuantizedLstmLayerVisitor::ExecuteStrategy(), ExecutionFrame::ExecuteWorkloads(), exit_capture(), FakeQuantizationTest(), FalseFunc(), FalseFuncF16(), FalseFuncF32(), FalseFuncI32(), FalseFuncU8(), FalseInputFuncF16(), FalseInputFuncF32(), FalseOutputFuncF16(), FalseOutputFuncF32(), Gather(), ClImportTensorHandleFactory::GetCapabilities(), NeonTensorHandleFactory::GetCapabilities(), ITensorHandleFactory::GetCapabilities(), MockCounterDirectory::GetCounter(), MockCounterDirectory::GetCounterSet(), MockCounterDirectory::GetDevice(), DynamicBackendUtils::GetEntryPoint(), armnnSerializer::GetFlatBufferArgMinMaxFunction(), GetImageDataInArmNnLayoutAsNormalizedFloats(), DefaultAllocator::GetMemoryRegionAtOffset(), ClBackendDefaultAllocator::GetMemoryRegionAtOffset(), ICustomAllocator::GetMemoryRegionAtOffset(), IDeserializer::DeserializerImpl::GetNetworkInputBindingInfo(), IDeserializer::DeserializerImpl::GetNetworkOutputBindingInfo(), IDeserializer::DeserializerImpl::GetNormalizationDescriptor(), LoadedNetwork::GetOutputTensorInfo(), IDeserializer::DeserializerImpl::GetPooling2dDescriptor(), IDeserializer::DeserializerImpl::GetPooling3dDescriptor(), MockProfilingConnectionFactory::GetProfilingConnection(), DynamicBackendUtils::GetSharedObjects(), ITensorHandle::Import(), ClTensorHandle::Import(), ShapeLayer::InferOutputShapes(), SliceLayer::InferOutputShapes(), StackLayer::InferOutputShapes(), StandInLayer::InferOutputShapes(), ReshapeLayer::InferOutputShapes(), SplitterLayer::InferOutputShapes(), NeonLayerSupport::IsActivationSupported(), MockImportLayerSupport::IsAdditionSupported(), RefLayerSupport::IsArgMinMaxSupported(), RefLayerSupport::IsBatchNormalizationSupported(), 
RefLayerSupport::IsBatchToSpaceNdSupported(), RefLayerSupport::IsChannelShuffleSupported(), RefLayerSupport::IsComparisonSupported(), RefLayerSupport::IsConcatSupported(), NeonLayerSupport::IsConvertBf16ToFp32Supported(), NeonLayerSupport::IsConvertFp16ToFp32Supported(), NeonLayerSupport::IsConvertFp32ToBf16Supported(), NeonLayerSupport::IsConvertFp32ToFp16Supported(), RefLayerSupport::IsConvolution2dSupported(), RefLayerSupport::IsConvolution3dSupported(), RefLayerSupport::IsDepthToSpaceSupported(), RefLayerSupport::IsDepthwiseConvolutionSupported(), RefLayerSupport::IsDetectionPostProcessSupported(), RefLayerSupport::IsElementwiseUnarySupported(), RefLayerSupport::IsFakeQuantizationSupported(), ClLayerSupport::IsFillSupported(), NeonLayerSupport::IsFillSupported(), RefLayerSupport::IsFillSupported(), NeonLayerSupport::IsFloorSupported(), RefLayerSupport::IsFloorSupported(), MockImportLayerSupport::IsInputSupported(), RefLayerSupport::IsInstanceNormalizationSupported(), RefLayerSupport::IsL2NormalizationSupported(), ILayerSupport::IsLayerSupported(), ClLayerSupport::IsLogicalBinarySupported(), RefLayerSupport::IsLogicalBinarySupported(), RefLayerSupport::IsLogSoftmaxSupported(), RefLayerSupport::IsLstmSupported(), RefLayerSupport::IsNormalizationSupported(), MockImportLayerSupport::IsOutputSupported(), RefLayerSupport::IsPadSupported(), RefLayerSupport::IsPermuteSupported(), RefLayerSupport::IsPooling2dSupported(), RefLayerSupport::IsPooling3dSupported(), RefLayerSupport::IsQLstmSupported(), RefLayerSupport::IsRankSupported(), RefLayerSupport::IsReduceSupported(), ClLayerSupport::IsReshapeSupported(), NeonLayerSupport::IsReshapeSupported(), RefLayerSupport::IsReshapeSupported(), RefLayerSupport::IsResizeSupported(), RefLayerSupport::IsShapeSupported(), RefLayerSupport::IsSliceSupported(), RefLayerSupport::IsSoftmaxSupported(), RefLayerSupport::IsSpaceToBatchNdSupported(), RefLayerSupport::IsSpaceToDepthSupported(), ClLayerSupport::IsSplitterSupported(), 
NeonLayerSupport::IsSplitterSupported(), RefLayerSupport::IsSplitterSupported(), RefLayerSupport::IsStackSupported(), RefLayerSupport::IsStridedSliceSupported(), RefLayerSupport::IsTransposeConvolution2dSupported(), RefLayerSupport::IsTransposeSupported(), RefLayerSupport::IsUnidirectionalSequenceLstmSupported(), Layer::Layer(), LogSoftmax(), ClImportTensorHandle::Map(), ClBackend::ClBackendCustomAllocatorMemoryRegion::map(), ClImportSubTensorHandle::Map(), MaximumSimpleTest(), MinimumBroadcast1ElementTest1(), MirrorPad2dTestCommon(), MirrorPad3dTestCommon(), MirrorPad4dTestCommon(), NeonConvolution3dWorkload::NeonConvolution3dWorkload(), DynamicBackendUtils::OpenHandle(), StubCommandHandler::operator()(), TestFunctorA::operator()(), TfLiteParserImpl::OutputShapeOfSqueeze(), Pad2dTestCommon(), Pad3dTestCommon(), Pad4dTestCommon(), PadQAsymmTestCommon(), PermuteInputsForConcat(), PermuteTensorData(), PreluTest(), IInferenceTestCaseProvider::ProcessCommandLineOptions(), YoloTestCase< Model >::ProcessResult(), SelectiveQuantizer< T, false >::Quantize(), SelectiveQuantizer< armnn::Half, false >::Quantize(), SelectiveQuantizer< armnn::BFloat16, false >::Quantize(), RankTest(), TestProfilingConnectionArmnnError::ReadPacket(), TestProfilingConnectionBadAckPacket::ReadPacket(), MockProfilingConnection::ReadPacket(), MockCounterDirectory::RegisterCounter(), BaseWorkload< Convolution2dQueueDescriptor >::ReplaceInputTensorHandle(), BaseWorkload< Convolution2dQueueDescriptor >::ReplaceOutputTensorHandle(), ConvertConstDequantisationLayersToConstLayersImpl::Run(), ConvertConstPermuteLayersToConstLayers::Run(), OptimizeInverseConversionsImpl::Run(), RedirectMembersToConstantInputsImpl::Run(), OptimizeInversePermutesImpl< PermuteType >::Run(), SquashEqualSiblingsImpl< Comparable >::Run(), FuseBatchNorm< ConvLayer, ArmnnType, T >::Run(), ConvertConstants< Converter, Predicate >::Run(), MockSendCounterPacket::SendCounterDirectoryPacket(), 
MockSendCounterPacket::SendPeriodicCounterCapturePacket(), MockSendCounterPacket::SendPeriodicCounterSelectionPacket(), SetLogFilter(), ClImportTensorHandle::SetMemoryGroup(), ClImportSubTensorHandle::SetMemoryGroup(), ShapeTest(), SimpleActivationTest(), SimpleConvertFp16ToFp32Test(), SimpleConvertFp32ToFp16Test(), SimpleConvolution2d3x3NhwcTestCommon(), SimpleConvolution2d3x3Stride2x2TestCommon(), SimpleConvolution2dNhwcTestImpl(), SimpleConvolution2dTestImpl(), SimpleFillTest(), SimpleFloorTest(), SimplePermuteTestImpl(), SimpleTransposeTestImpl(), Slice(), SqrtNNTest(), OpenClTimer::Start(), MemoryManager::StoreMemToAllocate(), Graph::SubstituteSubgraph(), TEST_SUITE(), TestDynamicBackendId(), TrueFunc(), UnidirectionalSequenceLstmInt8WithCifgWithPeepholeNoProjectionTest(), UnidirectionalSequenceLstmLayerInt8NoCifgWithPeepholeWithProjectionTest(), UnidirectionalSequenceLstmLayerInt8NoCifgWithPeepholeWithProjectionWithLayerNormTest(), UnidirectionalSequenceLstmLayerInt8Test(), UnidirectionalSequenceLstmLayerInt8TimeMajorTest(), UnidirectionalSequenceLstmLayerNoCifgWithPeepholeWithProjectionTest(), UnidirectionalSequenceLstmLayerNoCifgWithPeepholeWithProjectionWithLayerNormTest(), UnidirectionalSequenceLstmWithCifgWithPeepholeNoProjectionTest(), ClBackend::ClBackendCustomAllocatorMemoryRegion::unmap(), ClBackend::UseCustomMemoryAllocator(), IBackendInternal::UseCustomMemoryAllocator(), MockProfilingServiceStatus::WaitForProfilingServiceActivation(), WorkingMemHandle::WorkingMemHandle(), TestProfilingConnectionBase::WritePacket(), Graph::LayerInGraph< InputLayer >::~LayerInGraph(), Graph::LayerInGraph< OutputLayer >::~LayerInGraph(), and ScopedProfilingEvent::~ScopedProfilingEvent().
inline |
Definition at line 115 of file ClWorkloadUtils.hpp.
References ARMNN_ASSERT.
inline |
Definition at line 60 of file NeonWorkloadUtils.hpp.
References ARMNN_ASSERT, ARMNN_ASSERT_MSG, CopyArmComputeTensorData(), Float16, Float32, ConstTensorHandle::GetConstTensor(), TensorInfo::GetDataType(), ConstTensorHandle::GetTensorInfo(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, and Signed32.
std::vector< ConvertBf16ToFp32Layer * > InsertConvertBf16ToFp32LayersBefore | ( | Graph & | graph, |
Layer & | layer, | ||
bool | expectCorrectInputType | ||
) |
Definition at line 51 of file NetworkUtils.cpp.
References Layer::BeginInputSlots(), BFloat16, Layer::EndInputSlots(), Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment().
std::vector< ConvertFp16ToFp32Layer * > InsertConvertFp16ToFp32LayersBefore | ( | Graph & | graph, |
Layer & | layer, | ||
bool | expectCorrectInputType | ||
) |
Definition at line 138 of file NetworkUtils.cpp.
References Layer::BeginInputSlots(), Layer::EndInputSlots(), Float16, Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment(), ConvertFp32NetworkToFp16Impl::Run(), and TEST_SUITE().
std::vector< ConvertFp32ToBf16Layer * > InsertConvertFp32ToBf16LayersAfter | ( | Graph & | graph, |
Layer & | layer | ||
) |
Definition at line 177 of file NetworkUtils.cpp.
References BFloat16, Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment().
std::vector< ConvertFp32ToBf16Layer * > InsertConvertFp32ToBf16LayersBefore | ( | Graph & | graph, |
Layer & | layer, | ||
bool | expectCorrectInputType | ||
) |
Definition at line 90 of file NetworkUtils.cpp.
References Layer::BeginInputSlots(), BFloat16, Convolution2d, DepthwiseConvolution2d, Layer::EndInputSlots(), Float32, FullyConnected, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Layer::GetType(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by ConvertFp32NetworkToBf16Impl::Run().
std::vector< ConvertFp32ToFp16Layer * > InsertConvertFp32ToFp16LayersAfter | ( | Graph & | graph, |
Layer & | layer | ||
) |
Definition at line 210 of file NetworkUtils.cpp.
References Float16, Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment(), ConvertFp32NetworkToFp16Impl::Run(), and TEST_SUITE().
std::vector< DebugLayer * > InsertDebugLayerAfter | ( | Graph & | graph, |
Layer & | layer | ||
) |
Definition at line 243 of file NetworkUtils.cpp.
References ARMNN_ASSERT, Layer::BeginOutputSlots(), CpuRef, Layer::EndOutputSlots(), InputSlot::GetConnectedOutputSlot(), Layer::GetInputSlot(), Layer::GetNameStr(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), Layer::SetBackendId(), and OutputSlot::SetTensorInfo().
Referenced by AddDebugImpl::Run().
void InstanceNorm | ( | const InstanceNormalizationQueueDescriptor & | data, |
const TensorInfo & | inputInfo, | ||
Decoder< float > & | inputDecoder, | ||
Encoder< float > & | outputEncoder | ||
) |
Definition at line 18 of file InstanceNorm.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), InstanceNormalizationDescriptor::m_Beta, InstanceNormalizationDescriptor::m_DataLayout, InstanceNormalizationDescriptor::m_Eps, InstanceNormalizationDescriptor::m_Gamma, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
Referenced by RefInstanceNormalizationWorkload::ExecuteAsync().
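The per-channel arithmetic behind InstanceNorm can be sketched in a few lines. The snippet below is an illustrative, self-contained reduction of the math only (mean/variance per channel, then `gamma * (x - mean) / sqrt(var + eps) + beta`); it is not the Arm NN implementation, which additionally walks the tensor through `DataLayoutIndexed` and the `Decoder`/`Encoder` interfaces:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative instance normalization over one channel's values:
// y = gamma * (x - mean) / sqrt(var + eps) + beta
std::vector<float> InstanceNormChannel(const std::vector<float>& x,
                                       float gamma, float beta, float eps)
{
    float mean = 0.0f;
    for (float v : x) { mean += v; }
    mean /= static_cast<float>(x.size());

    float var = 0.0f;
    for (float v : x) { var += (v - mean) * (v - mean); }
    var /= static_cast<float>(x.size());

    std::vector<float> y;
    y.reserve(x.size());
    for (float v : x)
    {
        y.push_back(gamma * (v - mean) / std::sqrt(var + eps) + beta);
    }
    return y;
}
```

With `gamma = 1` and `beta = 0` the normalized channel has zero mean and unit variance, which is what `m_Gamma`, `m_Beta`, and `m_Eps` in `InstanceNormalizationDescriptor` parameterize.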
float IntersectionOverUnion | ( | const float * | boxI, |
const float * | boxJ | ||
) |
Definition at line 30 of file DetectionPostProcess.cpp.
Referenced by NonMaxSuppression(), and TEST_SUITE().
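IntersectionOverUnion is the overlap metric NonMaxSuppression uses to suppress duplicate detections. A minimal self-contained sketch follows; it assumes the TFLite-style corner layout `[yMin, xMin, yMax, xMax]` for each box (treat that layout, and the function name, as assumptions for illustration rather than the exact Arm NN source):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// IoU of two axis-aligned boxes, each given as [yMin, xMin, yMax, xMax].
// Returns intersection area divided by union area; 0 for disjoint boxes.
float IntersectionOverUnionSketch(const float* boxI, const float* boxJ)
{
    const float areaI = (boxI[2] - boxI[0]) * (boxI[3] - boxI[1]);
    const float areaJ = (boxJ[2] - boxJ[0]) * (boxJ[3] - boxJ[1]);

    // Corners of the intersection rectangle (may be empty).
    const float yMin = std::max(boxI[0], boxJ[0]);
    const float xMin = std::max(boxI[1], boxJ[1]);
    const float yMax = std::min(boxI[2], boxJ[2]);
    const float xMax = std::min(boxI[3], boxJ[3]);

    const float h = std::max(yMax - yMin, 0.0f);
    const float w = std::max(xMax - xMin, 0.0f);
    const float intersection = h * w;

    const float unionArea = areaI + areaJ - intersection;
    return unionArea > 0.0f ? intersection / unionArea : 0.0f;
}
```

For example, two 2x2 boxes offset by (1, 1) overlap in a 1x1 square, giving IoU = 1 / (4 + 4 - 1) = 1/7.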
bool armnn::IsActivationSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported(), and ILayerSupport::~ILayerSupport().
bool armnn::IsAdditionSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported(), and MockLayerSupport::IsLayerSupported().
bool armnn::IsBatchNormalizationSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const TensorInfo & | mean, | ||
const TensorInfo & | var, | ||
const TensorInfo & | beta, | ||
const TensorInfo & | gamma, | ||
const BatchNormalizationDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsBatchToSpaceNdSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const BatchToSpaceNdDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsBFloat16 | ( | const WorkloadInfo & | info | ) |
Definition at line 53 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
bool IsCapabilitySupported | ( | const armnn::BackendId & | backend, |
armnn::BackendCapability | capability | ||
) |
Convenience function to check a capability on a backend.
Definition at line 114 of file BackendHelper.cpp.
References ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, and BackendRegistryInstance().
bool armnn::IsConcatSupported | ( | const BackendId & | backend, |
const std::vector< const TensorInfo *> | inputs, | ||
const TensorInfo & | output, | ||
const OriginsDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsConstantSupported | ( | const BackendId & | backend, |
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsConvertFp16ToFp32Supported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsConvertFp32ToFp16Supported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsConvolution2dSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const Convolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported(), and MockLayerSupport::IsLayerSupported().
bool armnn::IsDataType | ( | const WorkloadInfo & | info | ) |
Definition at line 32 of file RefWorkloadFactory.cpp.
References WorkloadInfo::m_InputTensorInfos, and WorkloadInfo::m_OutputTensorInfos.
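IsDataType inspects the tensor infos held in a WorkloadInfo. A self-contained sketch of that shape is below, using a stand-in struct instead of `armnn::WorkloadInfo`; the predicate shown (true when any input or output tensor matches the template type) is an assumption made for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Stand-in enum and struct; the real types live in armnn's Types.hpp / WorkloadInfo.
enum class DataType { Float16, Float32, QAsymmU8, Signed32 };

struct WorkloadInfoSketch
{
    std::vector<DataType> m_InputTensorInfos;
    std::vector<DataType> m_OutputTensorInfos;
};

// Illustrative IsDataType<T>: true when any input or output tensor uses the type.
template <DataType ArmnnType>
bool IsDataTypeSketch(const WorkloadInfoSketch& info)
{
    auto matches = [](DataType t) { return t == ArmnnType; };
    return std::any_of(info.m_InputTensorInfos.begin(), info.m_InputTensorInfos.end(), matches)
        || std::any_of(info.m_OutputTensorInfos.begin(), info.m_OutputTensorInfos.end(), matches);
}
```

Helpers such as IsFloat16 and IsSigned32 above are thin wrappers of this pattern, which RefWorkloadFactory::CreateWorkload uses to pick a typed workload.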
bool armnn::IsDebugSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsDepthwiseConvolutionSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const DepthwiseConvolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsDequantizeSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsDivisionSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsEqualSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
bool armnn::IsFakeQuantizationSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const FakeQuantizationDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsFloat16 | ( | const WorkloadInfo & | info | ) |
Definition at line 58 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
bool armnn::IsFloorSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsFullyConnectedSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const TensorInfo & | weights, | ||
const TensorInfo & | biases, | ||
const FullyConnectedDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsGreaterSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
bool armnn::IsInputSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported(), and MockLayerSupport::IsLayerSupported().
bool armnn::IsL2NormalizationSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const L2NormalizationDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsLayerOptimizable | ( | const armnn::Layer * | layer | ) |
Definition at line 85 of file MockBackend.cpp.
References ARMNN_ASSERT, and Layer::GetName().
Referenced by IsLayerOptimizable(), and MockBackend::OptimizeSubgraphView().
bool armnn::IsLayerOptimizable | ( | const armnn::Layer & | layer | ) |
Definition at line 96 of file MockBackend.cpp.
References IsLayerOptimizable().
bool armnn::IsLayerSupported | ( | const armnn::Layer * | layer | ) |
Definition at line 60 of file MockBackend.cpp.
References Addition, ARMNN_ASSERT, Constant, Convolution2d, Layer::GetType(), Input, and Output.
Referenced by SampleDynamicWorkloadFactory::IsLayerSupported().
bool armnn::IsLayerSupported | ( | const armnn::Layer & | layer | ) |
Definition at line 80 of file MockBackend.cpp.
References IWorkloadFactory::IsLayerSupported().
bool armnn::IsLstmSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | scratchBuffer, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const LstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsMaximumSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnSupported = nullptr , |
||
size_t | reasonIfUnSupportedMaxLength = 0 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsMeanSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const MeanDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsMemCopySupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsMergeSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsMinimumSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsMultiplicationSupported | ( | const BackendId & | backend, |
const TensorInfo & | input0, | ||
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsNormalizationSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const NormalizationDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
constexpr bool armnn::IsOperationQueueDescriptor | ( | const QueueDescriptorType & | ) |
Definition at line 18 of file RefWorkloadFactory.hpp.
constexpr bool armnn::IsOperationQueueDescriptor | ( | const MemCopyQueueDescriptor & | ) |
Definition at line 21 of file RefWorkloadFactory.hpp.
constexpr bool armnn::IsOperationQueueDescriptor | ( | const ConstantQueueDescriptor & | ) |
Definition at line 24 of file RefWorkloadFactory.hpp.
constexpr bool armnn::IsOperationQueueDescriptor | ( | const PermuteQueueDescriptor & | ) |
Definition at line 27 of file RefWorkloadFactory.hpp.
bool armnn::IsOutputSupported | ( | const BackendId & | backend, |
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported(), and MockLayerSupport::IsLayerSupported().
bool armnn::IsPadSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const PadDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsPermuteSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const PermuteDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsPooling2dSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const Pooling2dDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsPreCompiledSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsPreluSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | alpha, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsQAsymmS8 | ( | const WorkloadInfo & | info | ) |
Definition at line 73 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
bool armnn::IsQAsymmU8 | ( | const WorkloadInfo & | info | ) |
Definition at line 78 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
bool armnn::IsQSymmS16 | ( | const WorkloadInfo & | info | ) |
Definition at line 63 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
bool armnn::IsQSymmS8 | ( | const WorkloadInfo & | info | ) |
Definition at line 68 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
constexpr bool armnn::IsQuantized8BitType | ( | DataType | dataType | ) |
Definition at line 285 of file TypesUtils.hpp.
References QAsymmS8, QAsymmU8, and QSymmS8.
Referenced by GetBiasDataType(), RefLayerSupport::IsConvolution2dSupported(), RefLayerSupport::IsConvolution3dSupported(), RefLayerSupport::IsDepthwiseConvolutionSupported(), IsQuantizedType(), and RefLayerSupport::IsTransposeConvolution2dSupported().
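Per the references above, IsQuantized8BitType matches exactly the three 8-bit quantized data types, and IsQuantizedType(DataType) extends it with QSymmS16. A self-contained sketch using a stand-in enum (the real `armnn::DataType` enum, with different underlying values, lives in Types.hpp):

```cpp
#include <cassert>

// Stand-in for armnn::DataType; member values here are illustrative only.
enum class DataType { Float32, Float16, QAsymmU8, QAsymmS8, QSymmS8, QSymmS16, Signed32 };

// True only for the three 8-bit quantized types, mirroring IsQuantized8BitType.
constexpr bool IsQuantized8BitTypeSketch(DataType dataType)
{
    return dataType == DataType::QAsymmU8 ||
           dataType == DataType::QAsymmS8 ||
           dataType == DataType::QSymmS8;
}

// IsQuantizedType(DataType) additionally accepts the 16-bit symmetric type.
constexpr bool IsQuantizedTypeSketch(DataType dataType)
{
    return IsQuantized8BitTypeSketch(dataType) || dataType == DataType::QSymmS16;
}
```

Both predicates being constexpr lets callers such as GetBiasDataType branch on them at compile time.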
bool armnn::IsQuantizedLstmSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | previousCellStateIn, | ||
const TensorInfo & | previousOutputIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const QuantizedLstmInputParamsInfo & | paramsInfo, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
constexpr bool armnn::IsQuantizedType | ( | ) |
Definition at line 280 of file TypesUtils.hpp.
Referenced by ClMultiplicationWorkload::ClMultiplicationWorkload(), RefWorkloadFactory::CreateWorkload(), TensorInfo::IsQuantized(), NeonMultiplicationWorkload::NeonMultiplicationWorkload(), and QuantizeQueueDescriptor::Validate().
constexpr bool armnn::IsQuantizedType | ( | DataType | dataType | ) |
Definition at line 292 of file TypesUtils.hpp.
References IsQuantized8BitType(), and QSymmS16.
bool armnn::IsReadyForSplitAssignment | ( | LayerSelectionInfo::LayerInfoContainer & | layerInfos, |
LayerSelectionInfo & | layerInfo | ||
) |
Definition at line 374 of file SubgraphViewSelector.cpp.
References ForEachLayerInput().
Referenced by SubgraphViewSelector::SelectSubgraphs().
bool armnn::IsReduceSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const ReduceDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsReshapeSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const ReshapeDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsResizeSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const ResizeDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsRsqrtSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
bool armnn::IsSigned32 | ( | const WorkloadInfo & | info | ) |
Definition at line 48 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateWorkload().
bool armnn::IsSoftmaxSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const SoftmaxDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsSpaceToBatchNdSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const SpaceToBatchNdDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsSpaceToDepthSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const TensorInfo & | output, | ||
const SpaceToDepthDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsSplitterSupported | ( | const BackendId & | backend, |
const TensorInfo & | input, | ||
const std::vector< std::reference_wrapper< TensorInfo >> & | outputs, | ||
const ViewsDescriptor & | descriptor, | ||
char * | reasonIfUnsupported = nullptr , |
||
size_t | reasonIfUnsupportedMaxLength = 1024 |
||
) |
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsStackSupported(const BackendId& backend, const std::vector<const TensorInfo*> inputs, const TensorInfo& output, const StackDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsStridedSliceSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const StridedSliceDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsSubtractionSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsSupportedForDataTypeGeneric(Optional<std::string&> reasonIfUnsupported, DataType dataType, Float16Func float16FuncPtr, Float32Func float32FuncPtr, Uint8Func uint8FuncPtr, Int32Func int32FuncPtr, BooleanFunc booleanFuncPtr, Params&&... params)
Definition at line 27 of file LayerSupportCommon.hpp.
References Boolean, Float16, Float32, QAsymmU8, and Signed32.
Referenced by RefLayerSupport::IsConvertFp16ToFp32Supported(), RefLayerSupport::IsConvertFp32ToFp16Supported(), and NeonLayerSupport::IsFloorSupported().
bool armnn::IsSwitchSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output0, const TensorInfo& output1, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
bool armnn::IsTransposeConvolution2dSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const TransposeConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by ILayerSupport::IsLayerSupported().
constexpr LayerType armnn::LayerEnumOf(const T* = nullptr)
constexpr LayerType armnn::LayerEnumOf(const ActivationLayer*)
Definition at line 110 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const AdditionLayer*)
Definition at line 111 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ArgMinMaxLayer*)
Definition at line 112 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const BatchNormalizationLayer*)
Definition at line 113 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const BatchToSpaceNdLayer*)
Definition at line 114 of file LayersFwd.hpp.
Definition at line 115 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ChannelShuffleLayer*)
Definition at line 116 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ComparisonLayer*)
Definition at line 117 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConcatLayer*)
Definition at line 118 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConstantLayer*)
Definition at line 119 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertBf16ToFp32Layer*)
Definition at line 120 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertFp16ToFp32Layer*)
Definition at line 121 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertFp32ToBf16Layer*)
Definition at line 122 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertFp32ToFp16Layer*)
Definition at line 123 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const Convolution2dLayer*)
Definition at line 124 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const Convolution3dLayer*)
Definition at line 125 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DebugLayer*)
Definition at line 126 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DepthToSpaceLayer*)
Definition at line 127 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DepthwiseConvolution2dLayer*)
Definition at line 128 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DequantizeLayer*)
Definition at line 129 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DetectionPostProcessLayer*)
Definition at line 130 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DivisionLayer*)
Definition at line 131 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ElementwiseUnaryLayer*)
Definition at line 132 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FakeQuantizationLayer*)
Definition at line 133 of file LayersFwd.hpp.
Definition at line 134 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FloorLayer*)
Definition at line 135 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FullyConnectedLayer*)
Definition at line 136 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const GatherLayer*)
Definition at line 137 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const GatherNdLayer*)
Definition at line 138 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const InputLayer*)
Definition at line 139 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const InstanceNormalizationLayer*)
Definition at line 140 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const L2NormalizationLayer*)
Definition at line 141 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const LogicalBinaryLayer*)
Definition at line 142 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const LogSoftmaxLayer*)
Definition at line 143 of file LayersFwd.hpp.
Definition at line 144 of file LayersFwd.hpp.
Definition at line 145 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MaximumLayer*)
Definition at line 146 of file LayersFwd.hpp.
Definition at line 147 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MemCopyLayer*)
Definition at line 148 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MemImportLayer*)
Definition at line 149 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MergeLayer*)
Definition at line 150 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MinimumLayer*)
Definition at line 151 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MultiplicationLayer*)
Definition at line 152 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const NormalizationLayer*)
Definition at line 153 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const OutputLayer*)
Definition at line 154 of file LayersFwd.hpp.
Definition at line 155 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PermuteLayer*)
Definition at line 156 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const Pooling2dLayer*)
Definition at line 157 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const Pooling3dLayer*)
Definition at line 158 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PreCompiledLayer*)
Definition at line 159 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PreluLayer*)
Definition at line 160 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const QuantizeLayer*)
Definition at line 161 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const QLstmLayer*)
Definition at line 162 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const QuantizedLstmLayer*)
Definition at line 163 of file LayersFwd.hpp.
Definition at line 164 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ReduceLayer*)
Definition at line 165 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ReshapeLayer*)
Definition at line 166 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ResizeLayer*)
Definition at line 167 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ShapeLayer*)
Definition at line 168 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SliceLayer*)
Definition at line 169 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SoftmaxLayer*)
Definition at line 170 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SpaceToBatchNdLayer*)
Definition at line 171 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SpaceToDepthLayer*)
Definition at line 172 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SplitterLayer*)
Definition at line 173 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const StackLayer*)
Definition at line 174 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const StandInLayer*)
Definition at line 175 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const StridedSliceLayer*)
Definition at line 176 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SubtractionLayer*)
Definition at line 177 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SwitchLayer*)
Definition at line 178 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const TransposeLayer*)
Definition at line 179 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const TransposeConvolution2dLayer*)
Definition at line 180 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const UnidirectionalSequenceLstmLayer*)
Definition at line 181 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const UnmapLayer*)
Definition at line 182 of file LayersFwd.hpp.
Definition at line 15 of file Logging.hpp.
References Debug, Error, Fatal, Info, Trace, and Warning.
Referenced by ScopedRecord::ScopedRecord().
void LogSoftmax(Decoder<float>& input, Encoder<float>& output, const TensorInfo& inputInfo, const LogSoftmaxDescriptor& descriptor)
Definition at line 29 of file LogSoftmax.cpp.
References ARMNN_ASSERT_MSG, Decoder< IType >::Get(), TensorShape::GetNumDimensions(), TensorInfo::GetNumDimensions(), armnnUtils::GetNumElementsBetween(), TensorInfo::GetShape(), IgnoreUnused(), SoftmaxDescriptor::m_Axis, SoftmaxDescriptor::m_Beta, numeric_cast(), and Encoder< IType >::Set().
Referenced by TEST_SUITE().
std::string armnn::LowerString(std::string value)
Definition at line 62 of file ClBackendContext.cpp.
void LstmImpl(const LstmDescriptor& descriptor,
    const TensorInfo& inputInfo, const TensorInfo& outputInfo,
    const TensorShape& inputToOutputWeightsShape, const TensorShape& recurrentToOutputWeightsShape,
    std::unique_ptr<Decoder<float>>& inputData,
    std::unique_ptr<Decoder<float>>& outputStateIn, std::unique_ptr<Decoder<float>>& cellStateIn,
    std::unique_ptr<Encoder<float>>& outputStateOut, std::unique_ptr<Encoder<float>>& cellStateOut,
    std::unique_ptr<Encoder<float>>& output,
    std::unique_ptr<Decoder<float>>& cellStateOutDecoder, std::unique_ptr<Decoder<float>>& outputDecoder,
    std::unique_ptr<Decoder<float>>& inputToInputWeightsTensor, std::unique_ptr<Decoder<float>>& inputToForgetWeightsTensor,
    std::unique_ptr<Decoder<float>>& inputToCellWeightsTensor, std::unique_ptr<Decoder<float>>& inputToOutputWeightsTensor,
    std::unique_ptr<Decoder<float>>& recurrentToInputWeightsTensor, std::unique_ptr<Decoder<float>>& recurrentToForgetWeightsTensor,
    std::unique_ptr<Decoder<float>>& recurrentToCellWeightsTensor, std::unique_ptr<Decoder<float>>& recurrentToOutputWeightsTensor,
    std::unique_ptr<Decoder<float>>& cellToInputWeightsTensor, std::unique_ptr<Decoder<float>>& cellToForgetWeightsTensor,
    std::unique_ptr<Decoder<float>>& cellToOutputWeightsTensor,
    std::unique_ptr<Decoder<float>>& inputGateBiasTensor, std::unique_ptr<Decoder<float>>& forgetGateBiasTensor,
    std::unique_ptr<Decoder<float>>& cellBiasTensor, std::unique_ptr<Decoder<float>>& outputGateBiasTensor,
    std::unique_ptr<Decoder<float>>& projectionWeightsTensor, std::unique_ptr<Decoder<float>>& projectionBiasTensor,
    std::unique_ptr<Decoder<float>>& inputLayerNormWeights, std::unique_ptr<Decoder<float>>& forgetLayerNormWeights,
    std::unique_ptr<Decoder<float>>& cellLayerNormWeights, std::unique_ptr<Decoder<float>>& outputLayerNormWeights,
    std::unique_ptr<Encoder<float>>& inputGateScratch, std::unique_ptr<Encoder<float>>& cellScratch,
    std::unique_ptr<Encoder<float>>& forgetGateScratch, std::unique_ptr<Encoder<float>>& outputGateScratch,
    std::unique_ptr<Decoder<float>>& inputGateScratchDecoder, std::unique_ptr<Decoder<float>>& cellScratchDecoder,
    std::unique_ptr<Decoder<float>>& forgetGateScratchDecoder, std::unique_ptr<Decoder<float>>& outputGateScratchDecoder,
    float layerNormEpsilon)
Definition at line 13 of file Lstm.cpp.
References Activation(), ClipVector(), CopyVector(), TensorInfo::GetDataType(), TensorInfo::GetShape(), LstmDescriptor::m_ActivationFunc, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmDescriptor::m_LayerNormEnabled, LstmDescriptor::m_PeepholeEnabled, LstmDescriptor::m_ProjectionEnabled, MatrixBatchVectorMultiplyAccumulate(), MeanStddevNormalization(), SetActivationParameters(), Sigmoid, Sub1Vector(), VectorBatchVectorAdd(), VectorBatchVectorAssign(), VectorBatchVectorCwiseProduct(), VectorBatchVectorCwiseProductAccumulate(), VectorVectorCwiseProduct(), VectorVectorCwiseProductAccumulate(), and ZeroVector().
Referenced by RefLstmWorkload::ExecuteAsync(), and RefUnidirectionalSequenceLstmWorkload::ExecuteAsync().
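The gate arithmetic in LstmImpl corresponds to the standard LSTM cell. As a sketch of the basic data flow (without the optional peephole, projection, or layer-normalisation paths, which the LstmDescriptor flags enable; with m_CifgEnabled the input gate is instead derived from the forget gate):

```latex
\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i) \\
f_t &= \sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f) \\
\tilde{c}_t &= g(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o) \\
h_t &= o_t \odot g(c_t)
\end{aligned}
```

where g is the activation selected by LstmDescriptor::m_ActivationFunc, and m_ClippingThresCell / m_ClippingThresProj clip the cell state and projected output respectively.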
Definition at line 66 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, BFloat16, Boolean, Float16, Float32, TensorInfo::GetDataType(), armnnUtils::GetPerAxisParams(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::HasPerAxisQuantization(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, and Signed32.
Definition at line 66 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, BFloat16, Boolean, Float16, Float32, TensorInfo::GetDataType(), armnnUtils::GetPerAxisParams(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::HasPerAxisQuantization(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, and Signed32.
Definition at line 136 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, Boolean, and TensorInfo::GetDataType().
Definition at line 154 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, TensorInfo::GetDataType(), and Signed32.
Definition at line 21 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, BFloat16, Boolean, Float16, Float32, TensorInfo::GetDataType(), armnnUtils::GetPerAxisParams(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::HasPerAxisQuantization(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, and Signed32.
Definition at line 21 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, BFloat16, Float16, Float32, TensorInfo::GetDataType(), armnnUtils::GetPerAxisParams(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::HasPerAxisQuantization(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, and Signed32.
Definition at line 90 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, Boolean, and TensorInfo::GetDataType().
Definition at line 108 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, TensorInfo::GetDataType(), and Signed32.
arm_compute::DetectionPostProcessLayerInfo armnn::MakeInfo(const DetectionPostProcessDescriptor& descriptor)
Definition at line 17 of file NeonDetectionPostProcessWorkload.cpp.
References DetectionPostProcessDescriptor::m_DetectionsPerClass, DetectionPostProcessDescriptor::m_MaxClassesPerDetection, DetectionPostProcessDescriptor::m_MaxDetections, DetectionPostProcessDescriptor::m_NmsIouThreshold, DetectionPostProcessDescriptor::m_NmsScoreThreshold, DetectionPostProcessDescriptor::m_NumClasses, and DetectionPostProcessDescriptor::m_UseRegularNms.
Referenced by NeonDetectionPostProcessValidate().
Optimizer::Optimizations armnn::MakeOptimizations(Args&&... args)
Definition at line 43 of file Optimizer.hpp.
References Append().
Referenced by ApplyBackendOptimizations(), Optimize(), and TEST_SUITE().
Optional<T> armnn::MakeOptional(Args&&... args)
Utility template that constructs an object of type T in-place and wraps it inside an Optional<T> object.
Definition at line 305 of file Optional.hpp.
References CONSTRUCT_IN_PLACE.
constexpr TransformIterator<Function, Iterator> armnn::MakeTransformIterator(Iterator i, Function f)
Definition at line 81 of file TransformIterator.hpp.
Referenced by TEST_SUITE().
void MirrorPad(const TensorInfo& inputInfo, const TensorInfo& outputInfo, const ITensorHandle* inputHandle, ITensorHandle* outputHandle, const PadQueueDescriptor& data)
Definition at line 59 of file MirrorPad.cpp.
References TensorShape::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), PadDescriptor::m_PaddingMode, PadDescriptor::m_PadList, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, ITensorHandle::Map(), Reflect, Encoder< IType >::Set(), and Symmetric.
Referenced by RefPadWorkload::ExecuteAsync().
constexpr const char* armnn::MockBackendId()
Definition at line 11 of file MockBackendId.hpp.
Referenced by MockBackend::GetIdStatic(), MockBackend::OptimizeSubgraphView(), and TEST_SUITE().
constexpr const char* armnn::MockImportBackendId()
Definition at line 12 of file MockImportBackend.hpp.
Referenced by MockImportBackend::GetIdStatic(), and TEST_SUITE().
constexpr const char* armnn::MockTensorHandleFactoryId()
Definition at line 14 of file MockTensorHandleFactory.hpp.
Referenced by MockTensorHandleFactory::GetIdStatic().
arm_compute::Status NeonAbsWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonAbsWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonActivationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ActivationDescriptor& descriptor)
Definition at line 17 of file NeonActivationWorkload.cpp.
Referenced by NeonLayerSupport::IsActivationSupported().
arm_compute::Status NeonAdditionWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 20 of file NeonAdditionWorkload.cpp.
Referenced by NeonLayerSupport::IsAdditionSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonArgMinMaxWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ArgMinMaxDescriptor& descriptor)
Definition at line 31 of file NeonArgMinMaxWorkload.cpp.
Referenced by NeonLayerSupport::IsArgMinMaxSupported().
constexpr const char* armnn::NeonBackendId()
Definition at line 10 of file NeonBackendId.hpp.
Referenced by NeonBackend::GetIdStatic().
arm_compute::Status NeonBatchNormalizationValidate(const TensorInfo& input, const TensorInfo& output, const TensorInfo& mean, const TensorInfo& var, const TensorInfo& beta, const TensorInfo& gamma, const BatchNormalizationDescriptor& descriptor, const ActivationDescriptor* activationDescriptor)
Definition at line 24 of file NeonBatchNormalizationWorkload.cpp.
Referenced by NeonLayerSupport::IsBatchNormalizationSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonBatchToSpaceNdWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const BatchToSpaceNdDescriptor& descriptor)
Definition at line 20 of file NeonBatchToSpaceNdWorkload.cpp.
Referenced by NeonLayerSupport::IsBatchToSpaceNdSupported().
arm_compute::Status NeonCastValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 19 of file NeonCastWorkload.cpp.
Referenced by NeonLayerSupport::IsCastSupported().
arm_compute::Status NeonChannelShuffleValidate(const TensorInfo& input, const TensorInfo& output, const ChannelShuffleDescriptor& descriptor)
Definition at line 17 of file NeonChannelShuffleWorkload.cpp.
Referenced by NeonLayerSupport::IsChannelShuffleSupported().
arm_compute::Status NeonComparisonWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ComparisonDescriptor& descriptor)
Definition at line 16 of file NeonComparisonWorkload.cpp.
Referenced by NeonLayerSupport::IsComparisonSupported().
arm_compute::Status NeonConcatWorkloadValidate(const std::vector<const TensorInfo*>& inputs, const TensorInfo& output, const OriginsDescriptor& descriptor)
Definition at line 27 of file NeonConcatWorkload.cpp.
Referenced by NeonLayerSupport::IsConcatSupported().
arm_compute::Status NeonConstantWorkloadValidate(const TensorInfo& output)
Definition at line 20 of file NeonConstantWorkload.cpp.
Referenced by NeonLayerSupport::IsConstantSupported().
arm_compute::Status NeonConvolution2dWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const Convolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, bool isFastMathEnabled, const ActivationDescriptor* activationDescriptor)
Definition at line 24 of file NeonConvolution2dWorkload.cpp.
References TensorInfo::IsConstant().
Referenced by NeonLayerSupport::IsConvolution2dSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonConvolution3dWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const Convolution3dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, bool isFastMathEnabled, const ActivationDescriptor* activationDescriptor)
Definition at line 24 of file NeonConvolution3dWorkload.cpp.
Referenced by NeonLayerSupport::IsConvolution3dSupported().
arm_compute::Status NeonDepthToSpaceWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const DepthToSpaceDescriptor& descriptor)
Definition at line 19 of file NeonDepthToSpaceWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by NeonLayerSupport::IsDepthToSpaceSupported().
arm_compute::Status NeonDepthwiseConvolutionWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const DepthwiseConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, const ActivationDescriptor* activationDescriptor)
Definition at line 29 of file NeonDepthwiseConvolutionWorkload.cpp.
References TensorInfo::IsConstant().
Referenced by NeonLayerSupport::IsDepthwiseConvolutionSupported(), NeonLayerSupport::IsDilatedDepthwiseConvolutionSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonDequantizeWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 22 of file NeonDequantizeWorkload.cpp.
Referenced by NeonLayerSupport::IsDequantizeSupported().
bool NeonDetected()
arm_compute::Status NeonDetectionPostProcessValidate(const TensorInfo& boxEncodings, const TensorInfo& scores, const TensorInfo& anchors, const TensorInfo& detectionBoxes, const TensorInfo& detectionClasses, const TensorInfo& detectionScores, const TensorInfo& numDetections, const DetectionPostProcessDescriptor& descriptor)
Definition at line 32 of file NeonDetectionPostProcessWorkload.cpp.
References info, and MakeInfo().
arm_compute::Status NeonDivisionWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 18 of file NeonDivisionWorkload.cpp.
Referenced by NeonLayerSupport::IsDivisionSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonExpWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonExpWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonFullyConnectedWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const TensorInfo& weights, const Optional<TensorInfo>& biases, const FullyConnectedDescriptor& descriptor, const ActivationDescriptor* activationDescriptor)
Definition at line 24 of file NeonFullyConnectedWorkload.cpp.
References TensorInfo::IsConstant().
Referenced by NeonLayerSupport::IsFullyConnectedSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonGatherNdWorkloadValidate(const TensorInfo& paramsInfo, const TensorInfo& indicesInfo, const TensorInfo& outputInfo)
Validates the internal Mul, ReduceSum, Gather and Reshape stages in turn, and returns OK only if all of them are valid.
Definition at line 14 of file NeonGatherNdWorkload.cpp.
References CalculateGatherNdKeyIndices(), and TensorInfo::SetShape().
Referenced by NeonLayerSupport::IsGatherNdSupported().
arm_compute::Status NeonGatherWorkloadValidate(const TensorInfo& input, const TensorInfo& indices, const TensorInfo& output, const GatherDescriptor& descriptor)
Definition at line 13 of file NeonGatherWorkload.cpp.
Referenced by NeonLayerSupport::IsGatherSupported().
arm_compute::Status NeonInstanceNormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const InstanceNormalizationDescriptor& descriptor)
Definition at line 19 of file NeonInstanceNormalizationWorkload.cpp.
Referenced by NeonLayerSupport::IsInstanceNormalizationSupported().
arm_compute::Status NeonL2NormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const L2NormalizationDescriptor& descriptor)
Definition at line 19 of file NeonL2NormalizationFloatWorkload.cpp.
Referenced by NeonLayerSupport::IsL2NormalizationSupported().
arm_compute::Status NeonLogicalAndWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 18 of file NeonLogicalAndWorkload.cpp.
Referenced by NeonLayerSupport::IsLogicalBinarySupported().
arm_compute::Status NeonLogicalNotWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 19 of file NeonLogicalNotWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonLogicalOrWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 18 of file NeonLogicalOrWorkload.cpp.
Referenced by NeonLayerSupport::IsLogicalBinarySupported().
arm_compute::Status NeonLogSoftmaxWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const LogSoftmaxDescriptor& descriptor)
Definition at line 19 of file NeonLogSoftmaxWorkload.cpp.
Referenced by NeonLayerSupport::IsLogSoftmaxSupported().
arm_compute::Status NeonLogWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonLogWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonLstmFloatWorkloadValidate(const TensorInfo& input, const TensorInfo& outputStateIn, const TensorInfo& cellStateIn, const TensorInfo& scratchBuffer, const TensorInfo& outputStateOut, const TensorInfo& cellStateOut, const TensorInfo& output, const LstmDescriptor& descriptor, const LstmInputParamsInfo& paramsInfo)
Definition at line 253 of file NeonLstmFloatWorkload.cpp.
Referenced by NeonLayerSupport::IsLstmSupported().
arm_compute::Status NeonMaximumWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 14 of file NeonMaximumWorkload.cpp.
Referenced by NeonLayerSupport::IsMaximumSupported().
arm_compute::Status NeonMeanWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const MeanDescriptor& descriptor)
Definition at line 18 of file NeonMeanWorkload.cpp.
Referenced by NeonLayerSupport::IsMeanSupported().
arm_compute::Status NeonMinimumWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Validate function for validating the inputs and output.
Parameters:
    input0 [in] The input0 value to be validated.
    input1 [in] The input1 value to be validated.
    output [in] The output value to be validated.
Definition at line 15 of file NeonMinimumWorkload.cpp.
Referenced by NeonLayerSupport::IsMinimumSupported().
arm_compute::Status NeonMultiplicationWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 19 of file NeonMultiplicationWorkload.cpp.
Referenced by NeonLayerSupport::IsMultiplicationSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonNegWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonNegWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonNormalizationWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const NormalizationDescriptor & | descriptor | ||
) |
Definition at line 49 of file NeonNormalizationFloatWorkload.cpp.
Referenced by NeonLayerSupport::IsNormalizationSupported().
arm_compute::Status NeonPadWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const PadDescriptor & | descriptor | ||
) |
Definition at line 59 of file NeonPadWorkload.cpp.
Referenced by NeonLayerSupport::IsPadSupported().
arm_compute::Status NeonPermuteWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const PermuteDescriptor & | descriptor | ||
) |
Definition at line 15 of file NeonPermuteWorkload.cpp.
Referenced by NeonLayerSupport::IsPermuteSupported().
arm_compute::Status NeonPooling2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Pooling2dDescriptor & | descriptor | ||
) |
Definition at line 22 of file NeonPooling2dWorkload.cpp.
Referenced by NeonLayerSupport::IsPooling2dSupported().
arm_compute::Status NeonPooling3dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Pooling3dDescriptor & | descriptor | ||
) |
Definition at line 15 of file NeonPooling3dWorkload.cpp.
Referenced by NeonLayerSupport::IsPooling3dSupported().
arm_compute::Status NeonPreluWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | alpha, | ||
const TensorInfo & | output | ||
) |
Definition at line 17 of file NeonPreluWorkload.cpp.
Referenced by NeonLayerSupport::IsPreluSupported().
arm_compute::Status NeonQLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | output, | ||
const QLstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 243 of file NeonQLstmWorkload.cpp.
Referenced by NeonLayerSupport::IsQLstmSupported().
arm_compute::Status NeonQuantizedLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | outputStateOut, | ||
const QuantizedLstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 131 of file NeonQuantizedLstmWorkload.cpp.
Referenced by NeonLayerSupport::IsQuantizedLstmSupported().
arm_compute::Status NeonQuantizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file NeonQuantizeWorkload.cpp.
Referenced by NeonLayerSupport::IsQuantizeSupported().
arm_compute::Status NeonReduceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ReduceDescriptor & | descriptor | ||
) |
Definition at line 19 of file NeonReduceWorkload.cpp.
References ReduceDescriptor::m_vAxis.
Referenced by NeonLayerSupport::IsReduceSupported().
arm_compute::Status NeonReshapeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 17 of file NeonReshapeWorkload.cpp.
Referenced by NeonLayerSupport::IsReshapeSupported().
arm_compute::Status NeonResizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ResizeDescriptor & | descriptor | ||
) |
Definition at line 22 of file NeonResizeWorkload.cpp.
Referenced by NeonLayerSupport::IsResizeSupported().
arm_compute::Status NeonRsqrtWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file NeonRsqrtWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonSinWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 17 of file NeonSinWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonSliceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SliceDescriptor & | descriptor | ||
) |
Definition at line 21 of file NeonSliceWorkload.cpp.
Referenced by NeonLayerSupport::IsSliceSupported().
arm_compute::Status NeonSoftmaxWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SoftmaxDescriptor & | descriptor | ||
) |
Definition at line 19 of file NeonSoftmaxWorkload.cpp.
Referenced by NeonLayerSupport::IsSoftmaxSupported().
arm_compute::Status NeonSpaceToBatchNdWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SpaceToBatchNdDescriptor & | descriptor | ||
) |
Definition at line 20 of file NeonSpaceToBatchNdWorkload.cpp.
Referenced by NeonLayerSupport::IsSpaceToBatchNdSupported().
arm_compute::Status NeonSpaceToDepthWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SpaceToDepthDescriptor & | descriptor | ||
) |
Definition at line 19 of file NeonSpaceToDepthWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by NeonLayerSupport::IsSpaceToDepthSupported().
arm_compute::Status NeonSplitterWorkloadValidate | ( | const TensorInfo & | input, |
const std::vector< std::reference_wrapper< TensorInfo >> & | outputs, | ||
unsigned int | splitAxis | ||
) |
Definition at line 32 of file NeonSplitterWorkload.cpp.
Referenced by NeonLayerSupport::IsSplitterSupported().
arm_compute::Status NeonSqrtWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file NeonSqrtWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonStackWorkloadValidate | ( | const std::vector< const TensorInfo *> & | inputs, |
const TensorInfo & | output, | ||
const StackDescriptor & | descriptor | ||
) |
Definition at line 27 of file NeonStackWorkload.cpp.
Referenced by NeonLayerSupport::IsStackSupported().
arm_compute::Status NeonStridedSliceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const StridedSliceDescriptor & | descriptor | ||
) |
Definition at line 19 of file NeonStridedSliceWorkload.cpp.
Referenced by NeonLayerSupport::IsStridedSliceSupported().
arm_compute::Status NeonSubtractionWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 22 of file NeonSubtractionWorkload.cpp.
Referenced by NeonLayerSupport::IsSubtractionSupported(), and NeonBackend::OptimizeSubgraphView().
constexpr const char* armnn::NeonTensorHandleFactoryId | ( | ) |
Definition at line 14 of file NeonTensorHandleFactory.hpp.
Referenced by NeonTensorHandleFactory::GetIdStatic().
arm_compute::Status NeonTransposeConvolution2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TransposeConvolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases | ||
) |
Definition at line 25 of file NeonTransposeConvolution2dWorkload.cpp.
Referenced by NeonLayerSupport::IsTransposeConvolution2dSupported().
arm_compute::Status NeonTransposeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TransposeDescriptor & | descriptor | ||
) |
Definition at line 15 of file NeonTransposeWorkload.cpp.
Referenced by NeonLayerSupport::IsTransposeSupported().
arm_compute::Status NeonUnidirectionalSequenceLstmFloatWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const UnidirectionalSequenceLstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 510 of file NeonUnidirectionalSequenceLstmFloatWorkload.cpp.
References TensorInfo::GetShape(), and LstmDescriptor::m_TimeMajor.
Referenced by NeonLayerSupport::IsUnidirectionalSequenceLstmSupported().
arm_compute::Status NeonUnidirectionalSequenceLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const UnidirectionalSequenceLstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 491 of file NeonUnidirectionalSequenceLstmWorkload.cpp.
References TensorInfo::GetShape(), and LstmDescriptor::m_TimeMajor.
Referenced by NeonLayerSupport::IsUnidirectionalSequenceLstmSupported().
bool armnn::NextIndex | ( | const unsigned int | numDims, |
const armnn::TensorShape & | dims, | ||
std::vector< unsigned int > & | current | ||
) |
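NextIndex advances a multidimensional index through a tensor shape like an odometer, returning false once every position has been visited; the reference Reduce kernel pairs it with ReducedOutputOffset to walk input elements. A standalone sketch of that increment logic (illustrative, not the Arm NN source):

```cpp
#include <cassert>
#include <vector>

// Row-major "odometer" increment over a shape, in the spirit of the
// reference-backend helper NextIndex: bumps the innermost index, carrying
// into more-significant dimensions, and returns false once the iteration
// wraps past the last element. (Sketch only.)
bool NextIndexSketch(unsigned int numDims,
                     const std::vector<unsigned int>& dims,
                     std::vector<unsigned int>& current)
{
    for (int i = static_cast<int>(numDims) - 1; i >= 0; --i)
    {
        if (current[i] + 1 < dims[i])
        {
            ++current[i];
            return true;
        }
        current[i] = 0; // carry into the next more-significant dimension
    }
    return false; // wrapped: every index has been visited
}
```

Looping while it returns true visits each element of the shape exactly once.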
std::vector< unsigned int > NonMaxSuppression | ( | unsigned int | numBoxes, |
const std::vector< float > & | boxCorners, | ||
const std::vector< float > & | scores, | ||
float | nmsScoreThreshold, | ||
unsigned int | maxDetection, | ||
float | nmsIouThreshold | ||
) |
Definition at line 49 of file DetectionPostProcess.cpp.
References GenerateRangeK(), IntersectionOverUnion(), numeric_cast(), and TopKSort().
Referenced by DetectionPostProcess(), and TEST_SUITE().
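NonMaxSuppression keeps the highest-scoring boxes and discards any remaining box whose overlap with an already-kept box exceeds nmsIouThreshold. The overlap measure comes from IntersectionOverUnion; here is a standalone sketch of that computation, assuming the {yMin, xMin, yMax, xMax} corner layout for each box (illustrative, not the DetectionPostProcess source):

```cpp
#include <algorithm>
#include <cassert>

// Intersection-over-union of two axis-aligned boxes given as
// {yMin, xMin, yMax, xMax}. Sketch of the IntersectionOverUnion helper.
float IouSketch(const float* boxA, const float* boxB)
{
    float areaA = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1]);
    float areaB = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1]);
    if (areaA <= 0.0f || areaB <= 0.0f)
    {
        return 0.0f; // degenerate boxes cannot overlap
    }
    float yMin = std::max(boxA[0], boxB[0]);
    float xMin = std::max(boxA[1], boxB[1]);
    float yMax = std::min(boxA[2], boxB[2]);
    float xMax = std::min(boxA[3], boxB[3]);
    float h = std::max(yMax - yMin, 0.0f);
    float w = std::max(xMax - xMin, 0.0f);
    float intersection = h * w;
    return intersection / (areaA + areaB - intersection);
}
```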
std::enable_if_t< std::is_unsigned<Source>::value && std::is_unsigned<Dest>::value, Dest> armnn::numeric_cast | ( | Source | source | ) |
Definition at line 35 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
Referenced by AllocateOutputData(), ArgMinMax(), armnnTfLiteParser::AsFloatArray(), CheckInferenceTimeThreshold(), ClArgMinMaxWorkload::ClArgMinMaxWorkload(), ClSpaceToBatchNdWorkload::ClSpaceToBatchNdWorkload(), ClStridedSliceWorkload::ClStridedSliceWorkload(), ComputeReductionTensorShape(), armnnTfLiteParser::ComputeWrappedIndex(), OutputSlot::Connect(), CreateNetworkImpl< IParser >::Create(), OnnxParserImpl::CreateNetworkFromString(), DepthwiseConvolution2dAsymmetricTestImpl(), DepthwiseConvolution2dTestImpl(), DetectionPostProcess(), RefL2NormalizationWorkload::ExecuteAsync(), armnnUtils::ExpandDims(), FakeQuantization(), Gather(), MockCounterDirectory::GetCategoryCount(), MockCounterDirectory::GetCounterCount(), MockCounterDirectory::GetCounterSetCount(), MockCounterDirectory::GetDeviceCount(), IDeserializer::DeserializerImpl::GetNetworkOutputBindingInfo(), OutputSlot::GetNumConnections(), SubgraphView::GetNumInputSlots(), SubgraphView::GetNumOutputSlots(), StridedSliceDescriptor::GetStartForAxis(), StridedSliceDescriptor::GetStopForAxis(), GetStreamMetaDataPacketSize(), Cifar10Database::GetTestCaseData(), YoloDatabase::GetTestCaseData(), armnnUtils::GetUnsignedAxis(), RequestCountersPacketHandler::HandlePacket(), InferenceTestImage::InferenceTestImage(), PreluLayer::InferOutputShapes(), RefLayerSupport::IsMeanSupported(), LogSoftmax(), main(), LoadedNetwork::MakeLoadedNetwork(), NeonArgMinMaxWorkload::NeonArgMinMaxWorkload(), NeonSpaceToBatchNdWorkload::NeonSpaceToBatchNdWorkload(), NeonStridedSliceWorkload::NeonStridedSliceWorkload(), NonMaxSuppression(), ClassifierTestCaseProvider< TDatabase, InferenceModel >::OnInferenceTestFinished(), IDeserializer::DeserializerImpl::OutputShapeOfReshape(), TfLiteParserImpl::OutputShapeOfReshape(), ParseArray(), ParseDataArray< armnn::DataType::QAsymmS8 >(), ParseDataArray< armnn::DataType::QAsymmU8 >(), ParseDataArray< armnn::DataType::QSymmS8 >(), Pooling2d(), Pooling3d(), ClassifierTestCase< TTestCaseDatabase, TModel >::ProcessResult(), Reduce(), InferenceModel< IParser, TDataType >::Run(), InferenceModel< IParser, TDataType >::RunAsync(), ClContextSerializer::SaveSerializedToStream(), ISerializer::SerializerImpl::SaveSerializedToStream(), SimpleConvolution2dNhwcTestImpl(), SimpleConvolution2dTestImpl(), SimpleConvolution3dTestImpl(), InferenceTestImage::StbResize(), StridedSlice(), Graph::SubstituteSubgraph(), TEST_SUITE(), MeanQueueDescriptor::Validate(), ReduceLayer::ValidateTensorShapesFromInputs(), MeanLayer::ValidateTensorShapesFromInputs(), VerifyTimelineLabelBinaryPacketData(), and WorkingMemHandle::WorkingMemHandle().
std::enable_if_t< std::is_signed<Source>::value && std::is_integral<Source>::value && std::is_signed<Dest>::value && std::is_integral<Dest>::value, Dest> armnn::numeric_cast | ( | Source | source | ) |
Definition at line 58 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_floating_point<Source>::value && std::is_floating_point<Dest>::value, Dest> armnn::numeric_cast | ( | Source | source | ) |
Definition at line 83 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_floating_point<Source>::value && std::is_signed<Dest>::value && std::is_integral<Dest>::value, Dest> armnn::numeric_cast | ( | Source | source | ) |
Definition at line 109 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_signed<Source>::value && std::is_integral<Source>::value && std::is_floating_point<Dest>::value, Dest> armnn::numeric_cast | ( | Source | source | ) |
Definition at line 135 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_signed<Dest>::value && std::is_integral<Dest>::value && std::is_unsigned<Source>::value, Dest> armnn::numeric_cast | ( | Source | sValue | ) |
Definition at line 165 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_floating_point<Dest>::value && std::is_unsigned<Source>::value, Dest> armnn::numeric_cast | ( | Source | sValue | ) |
Definition at line 184 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_unsigned<Dest>::value && std::is_signed<Source>::value && std::is_integral<Source>::value, Dest> armnn::numeric_cast | ( | Source | sValue | ) |
Definition at line 206 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t< std::is_unsigned<Dest>::value && std::is_floating_point<Source>::value, Dest> armnn::numeric_cast | ( | Source | sValue | ) |
Definition at line 230 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
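Every numeric_cast overload above does the same job for a different source/destination pairing: convert, and throw via ARMNN_NUMERIC_CAST_CHECK when the value cannot be represented in the destination type. A standalone sketch of the signed-integral to unsigned-integral case (hypothetical name, not the Arm NN implementation):

```cpp
#include <cassert>
#include <limits>
#include <stdexcept>
#include <type_traits>

// Range-checked narrowing cast in the spirit of armnn::numeric_cast:
// the signed-integral -> unsigned-integral overload rejects negative
// values and values above the destination maximum. (Sketch only; the
// real implementation reports failures via ARMNN_NUMERIC_CAST_CHECK.)
template <typename Dest, typename Source>
std::enable_if_t<std::is_unsigned<Dest>::value &&
                 std::is_signed<Source>::value &&
                 std::is_integral<Source>::value, Dest>
CheckedCast(Source value)
{
    if (value < 0)
    {
        throw std::runtime_error("numeric_cast failed: negative source");
    }
    if (static_cast<unsigned long long>(value) >
        static_cast<unsigned long long>(std::numeric_limits<Dest>::max()))
    {
        throw std::runtime_error("numeric_cast failed: out of range");
    }
    return static_cast<Dest>(value);
}
```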
inline |
Definition at line 19 of file BatchToSpaceNd.cpp.
References DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), and NHWC.
Referenced by BatchToSpaceNd().
inline |
Deprecated function that will be removed together with the Compute enum.
Definition at line 47 of file BackendId.hpp.
References GetComputeDeviceAsCString().
inline |
Deprecated function that will be removed together with the Compute enum.
Definition at line 58 of file BackendId.hpp.
References GetComputeDeviceAsCString().
inline |
Definition at line 68 of file IBackendInternal.hpp.
References BackendVersion::m_Major, and BackendVersion::m_Minor.
inline |
Deprecated function that will be removed together with the Compute enum.
Definition at line 69 of file BackendId.hpp.
References GetComputeDeviceAsCString().
inline |
Definition at line 122 of file BFloat16.hpp.
References BFloat16::ToFloat32(), and BFloat16::Val().
inline |
Definition at line 176 of file BackendId.hpp.
std::ostream& armnn::operator<< | ( | std::ostream & | os, |
const TContainer< BackendId, TContainerTemplateArgs... > & | ids | ||
) |
Definition at line 183 of file BackendId.hpp.
inline |
Definition at line 297 of file TypesUtils.hpp.
References GetStatusAsCString().
inline |
Definition at line 304 of file TypesUtils.hpp.
References Dequantize, TensorShape::GetNumDimensions(), and Quantize.
inline |
Definition at line 23 of file InferenceTest.hpp.
References ParseComputeDevice(), and Undefined.
inline |
Definition at line 36 of file InferenceTest.hpp.
References ParseComputeDevice(), and Undefined.
IOptimizedNetworkPtr Optimize | ( | const INetwork & | network,
const std::vector< BackendId > & | backendPreferences, | ||
const IDeviceSpec & | deviceSpec, | ||
const OptimizerOptions & | options = OptimizerOptions(), | ||
Optional< std::vector< std::string > &> | messages = EmptyOptional() | ||
) |
Create an optimized version of the network.
network | INetwork description of the network to be optimized. |
backendPreferences | The choice of the backend ordered by user preferences. |
deviceSpec | DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec() |
messages | If there are failures or warnings, a string describing them will be added to the vector |
options | OptimizerOptions object with optimizer configuration options |
Definition at line 1847 of file Network.cpp.
References BackendOptions::Var::AsBool(), IOptimizedNetwork::Optimize, ParseOptions(), and INetwork::pNetworkImpl.
Referenced by armnn::experimental::AsyncEndToEndTestImpl(), armnn::experimental::AsyncThreadedEndToEndTestImpl(), GetSoftmaxProfilerJson(), InferenceModel< IParser, TDataType >::InferenceModel(), ParserFlatbuffersFixture::loadNetwork(), main(), QLstmEndToEnd(), QuantizedLstmEndToEnd(), ParserPrototxtFixture< TParser >::Setup(), ParserFlatbuffersSerializeFixture::Setup(), ParserPrototxtFixture< TParser >::SetupOptimizedNetwork(), TEST_CASE_FIXTURE(), TEST_SUITE(), VerifyPostOptimisationStructureTestImpl(), and IMemoryOptimizerStrategy::~IMemoryOptimizerStrategy().
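A typical call sequence, sketched under the assumption that the Arm NN 22.05 headers and library are available; the backend preferences and options shown are examples only, not recommendations:

```cpp
// Sketch of the usual armnn::Optimize call sequence. Assumes the Arm NN
// headers/library are present; this fragment does not build standalone.
#include <armnn/ArmNN.hpp>

#include <string>
#include <vector>

armnn::IOptimizedNetworkPtr OptimizeForCpu(const armnn::INetwork& network,
                                           armnn::IRuntime& runtime)
{
    std::vector<std::string> messages;          // failures/warnings land here
    armnn::OptimizerOptions options;
    options.m_ReduceFp32ToFp16 = false;         // example: keep FP32 precision

    // Backends are tried in order of preference; CpuRef is the fallback.
    return armnn::Optimize(network,
                           { armnn::Compute::CpuAcc, armnn::Compute::CpuRef },
                           runtime.GetDeviceSpec(),
                           options,
                           armnn::Optional<std::vector<std::string>&>(messages));
}
```

The returned IOptimizedNetworkPtr is then passed to IRuntime::LoadNetwork(); inspect `messages` when optimization reports problems.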
IOptimizedNetworkPtr Optimize | ( | const Graph & | inGraph,
const std::vector< BackendId > & | backendPreferences, | ||
const IDeviceSpec & | deviceSpec, | ||
const OptimizerOptions & | options, | ||
Optional< std::vector< std::string > &> | messages = EmptyOptional() | ||
) |
Create an optimized version of the network.
inGraph | Graph to be optimized. |
backendPreferences | The choice of the backend ordered by user preferences. |
deviceSpec | DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec() |
messages | If there are failures or warnings, a string describing them will be added to the vector |
options | OptimizerOptions object with optimizer configuration options |
Definition at line 1670 of file Network.cpp.
References Graph::AddCompatibilityLayers(), ApplyBackendOptimizations(), ARMNN_LOG, ARMNN_SCOPED_PROFILING_EVENT, AssignBackends(), Graph::begin(), CreateSupportedBackends(), debug, IOptimizedNetwork::Destroy(), Graph::end(), BackendSettings::GetAvailablePreferredBackends(), ProfilerManager::GetInstance(), Graph::GetProfiler(), InferAndValidate, Graph::InferTensorInfos(), IOptimizedNetwork::IOptimizedNetwork(), OptimizerOptions::m_Debug, OptimizationResult::m_Error, OptimizerOptions::m_ImportEnabled, OptimizerOptions::m_ModelOptions, OptimizerOptions::m_ProfilingEnabled, OptimizerOptions::m_ReduceFp32ToBf16, OptimizerOptions::m_ReduceFp32ToFp16, OptimizerOptions::m_shapeInferenceMethod, BackendSettings::m_SupportedBackends, MakeOptimizations(), Optimizer::Pass(), IOptimizedNetwork::pOptimizedNetworkImpl, ProfilerManager::RegisterProfiler(), ReportError(), SelectTensorHandleStrategy(), OptimizerOptions::ToString(), Undefined, ValidateOnly, and Graph::VerifyConstantLayerSetTensorInfo().
void Pad | ( | const TensorInfo & | inputInfo, |
const TensorInfo & | outputInfo, | ||
const ITensorHandle * | inputHandle, | ||
ITensorHandle * | outputHandle, | ||
const PadQueueDescriptor & | data | ||
) |
Definition at line 39 of file Pad.cpp.
References Decoder< IType >::Get(), TensorShape::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), PadDescriptor::m_PadList, PadDescriptor::m_PadValue, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, ITensorHandle::Map(), and Encoder< IType >::Set().
Referenced by TEST_SUITE().
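The kernel first fills the whole output with m_PadValue and then copies the input into the interior region described by m_PadList. A one-dimensional standalone sketch of that fill-then-copy pattern (the real kernel handles multi-dimensional pad lists and arbitrary ranks):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// 1-D constant padding in the spirit of armnn::Pad: allocate the output
// pre-filled with the pad value, then copy the input into the interior.
std::vector<float> PadSketch(const std::vector<float>& input,
                             unsigned int padBefore,
                             unsigned int padAfter,
                             float padValue)
{
    std::vector<float> output(padBefore + input.size() + padAfter, padValue);
    std::copy(input.begin(), input.end(), output.begin() + padBefore);
    return output;
}
```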
bool armnn::ParseBoolean | ( | const BackendOptions::Var & | value, |
bool | defaultValue | ||
) |
Definition at line 97 of file ClBackendContext.cpp.
References BackendOptions::Var::AsBool(), and BackendOptions::Var::IsBool().
inline |
Definition at line 312 of file BackendOptions.hpp.
References BackendOptions::Var::AsBool(), and BackendOptions::Var::IsBool().
constexpr armnn::Compute armnn::ParseComputeDevice | ( | const char * | str | ) |
Deprecated function that will be removed together with the Compute enum.
Definition at line 182 of file TypesUtils.hpp.
References CpuAcc, CpuRef, GpuAcc, StrEqual(), and Undefined.
Referenced by operator>>().
std::string armnn::ParseFile | ( | const BackendOptions::Var & | value, |
std::string | defaultValue | ||
) |
Definition at line 106 of file ClBackendContext.cpp.
References BackendOptions::Var::AsString(), and BackendOptions::Var::IsString().
Referenced by ClBackendContext::ClBackendContext(), and ClBackendModelContext::ClBackendModelContext().
inline |
Definition at line 330 of file BackendOptions.hpp.
References BackendOptions::Var::AsInt(), and BackendOptions::Var::IsInt().
Referenced by ClBackendModelContext::ClBackendModelContext().
void armnn::ParseOptions | ( | const std::vector< BackendOptions > & | options, |
BackendId | backend, | ||
F | f | ||
) |
Definition at line 297 of file BackendOptions.hpp.
References BackendOptions::BackendOption::GetName(), and BackendOptions::BackendOption::GetValue().
Referenced by ClBackendContext::ClBackendContext(), ClBackendModelContext::ClBackendModelContext(), NeonBackendModelContext::NeonBackendModelContext(), Optimize(), and RuntimeImpl::RuntimeImpl().
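ParseOptions visits every BackendOptions block whose backend id matches and hands each option to the callback f; that is how ClBackendContext and RuntimeImpl pick their settings out of the option list. A standalone sketch of the dispatch pattern, using hypothetical stand-in types in place of BackendOptions::BackendOption:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-ins for BackendOptions (illustration only).
struct Option       { std::string name; int value; };
struct OptionsBlock { std::string backendId; std::vector<Option> options; };

// Dispatch pattern in the spirit of armnn::ParseOptions: visit every
// option belonging to the requested backend and hand it to the callback.
void ParseOptionsSketch(const std::vector<OptionsBlock>& blocks,
                        const std::string& backend,
                        const std::function<void(const std::string&, int)>& f)
{
    for (const auto& block : blocks)
    {
        if (block.backendId != backend) { continue; }
        for (const auto& option : block.options)
        {
            f(option.name, option.value);
        }
    }
}
```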
inline |
Definition at line 321 of file BackendOptions.hpp.
References BackendOptions::Var::AsString(), and BackendOptions::Var::IsString().
TuningLevel armnn::ParseTuningLevel | ( | const BackendOptions::Var & | value, |
TuningLevel | defaultValue | ||
) |
Definition at line 79 of file ClBackendContext.cpp.
References ARMNN_LOG, BackendOptions::Var::AsInt(), Exhaustive, BackendOptions::Var::IsInt(), None, and warning.
Referenced by ClBackendContext::ClBackendContext().
armnn::ConstTensor PermuteTensor | ( | const ConstTensorHandle * | tensor, |
const PermutationVector & | permutationVector, | ||
void * | permuteBuffer | ||
) |
Definition at line 18 of file WorkloadUtils.cpp.
References ARMNN_ASSERT_MSG, ConstTensorHandle::GetConstTensor(), TensorInfo::GetDataType(), GetDataTypeSize(), TensorInfo::GetNumBytes(), TensorInfo::GetShape(), PermutationVector::GetSize(), ConstTensorHandle::GetTensorInfo(), Permute, armnnUtils::Permuted(), and TensorInfo::SetConstant().
Referenced by Convert1HWOTensorToAcl(), Convert1HWOtoMIHW(), ConvertWeightTensorFromArmnnToAcl(), and GatherTensorHandlePairs().
DestType armnn::PolymorphicDowncast | ( | SourceType * | value | ) |
Polymorphic downcast for built-in pointers only.
Usage: Child* pChild = PolymorphicDowncast<Child*>(pBase);
DestType | Pointer type to the target object (Child pointer type) |
SourceType | Pointer type to the source object (Base pointer type) |
value | Pointer to the source object |
Definition at line 74 of file PolymorphicDowncast.hpp.
References ARMNN_POLYMORPHIC_CAST_CHECK.
Referenced by ClLayerSupport::IsLayerSupported(), and NeonLayerSupport::IsLayerSupported().
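The helper behaves like a static_cast whose correctness is verified (via ARMNN_POLYMORPHIC_CAST_CHECK) before the cast is taken. A standalone sketch of a debug-checked downcast in the same spirit — the name is hypothetical, and the real helper reports failures through its own macro rather than assert:

```cpp
#include <cassert>
#include <type_traits>

// Debug-checked static downcast in the spirit of armnn::PolymorphicDowncast:
// a dynamic_cast verifies the pointer really refers to the derived type,
// then the (cheap) static_cast result is returned.
template <typename DestType, typename SourceType>
DestType CheckedDowncast(SourceType* value)
{
    static_assert(std::is_pointer<DestType>::value,
                  "DestType must be a pointer type");
    assert(dynamic_cast<DestType>(value) == static_cast<DestType>(value));
    return static_cast<DestType>(value);
}

// Minimal polymorphic hierarchy for the usage example below.
struct Base  { virtual ~Base() = default; };
struct Child : Base { int tag = 42; };
```

Usage mirrors the documented pattern: `Child* pChild = CheckedDowncast<Child*>(pBase);`.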
auto armnn::PolymorphicPointerDowncast | ( | const SourceType & | value | ) |
Polymorphic downcast for shared pointers and built-in pointers.
Usage: auto pChild = PolymorphicPointerDowncast<Child>(pBase)
DestType | Type of the target object (Child type) |
SourceType | Pointer type to the source object (Base (shared) pointer type) |
value | Pointer to the source object |
Definition at line 93 of file PolymorphicDowncast.hpp.
References ARMNN_POLYMORPHIC_CAST_CHECK.
void Pooling2d | ( | Decoder< float > & | rInputDecoder, |
Encoder< float > & | rOutputEncoder, | ||
const TensorInfo & | inputInfo, | ||
const TensorInfo & | outputInfo, | ||
const Pooling2dDescriptor & | params | ||
) |
Computes the Pooling2d operation.
Definition at line 142 of file Pooling2d.cpp.
References Decoder< IType >::DecodeTensor(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), Pooling2dDescriptor::m_DataLayout, Pooling2dDescriptor::m_PadBottom, Pooling2dDescriptor::m_PaddingMethod, Pooling2dDescriptor::m_PadLeft, Pooling2dDescriptor::m_PadRight, Pooling2dDescriptor::m_PadTop, Pooling2dDescriptor::m_PoolHeight, Pooling2dDescriptor::m_PoolType, Pooling2dDescriptor::m_PoolWidth, Pooling2dDescriptor::m_StrideX, Pooling2dDescriptor::m_StrideY, NHWC, numeric_cast(), Pooling2d(), and Encoder< IType >::Set().
Referenced by Pooling2d(), Pooling2dLayer::Pooling2dLayer(), and TEST_SUITE().
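Stripped of padding, data-layout handling, and the other pool types, the core of a 2D pooling kernel is a window/stride walk over the input plane. A standalone max-pooling sketch of that arithmetic for a single channel (illustrative, not the reference kernel):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal max-pooling over a single-channel H x W plane, showing the
// window/stride arithmetic a Pooling2d kernel performs.
// (Sketch only: no padding, no NCHW/NHWC handling, max pooling only.)
std::vector<float> MaxPool2dSketch(const std::vector<float>& input,
                                   unsigned int height, unsigned int width,
                                   unsigned int poolH, unsigned int poolW,
                                   unsigned int strideY, unsigned int strideX)
{
    unsigned int outH = (height - poolH) / strideY + 1;
    unsigned int outW = (width - poolW) / strideX + 1;
    std::vector<float> output(outH * outW);
    for (unsigned int oy = 0; oy < outH; ++oy)
    {
        for (unsigned int ox = 0; ox < outW; ++ox)
        {
            // Take the maximum over the poolH x poolW window.
            float best = input[(oy * strideY) * width + ox * strideX];
            for (unsigned int ky = 0; ky < poolH; ++ky)
            {
                for (unsigned int kx = 0; kx < poolW; ++kx)
                {
                    best = std::max(best,
                        input[(oy * strideY + ky) * width + (ox * strideX + kx)]);
                }
            }
            output[oy * outW + ox] = best;
        }
    }
    return output;
}
```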
void Pooling3d | ( | Decoder< float > & | rInputDecoder, |
Encoder< float > & | rOutputEncoder, | ||
const TensorInfo & | inputInfo, | ||
const TensorInfo & | outputInfo, | ||
const Pooling3dDescriptor & | params | ||
) |
Computes the Pooling3d operation.
Definition at line 172 of file Pooling3d.cpp.
References Decoder< IType >::DecodeTensor(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDepthIndex(), DataLayoutIndexed::GetHeightIndex(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), Pooling3dDescriptor::m_DataLayout, Pooling3dDescriptor::m_PadBack, Pooling3dDescriptor::m_PadBottom, Pooling3dDescriptor::m_PaddingMethod, Pooling3dDescriptor::m_PadFront, Pooling3dDescriptor::m_PadLeft, Pooling3dDescriptor::m_PadRight, Pooling3dDescriptor::m_PadTop, Pooling3dDescriptor::m_PoolDepth, Pooling3dDescriptor::m_PoolHeight, Pooling3dDescriptor::m_PoolType, Pooling3dDescriptor::m_PoolWidth, Pooling3dDescriptor::m_StrideX, Pooling3dDescriptor::m_StrideY, Pooling3dDescriptor::m_StrideZ, numeric_cast(), Pooling3d(), and Encoder< IType >::Set().
Referenced by Pooling3d(), and Pooling3dLayer::Pooling3dLayer().
void PreluImpl | ( | const TensorInfo & | inputInfo, |
const TensorInfo & | alphaInfo, | ||
const TensorInfo & | outputInfo, | ||
Decoder< float > & | inputData, | ||
Decoder< float > & | alphaData, | ||
Encoder< float > & | outputData | ||
) |
Definition at line 13 of file PreluImpl.cpp.
References TensorInfo::GetShape(), and BroadcastLoop::Unroll().
Referenced by RefPreluWorkload::ExecuteAsync().
inline |
Profiler used.
Definition at line 180 of file Profiling.hpp.
References ProfilerManager::GetInstance(), and IProfiler::IsProfilingEnabled().
inline |
Definition at line 114 of file RefWorkloadUtils.hpp.
References TensorInfo::GetNumElements(), TensorInfo::GetQuantizationOffset(), and TensorInfo::GetQuantizationScale().
template int32_t Quantize< int32_t > | ( | float | value, |
float | scale, | ||
int32_t | offset | ||
) |
Quantize a floating point value into an integer data type.
Explicit specialization of Quantize for int32_t.
Explicit specialization of Quantize for int16_t.
Explicit specialization of Quantize for uint8_t.
Explicit specialization of Quantize for int8_t.
value | - The value to quantize. |
scale | - The scale (must be non-zero). |
offset | - The offset. |
Definition at line 30 of file TypesUtils.cpp.
References ARMNN_ASSERT.
Referenced by TEST_SUITE().
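Each specialization computes the affine mapping q = round(value / scale) + offset and clamps the result to the destination type's representable range. A standalone sketch under those assumptions (the exact rounding behaviour of the source is not reproduced here):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>

// Affine quantization in the spirit of armnn::Quantize<T>:
// q = round(value / scale) + offset, clamped to T's range.
// (Sketch; assumes round-to-nearest via std::round and scale != 0.)
template <typename T>
T QuantizeSketch(float value, float scale, int offset)
{
    long long q  = static_cast<long long>(std::round(value / scale)) + offset;
    long long lo = static_cast<long long>(std::numeric_limits<T>::lowest());
    long long hi = static_cast<long long>(std::numeric_limits<T>::max());
    return static_cast<T>(std::min(std::max(q, lo), hi));
}
```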
void Reduce | ( | const TensorInfo & | inputInfo, |
const TensorInfo & | outputInfo, | ||
Decoder< float > & | input, | ||
Encoder< float > & | output, | ||
const std::vector< uint32_t > | axis, | ||
const ReduceOperation | reduceOperation | ||
) |
Definition at line 70 of file Reduce.cpp.
References ARMNN_ASSERT, Decoder< IType >::Get(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), Max, Mean, Min, NextIndex(), numeric_cast(), Prod, ReducedOutputOffset(), Encoder< IType >::Set(), and Sum.
unsigned int armnn::ReducedOutputOffset | ( | const unsigned int | numDims, |
const armnn::TensorShape & | dims, | ||
std::vector< unsigned int > & | index, | ||
const unsigned int | numAxis, | ||
const std::vector< unsigned int > & | axis | ||
) |
constexpr const char* armnn::RefBackendId | ( | ) |
Definition at line 10 of file RefBackendId.hpp.
Referenced by RefBackend::GetIdStatic().
constexpr const char* armnn::RefTensorHandleFactoryId | ( | ) |
Definition at line 15 of file RefTensorHandleFactory.hpp.
Referenced by RefTensorHandleFactory::GetIdStatic().
ConstTensor armnn::ReorderWeightChannelsForAcl | ( | const ConstTensor & | weightHandle, |
DataLayout | dataLayout, | ||
void * | permuteBuffer | ||
) |
Definition at line 67 of file WorkloadUtils.cpp.
References BaseTensor< MemoryType >::GetInfo(), TensorInfo::GetNumBytes(), BaseTensor< MemoryType >::GetShape(), NCHW, and NHWC.
void armnn::ReplaceLayers | ( | OptimizationViews & | optimizationViews, |
LayerType * | baseLayer, | ||
std::vector< IConnectableLayer *> & | layers | ||
) |
Definition at line 364 of file ArmComputeSubgraphUtils.hpp.
References OptimizationViews::AddSubstitution().
void armnn::ReportError | ( | const std::string & | errorMessage, |
Optional< std::vector< std::string > &> | errorMessages | ||
) |
Definition at line 556 of file Network.cpp.
References ARMNN_LOG, and warning.
Referenced by AssignBackends(), CheckScaleSetOnQuantizedType(), Optimize(), and ReturnWithError().
inline |
Definition at line 82 of file ArmComputeSubgraphUtils.hpp.
References OptimizationViews::AddUntouchedSubgraph().
Referenced by NeonBackend::OptimizeSubgraphView(), and ClBackend::OptimizeSubgraphView().
void armnn::ReportWarning | ( | const std::string & | warningMessage, |
Optional< std::vector< std::string > &> | warningMessages | ||
) |
Definition at line 568 of file Network.cpp.
References ARMNN_LOG, and warning.
Referenced by ApplyBackendOptimizations(), and AttemptBackendAssignment().
bool armnn::RequiresCopy | ( | ITensorHandleFactory::FactoryId | src, |
ITensorHandleFactory::FactoryId | dst, | ||
TensorHandleFactoryRegistry & | registry | ||
) |
Definition at line 1247 of file Network.cpp.
References ITensorHandleFactory::GetExportFlags(), TensorHandleFactoryRegistry::GetFactory(), and ITensorHandleFactory::GetImportFlags().
Referenced by CalculateSlotOption().
void ReshapeWeightsForAcl | ( | TensorInfo & | weightInfo, |
DataLayout | dataLayout | ||
) |
Definition at line 41 of file WorkloadUtils.cpp.
References TensorInfo::GetShape(), NCHW, NHWC, and TensorInfo::SetShape().
Referenced by ConvertWeightTensorFromArmnnToAcl(), ConvertWeightTensorInfoFromArmnnToAcl(), and GatherTensorHandlePairs().
void Resize | ( | Decoder< float > & | in, |
const TensorInfo & | inputInfo, | ||
Encoder< float > & | out, | ||
const TensorInfo & | outputInfo, | ||
DataLayoutIndexed | dataLayout, | ||
armnn::ResizeMethod | resizeMethod, | ||
bool | alignCorners, | ||
bool | halfPixelCenters | ||
) |
Definition at line 65 of file Resize.cpp.
References ARMNN_ASSERT, Bilinear, Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), NearestNeighbor, Resize(), and Encoder< IType >::Set().
Referenced by InferenceTestImage::GetSizeInBytes(), Resize(), ResizeLayer::ResizeLayer(), and TEST_SUITE().
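For ResizeMethod::NearestNeighbor, each output pixel samples the input at the scaled-down coordinate. A single-channel standalone sketch that ignores the alignCorners and halfPixelCenters adjustments (both shift the sampling coordinates in the real kernel):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Nearest-neighbour resize of a single-channel H x W plane, the simplest
// path through a Resize kernel. (Sketch: no alignCorners/halfPixelCenters,
// no bilinear interpolation, no data-layout handling.)
std::vector<float> ResizeNearestSketch(const std::vector<float>& input,
                                       unsigned int inH, unsigned int inW,
                                       unsigned int outH, unsigned int outW)
{
    std::vector<float> output(outH * outW);
    float scaleY = static_cast<float>(inH) / static_cast<float>(outH);
    float scaleX = static_cast<float>(inW) / static_cast<float>(outW);
    for (unsigned int y = 0; y < outH; ++y)
    {
        unsigned int srcY = std::min(static_cast<unsigned int>(y * scaleY), inH - 1);
        for (unsigned int x = 0; x < outW; ++x)
        {
            unsigned int srcX = std::min(static_cast<unsigned int>(x * scaleX), inW - 1);
            output[y * outW + x] = input[srcY * inW + srcX];
        }
    }
    return output;
}
```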
OptimizationResult armnn::ReturnWithError | ( | OptimizationResult | res, |
const Layer * | layer, | ||
const BackendSettings & | backendSettings, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 580 of file Network.cpp.
References GetLayerTypeAsCString(), Layer::GetType(), OptimizationResult::m_Error, BackendSettings::m_PreferredBackends, and ReportError().
Referenced by AssignBackendsIConnectable(), and AttemptBackendAssignment().
inline |
Definition at line 155 of file ClWorkloadUtils.hpp.
References Error, error, and WrapClError().
Referenced by ClFillWorkload::Execute(), ClPadWorkload::Execute(), ClAdditionWorkload::Execute(), ClSubtractionWorkload::Execute(), ClActivationWorkload::Execute(), ClExpWorkload::Execute(), ClPreluWorkload::Execute(), ClQuantizeWorkload::Execute(), ClConvertFp16ToFp32Workload::Execute(), ClRsqrtWorkload::Execute(), ClSinWorkload::Execute(), ClConvertFp32ToFp16Workload::Execute(), ClAbsWorkload::Execute(), ClLogWorkload::Execute(), ClSqrtWorkload::Execute(), ClLstmFloatWorkload::Execute(), ClCastWorkload::Execute(), ClNegWorkload::Execute(), ClSpaceToDepthWorkload::Execute(), ClNormalizationFloatWorkload::Execute(), ClFloorFloatWorkload::Execute(), ClResizeWorkload::Execute(), ClReshapeWorkload::Execute(), ClGatherWorkload::Execute(), ClInstanceNormalizationWorkload::Execute(), ClBatchToSpaceNdWorkload::Execute(), ClMaximumWorkload::Execute(), ClMinimumWorkload::Execute(), ClArgMinMaxWorkload::Execute(), ClChannelShuffleWorkload::Execute(), ClComparisonWorkload::Execute(), ClSliceWorkload::Execute(), ClL2NormalizationFloatWorkload::Execute(), ClDepthToSpaceWorkload::Execute(), ClDivisionWorkload::Execute(), ClPooling2dWorkload::Execute(), ClStridedSliceWorkload::Execute(), ClGatherNdWorkload::Execute(), ClSpaceToBatchNdWorkload::Execute(), ClPooling3dWorkload::Execute(), ClMultiplicationWorkload::Execute(), ClLogSoftmaxWorkload::Execute(), ClQuantizedLstmWorkload::Execute(), ClSoftmaxWorkload::Execute(), ClBatchNormalizationFloatWorkload::Execute(), ClDepthwiseConvolutionWorkload::Execute(), ClFullyConnectedWorkload::Execute(), ClConvolution3dWorkload::Execute(), ClTransposeWorkload::Execute(), ClTransposeConvolution2dWorkload::Execute(), ClPermuteWorkload::Execute(), and ClConvolution2dWorkload::Execute().
void RuntimeLoadedNetworksReserve(armnn::RuntimeImpl* runtime)
Definition at line 36 of file RuntimeTests.cpp.
Referenced by TEST_SUITE().
OptimizationResult SelectTensorHandleStrategy(Graph& optGraph, BackendsMap& backends, TensorHandleFactoryRegistry& registry, bool importEnabled, Optional<std::vector<std::string>&> errMessages)
Definition at line 1601 of file Network.cpp.
References ARMNN_ASSERT, ARMNN_SCOPED_PROFILING_EVENT, CalculateEdgeStrategy(), CalculateSlotOption(), CalculateSlotOptionForInput(), CalculateSlotOptionForOutput(), Graph::ForEachLayer(), Layer::GetBackendId(), OutputSlot::GetConnections(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), Layer::GetType(), Input, ITensorHandleFactory::LegacyFactoryId, OptimizationResult::m_Error, Output, OutputSlot::SetEdgeStrategy(), OutputSlot::SetTensorHandleFactory(), and Undefined.
Referenced by Optimize(), and TEST_SUITE().
void SetAllLoggingSinks(bool standardOut, bool debugOut, bool coloured)
Definition at line 191 of file Logging.cpp.
Referenced by SimpleLogger< Level >::AddSink(), ConfigureLogging(), main(), and TEST_SUITE().
inline |
Definition at line 91 of file ClWorkloadUtils.hpp.
Referenced by ClSliceWorkload::ClSliceWorkload().
inline |
Definition at line 70 of file ClWorkloadUtils.hpp.
Referenced by ClStridedSliceWorkload::ClStridedSliceWorkload().
void SetLogFilter(LogSeverity level)
Definition at line 73 of file Logging.cpp.
References ARMNN_ASSERT, ARMNN_FALLTHROUGH, Debug, SimpleLogger< Level >::Enable(), Error, Fatal, SimpleLogger< Level >::Get(), IgnoreUnused(), Info, Trace, and Warning.
Referenced by SimpleLogger< Level >::AddSink(), ConfigureLogging(), main(), and TEST_SUITE().
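The References list above (ARMNN_FALLTHROUGH, SimpleLogger::Enable) suggests SetLogFilter uses a cascading switch: selecting a level enables that level and every more severe one. A minimal standalone sketch of that pattern, using a stand-in `Severity` enum rather than armnn::LogSeverity:

```cpp
#include <cassert>
#include <set>

// Stand-in for armnn::LogSeverity; the armnn enum has the same ordering.
enum class Severity { Trace, Debug, Info, Warning, Error, Fatal };

// Cascading-enable sketch: each case falls through to the more severe ones,
// so choosing a filter level enables that level and everything above it.
std::set<Severity> EnabledLevels(Severity filter)
{
    std::set<Severity> enabled;
    switch (filter)
    {
        case Severity::Trace:   enabled.insert(Severity::Trace);   [[fallthrough]];
        case Severity::Debug:   enabled.insert(Severity::Debug);   [[fallthrough]];
        case Severity::Info:    enabled.insert(Severity::Info);    [[fallthrough]];
        case Severity::Warning: enabled.insert(Severity::Warning); [[fallthrough]];
        case Severity::Error:   enabled.insert(Severity::Error);   [[fallthrough]];
        case Severity::Fatal:   enabled.insert(Severity::Fatal);   break;
    }
    return enabled;
}
```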
inline |
Definition at line 167 of file Logging.cpp.
References SimpleLogger< Level >::AddSink(), SimpleLogger< Level >::Get(), and SimpleLogger< Level >::RemoveAllSinks().
inline |
Definition at line 113 of file NeonWorkloadUtils.hpp.
References GetOutputTensorData(), and ITensorHandle::Map().
Referenced by NeonSliceWorkload::NeonSliceWorkload().
inline |
Definition at line 91 of file NeonWorkloadUtils.hpp.
Referenced by NeonStridedSliceWorkload::NeonStridedSliceWorkload().
void armnn::SetValueChecked(Optional<T&> optionalRef, V&& val)
Definition at line 17 of file LayerSupportCommon.hpp.
References OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by FalseFuncF16(), FalseFuncF32(), FalseFuncI32(), FalseFuncU8(), FalseInputFuncF16(), FalseInputFuncF32(), FalseOutputFuncF16(), FalseOutputFuncF32(), ClLayerSupport::IsConcatSupported(), NeonLayerSupport::IsConcatSupported(), ClLayerSupport::IsSplitterSupported(), and NeonLayerSupport::IsSplitterSupported().
void Slice(const TensorInfo& inputInfo, const SliceDescriptor& descriptor, const void* inputData, void* outputData, unsigned int dataTypeSize)
Definition at line 14 of file Slice.cpp.
References ARMNN_ASSERT, TensorShape::GetNumDimensions(), TensorInfo::GetShape(), IgnoreUnused(), SliceDescriptor::m_Begin, and SliceDescriptor::m_Size.
Referenced by TEST_SUITE().
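To illustrate the begin/size semantics that SliceDescriptor's m_Begin and m_Size drive, here is a standalone sketch restricted to 2-D row-major float data. The real armnn::Slice works on raw bytes (dataTypeSize) for tensors of any supported rank; the name `Slice2D` is illustrative only.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Extract a [sizeR x sizeC] window starting at (beginR, beginC) from a
// row-major [rows x cols] float matrix, mirroring the begin/size slice model.
std::vector<float> Slice2D(const std::vector<float>& input,
                           unsigned int rows, unsigned int cols,
                           unsigned int beginR, unsigned int beginC,
                           unsigned int sizeR, unsigned int sizeC)
{
    (void)rows; // kept for clarity; only cols is needed for row strides
    std::vector<float> output(sizeR * sizeC);
    for (unsigned int r = 0; r < sizeR; ++r)
    {
        // each output row is one contiguous run of sizeC input elements
        std::memcpy(&output[r * sizeC],
                    &input[(beginR + r) * cols + beginC],
                    sizeC * sizeof(float));
    }
    return output;
}
```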
void Softmax(Decoder<float>& in, Encoder<float>& out, const TensorInfo& inputTensorInfo, float beta, int axis)
Computes the softmax function on the input, writing to the output, with a shape given by inputTensorInfo.
Definition at line 17 of file Softmax.cpp.
References ARMNN_ASSERT_MSG, Decoder< IType >::Get(), TensorShape::GetNumDimensions(), TensorInfo::GetNumDimensions(), armnnUtils::GetNumElementsBetween(), TensorInfo::GetShape(), and Encoder< IType >::Set().
Referenced by TEST_SUITE().
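As a rough illustration of the computation, here is a numerically stable softmax with a beta (temperature) scale over a flat 1-D buffer. This is a sketch, not Arm NN's implementation, which streams values through Decoder/Encoder and supports an arbitrary axis; `SoftmaxSketch` is an illustrative name.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// softmax(x)_i = exp(beta * x_i) / sum_j exp(beta * x_j),
// computed with the max subtracted first for numerical stability.
std::vector<float> SoftmaxSketch(const std::vector<float>& in, float beta)
{
    float maxVal = in[0];
    for (float v : in) { maxVal = std::max(maxVal, v); }

    std::vector<float> out(in.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < in.size(); ++i)
    {
        out[i] = std::exp((in[i] - maxVal) * beta); // shift keeps exp() bounded
        sum += out[i];
    }
    for (float& v : out) { v /= sum; }
    return out;
}
```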
void SpaceToBatchNd(const TensorInfo& inputInfo, const TensorInfo& outputInfo, const SpaceToBatchNdDescriptor& params, Decoder<float>& inputData, Encoder<float>& outputData)
Definition at line 34 of file SpaceToBatchNd.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), GetOffset(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), SpaceToBatchNdDescriptor::m_BlockShape, SpaceToBatchNdDescriptor::m_DataLayout, SpaceToBatchNdDescriptor::m_PadList, Encoder< IType >::Set(), and SpaceToBatchNd().
Referenced by SpaceToBatchNd(), SpaceToBatchNdLayer::SpaceToBatchNdLayer(), and TEST_SUITE().
void SpaceToDepth(const TensorInfo& inputInfo, const TensorInfo& outputInfo, const SpaceToDepthDescriptor& params, Decoder<float>& inputData, Encoder<float>& outputData)
Definition at line 36 of file SpaceToDepth.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), GetOffset(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), SpaceToDepthDescriptor::m_BlockSize, SpaceToDepthDescriptor::m_DataLayout, Encoder< IType >::Set(), and SpaceToDepth().
Referenced by SpaceToDepth(), SpaceToDepthLayer::SpaceToDepthLayer(), and TEST_SUITE().
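The index arithmetic behind space-to-depth can be sketched for the NHWC case (single batch) with the TensorFlow-style depth ordering. This is an assumption-laden illustration: the real function also supports NCHW via DataLayoutIndexed, and `SpaceToDepthNHWC` is a hypothetical name.

```cpp
#include <cassert>
#include <vector>

// Rearrange [H, W, C] (NHWC, batch 1) into [H/b, W/b, C*b*b]: each b-by-b
// spatial block becomes a group of output channels, ordered row-major by the
// block-local offset (y % b, x % b), then by the original channel.
std::vector<float> SpaceToDepthNHWC(const std::vector<float>& in,
                                    unsigned int h, unsigned int w,
                                    unsigned int c, unsigned int block)
{
    unsigned int outW = w / block;
    unsigned int outC = c * block * block;
    std::vector<float> out(in.size());
    for (unsigned int y = 0; y < h; ++y)
        for (unsigned int x = 0; x < w; ++x)
            for (unsigned int ch = 0; ch < c; ++ch)
            {
                unsigned int oy = y / block, ox = x / block;
                // block-local offset selects the output channel group
                unsigned int oc = ((y % block) * block + (x % block)) * c + ch;
                out[(oy * outW + ox) * outC + oc] = in[(y * w + x) * c + ch];
            }
    return out;
}
```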
void Split(const SplitterQueueDescriptor& data, std::vector<ITensorHandle*> inputs, std::vector<ITensorHandle*> outputs)
Definition at line 21 of file Splitter.cpp.
References ARMNN_ASSERT, Encoder< IType >::Get(), TensorInfo::GetNumDimensions(), TensorInfo::GetShape(), GetTensorInfo(), SplitterQueueDescriptor::ViewOrigin::m_Origin, SplitterQueueDescriptor::m_ViewOrigins, and MaxNumOfTensorDimensions.
Referenced by RefSplitterWorkload::ExecuteAsync(), and Splitter().
void armnn::Splitter(const SplitterQueueDescriptor& data, std::vector<ITensorHandle*> inputs, std::vector<ITensorHandle*> outputs)
Definition at line 17 of file Splitter.hpp.
References ARMNN_ASSERT, TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), GetTensorInfo(), SplitterQueueDescriptor::ViewOrigin::m_Origin, SplitterQueueDescriptor::m_ViewOrigins, MaxNumOfTensorDimensions, and Split().
Referenced by TEST_SUITE().
void Stack(const StackQueueDescriptor& data, std::vector<std::unique_ptr<Decoder<float>>>& inputs, Encoder<float>& output, const TensorInfo& inputInfo, const TensorInfo& outputInfo)
Definition at line 12 of file Stack.cpp.
References TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), StackDescriptor::m_Axis, QueueDescriptor::m_Inputs, StackDescriptor::m_NumInputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
Referenced by TEST_SUITE().
constexpr bool armnn::StrEqual(const char* strA, const char (&strB)[N])
Definition at line 170 of file TypesUtils.hpp.
Referenced by ParseComputeDevice().
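The declared signature compares a runtime C string against a string literal whose length N is deduced from the array reference, which lets the comparison be constexpr. A plausible implementation matching that signature (the actual body lives in TypesUtils.hpp and may differ):

```cpp
#include <cassert>
#include <cstddef>

// Compare strA against the literal strB; N includes the literal's NUL, so we
// compare the first N-1 characters and then require strA to end there too.
template <std::size_t N>
constexpr bool StrEqualSketch(const char* strA, const char (&strB)[N])
{
    bool isEqual = true;
    for (std::size_t i = 0; i < N - 1 && isEqual; ++i)
    {
        isEqual = (strA[i] == strB[i]); // a shorter strA mismatches at its NUL
    }
    return isEqual && strA[N - 1] == '\0'; // reject a longer strA
}
```

This style of helper is useful for dispatching on backend-name strings, e.g. in something like ParseComputeDevice.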
void StridedSlice(const TensorInfo& inputInfo, const StridedSliceDescriptor& params, const void* inputData, void* outputData, unsigned int dataTypeSize)
Definition at line 90 of file StridedSlice.cpp.
References TensorInfo::GetShape(), and numeric_cast().
Referenced by TEST_SUITE().
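The begin/end/stride semantics of StridedSliceDescriptor can be shown in one dimension. This sketch omits everything the real function adds (up to 4-D loop nests, byte-wise copies via dataTypeSize, and the descriptor's begin/end/shrink masks); `StridedSlice1D` is an illustrative name.

```cpp
#include <cassert>
#include <vector>

// Walk [begin, end) with the given stride; a negative stride walks backwards
// over (end, begin], matching the usual strided-slice convention.
std::vector<float> StridedSlice1D(const std::vector<float>& in,
                                  int begin, int end, int stride)
{
    std::vector<float> out;
    if (stride > 0)
    {
        for (int i = begin; i < end; i += stride) { out.push_back(in[i]); }
    }
    else
    {
        for (int i = begin; i > end; i += stride) { out.push_back(in[i]); }
    }
    return out;
}
```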
inline |
Definition at line 36 of file Logging.hpp.
References Debug, Error, Fatal, Info, Trace, and Warning.
Referenced by DelegateOptions::SetLoggingSeverity().
void armnn::swap(OriginsDescriptor& first, OriginsDescriptor& second)
Definition at line 350 of file Descriptors.cpp.
References ViewsDescriptor::swap, and swap().
Referenced by FullyConnectedFloat32Test(), FullyConnectedLargeTestCommon(), BackendId::operator=(), SquashEqualSiblingsImpl< Comparable >::Run(), BackendRegistry::Swap(), and TEST_SUITE().
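Free swap overloads like this one enable the copy-and-swap assignment idiom (note the Referenced by entry for BackendId::operator=). A generic sketch of the pattern with a hypothetical `Widget` type standing in for OriginsDescriptor:

```cpp
#include <cassert>
#include <string>
#include <utility>

class Widget
{
public:
    explicit Widget(std::string name) : m_Name(std::move(name)) {}

    // Non-throwing member-wise swap, found by ADL just like armnn::swap.
    friend void swap(Widget& first, Widget& second) noexcept
    {
        using std::swap;
        swap(first.m_Name, second.m_Name);
    }

    // Copy-and-swap: the by-value parameter makes the copy, the swap commits
    // it, and the old state is destroyed with `other`. Strong exception safety.
    Widget& operator=(Widget other)
    {
        swap(*this, other);
        return *this;
    }

    const std::string& Name() const { return m_Name; }

private:
    std::string m_Name;
};
```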
void armnn::swap(ViewsDescriptor& first, ViewsDescriptor& second)
Definition at line 359 of file Descriptors.cpp.
References ViewsDescriptor::swap.
Referenced by swap().
armnn::TEST_SUITE("TestInputOutputLayerVisitor")
Definition at line 13 of file TestInputOutputLayerVisitor.cpp.
References NetworkImpl::AddInputLayer(), NetworkImpl::AddOutputLayer(), and IConnectableLayer::ExecuteStrategy().
armnn::TEST_SUITE("MemoryManagerTests")
Unit tests for storing, allocating and deallocating with a custom allocator.
Definition at line 53 of file MemoryManagerTests.cpp.
References MemoryManager::Allocate(), MemoryManager::Deallocate(), and MemoryManager::StoreMemToAllocate().
armnn::TEST_SUITE("TestConstTensorLayerVisitor")
Definition at line 110 of file ConstTensorLayerVisitor.cpp.
References NetworkImpl::AddBatchNormalizationLayer(), NetworkImpl::AddConstantLayer(), NetworkImpl::AddConvolution2dLayer(), NetworkImpl::AddDepthwiseConvolution2dLayer(), NetworkImpl::AddFullyConnectedLayer(), NetworkImpl::AddLstmLayer(), NetworkImpl::AddQLstmLayer(), NetworkImpl::AddQuantizedLstmLayer(), IOutputSlot::Connect(), IConnectableLayer::ExecuteStrategy(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), LstmDescriptor::m_ActivationFunc, FullyConnectedDescriptor::m_BiasEnabled, Convolution2dDescriptor::m_BiasEnabled, DepthwiseConvolution2dDescriptor::m_BiasEnabled, QuantizedLstmInputParams::m_CellBias, LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, LstmInputParams::m_CellLayerNormWeights, LstmInputParams::m_CellToForgetWeights, LstmInputParams::m_CellToInputWeights, LstmInputParams::m_CellToOutputWeights, LstmDescriptor::m_CifgEnabled, QLstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, FullyConnectedDescriptor::m_ConstantWeights, Convolution2dDescriptor::m_DataLayout, DepthwiseConvolution2dDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_Eps, QuantizedLstmInputParams::m_ForgetGateBias, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_ForgetLayerNormWeights, QuantizedLstmInputParams::m_InputGateBias, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputLayerNormWeights, QuantizedLstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToCellWeights, QuantizedLstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToForgetWeights, QuantizedLstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToInputWeights, QuantizedLstmInputParams::m_InputToOutputWeights, LstmInputParams::m_InputToOutputWeights, QLstmDescriptor::m_LayerNormEnabled, QuantizedLstmInputParams::m_OutputGateBias, LstmInputParams::m_OutputGateBias, LstmInputParams::m_OutputLayerNormWeights, 
Convolution2dDescriptor::m_PadBottom, DepthwiseConvolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, DepthwiseConvolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, DepthwiseConvolution2dDescriptor::m_PadTop, LstmDescriptor::m_PeepholeEnabled, QLstmDescriptor::m_PeepholeEnabled, LstmInputParams::m_ProjectionBias, QLstmDescriptor::m_ProjectionClip, LstmDescriptor::m_ProjectionEnabled, QLstmDescriptor::m_ProjectionEnabled, LstmInputParams::m_ProjectionWeights, QuantizedLstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToCellWeights, QuantizedLstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToForgetWeights, QuantizedLstmInputParams::m_RecurrentToInputWeights, LstmInputParams::m_RecurrentToInputWeights, QuantizedLstmInputParams::m_RecurrentToOutputWeights, LstmInputParams::m_RecurrentToOutputWeights, Convolution2dDescriptor::m_StrideX, DepthwiseConvolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, DepthwiseConvolution2dDescriptor::m_StrideY, FullyConnectedDescriptor::m_TransposeWeightMatrix, NHWC, QAsymmU8, QSymmS16, QSymmS8, and Signed32.
Referenced by TEST_SUITE().
void TopKSort(unsigned int k, unsigned int* indices, const float* values, unsigned int numElement)
Definition at line 24 of file DetectionPostProcess.cpp.
Referenced by DetectionPostProcess(), NonMaxSuppression(), and TEST_SUITE().
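The operation above reorders an index array so its first k entries point at the k largest values, which is what detection post-processing needs before non-max suppression. A sketch of the same idea using std::partial_sort (the actual implementation in DetectionPostProcess.cpp may use a different algorithm; `TopKSortSketch` is an illustrative name):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// After the call, indices[0..k) reference the k largest values in descending
// order; the remaining indices are left in unspecified order.
void TopKSortSketch(unsigned int k, std::vector<unsigned int>& indices,
                    const float* values)
{
    std::partial_sort(indices.begin(), indices.begin() + k, indices.end(),
                      [values](unsigned int a, unsigned int b)
                      { return values[a] > values[b]; });
}
```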
void TransposeConvolution2dImpl(const TransposeConvolution2dDescriptor& descriptor, const TensorShape& inputShape, Decoder<float>& inputDecoder, const TensorShape& outputShape, Encoder<float>& outputEncoder, const TensorShape& weightsShape, Decoder<float>& weightsDecoder, Decoder<float>* biasesDecoder)
Definition at line 15 of file TransposeConvolution2d.cpp.
References Decoder< IType >::DecodeTensor(), Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorShape::GetNumElements(), DataLayoutIndexed::GetWidthIndex(), TransposeConvolution2dDescriptor::m_BiasEnabled, TransposeConvolution2dDescriptor::m_DataLayout, TransposeConvolution2dDescriptor::m_PadLeft, TransposeConvolution2dDescriptor::m_PadTop, TransposeConvolution2dDescriptor::m_StrideX, TransposeConvolution2dDescriptor::m_StrideY, NHWC, and Encoder< IType >::Set().
Referenced by RefTransposeConvolution2dWorkload::ExecuteAsync().
bool armnn::TrueFunc(Optional<std::string&> reasonIfUnsupported, Params&&... params)
inline |
Definition at line 157 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline |
Definition at line 162 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline |
Definition at line 337 of file TypesUtils.hpp.
References TensorInfo::GetDataType(), GetDataTypeName(), and TensorInfo::GetShape().
Referenced by ParserFlatbuffersFixture::CheckTensors(), ParserFlatbuffersSerializeFixture::RunTest(), and ParserFlatbuffersFixture::RunTest().
inline |
Definition at line 147 of file ClWorkloadUtils.hpp.
References Exception::what().
Referenced by ClWorkloadFactory::AfterWorkloadsCreated(), and RunClFunction().
const BackendCapabilities cpuAccCapabilities("CpuAcc", { {"NonConstWeights", false}, {"AsyncExecution", false}, {"ProtectedContentAllocation", false}, {"ConstantTensorsAsInputs", true}, {"PreImportIOTensors", false}, {"ExternallyManagedMemory", true}, {"MultiAxisPacking", false}, {"SingleAxisPacking", true} })
Referenced by NeonBackend::GetCapabilities().
const BackendCapabilities cpuRefCapabilities("CpuRef", { {"NonConstWeights", true}, {"AsyncExecution", true}, {"ProtectedContentAllocation", false}, {"ConstantTensorsAsInputs", true}, {"PreImportIOTensors", true}, {"ExternallyManagedMemory", true}, {"MultiAxisPacking", false}, {"SingleAxisPacking", true} })
Referenced by RefBackend::GetCapabilities().
constexpr unsigned int EXPIRE_RATE = 3U
Controls the expire rate of the priority queue.
Definition at line 37 of file Types.hpp.
Referenced by Threadpool::TerminateThreadPool().
constexpr bool g_AggregateProfilingEventsByInference = true
Definition at line 37 of file Profiling.cpp.
constexpr std::size_t g_ProfilingEventCountHint = 1024
Definition at line 29 of file Profiling.cpp.
constexpr bool g_WriteProfilingEventSequence = true
Definition at line 32 of file Profiling.cpp.
constexpr bool g_WriteReportToStdOutOnProfilerDestruction = false
Definition at line 41 of file Profiling.cpp.
const BackendCapabilities gpuAccCapabilities("GpuAcc", { {"NonConstWeights", false}, {"AsyncExecution", false}, {"ProtectedContentAllocation", true}, {"ConstantTensorsAsInputs", true}, {"PreImportIOTensors", false}, {"ExternallyManagedMemory", true}, {"MultiAxisPacking", false}, {"SingleAxisPacking", true} })
Referenced by ClBackend::GetCapabilities().
constexpr unsigned int LOWEST_CAPTURE_PERIOD = 10000u
The lowest performance data capture interval we support is 10 milliseconds.
Definition at line 34 of file Types.hpp.
Referenced by TEST_SUITE().
constexpr unsigned int MaxNumOfTensorDimensions = 5U
Definition at line 31 of file Types.hpp.
Referenced by armnnTfLiteParser::ComputeWrappedIndex(), Concatenate(), CopyTensorContentsGeneric(), TensorShape::IsAtLeastOneDimensionSpecified(), TfLiteParserImpl::OutputShapeOfReshape(), PermutationVector::PermutationVector(), armnnUtils::Permuted(), Split(), Splitter(), TEST_SUITE(), armnnDeserializer::ToTensorInfo(), and armnnUtils::TransposeTensorShape().
const std::set<armnn::BackendCapability> oldCpuRefCapabilities
Definition at line 24 of file RefBackend.hpp.
const std::set<armnn::LayerType> paddingRequiredLayers
Definition at line 16 of file NeonTensorHandleFactory.hpp.
Referenced by NeonTensorHandleFactory::GetCapabilities().
thread_local IProfiler* tl_Profiler = nullptr
Definition at line 570 of file Profiling.cpp.
Referenced by ProfilerManager::GetProfiler().
constexpr size_t wordSize = sizeof(size_t) * 8
Definition at line 22 of file SingleAxisPriorityList.cpp.
Referenced by SingleAxisPriorityList::GetMemBlockStrategyType().