21.02
Copyright (c) 2021 ARM Limited and Contributors.
Namespaces | |
gatordmock | |
optimizations | |
profiling | |
stringUtils | |
test | |
timelinedecoder | |
utility | |
Functions | |
LayerSupportHandle | GetILayerSupportByBackendId (const armnn::BackendId &backend) |
Convenience function to retrieve the LayerSupportHandle for a backend. More... | |
constexpr char const * | GetComputeDeviceAsCString (Compute compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const std::vector< Compute > &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const std::set< Compute > &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const Compute &compute) |
Deprecated function that will be removed together with the Compute enum. More... | |
std::ostream & | operator<< (std::ostream &os, const BackendId &id) |
template<template< typename... > class TContainer, typename... TContainerTemplateArgs> | |
std::ostream & | operator<< (std::ostream &os, const TContainer< BackendId, TContainerTemplateArgs... > &ids) |
template<typename F > | |
void | ParseOptions (const std::vector< BackendOptions > &options, BackendId backend, F f) |
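A ParseOptions-style helper walks the list of BackendOptions and invokes the callable for each option belonging to the requested backend. The following is a minimal standalone sketch of that pattern; the `BackendOption`/`BackendOptions` structs here are simplified stand-ins (the real armnn types carry variant-typed option values), not the library's definitions.

```cpp
#include <string>
#include <vector>

// Simplified stand-ins, for illustration only: the real armnn::BackendOptions
// stores option values in a variant type rather than plain strings.
struct BackendOption  { std::string name; std::string value; };
struct BackendOptions { std::string backendId; std::vector<BackendOption> options; };

// Sketch of the ParseOptions pattern: visit every option in every group
// whose backend id matches, handing name/value pairs to the callback.
template <typename F>
void ParseOptions(const std::vector<BackendOptions>& options,
                  const std::string& backend, F f)
{
    for (const auto& group : options)
    {
        if (group.backendId == backend)
        {
            for (const auto& opt : group.options)
            {
                f(opt.name, opt.value);
            }
        }
    }
}
```

A caller would typically pass a lambda that switches on the option name and configures the backend accordingly.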
BackendRegistry & | BackendRegistryInstance () |
std::ostream & | operator<< (std::ostream &os, const BackendVersion &backendVersion) |
template<typename TensorShapeIt > | |
OriginsDescriptor | CreateMergerDescriptorForConcatenation (TensorShapeIt first, TensorShapeIt last, unsigned int concatenationDimension) |
template<typename TensorShapeIt > | |
OriginsDescriptor | CreateDescriptorForConcatenation (TensorShapeIt first, TensorShapeIt last, unsigned int concatenationDimension) |
Convenience template to create an OriginsDescriptor to use when creating a ConcatLayer for performing concatenation of a number of input tensors. More... | |
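The bookkeeping behind CreateDescriptorForConcatenation can be sketched in isolation: each input's view origin is all zeros except along the concatenation dimension, where it is offset by the accumulated extent of the inputs before it. This sketch uses plain shape vectors instead of TensorShape iterators and the `ConcatOrigins` name is hypothetical.

```cpp
#include <cstdint>
#include <vector>

// Compute per-input view origins for concatenation along concatDim.
// Each origin is zero in every dimension except the concat axis, which
// carries the running sum of the preceding inputs' extents on that axis.
std::vector<std::vector<uint32_t>>
ConcatOrigins(const std::vector<std::vector<uint32_t>>& shapes,
              unsigned int concatDim)
{
    std::vector<std::vector<uint32_t>> origins;
    uint32_t runningOffset = 0;
    for (const auto& shape : shapes)
    {
        std::vector<uint32_t> origin(shape.size(), 0); // all-zero origin...
        origin[concatDim] = runningOffset;             // ...except the concat axis
        origins.push_back(origin);
        runningOffset += shape[concatDim];
    }
    return origins;
}
```

Concatenating shapes {2,3} and {2,5} along dimension 1, for instance, places the second input's view at origin {0,3}.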
template<typename ExceptionType > | |
void | ConditionalThrow (bool condition, const std::string &message) |
template<typename ExceptionType > | |
void | ConditionalThrow (bool condition) |
template<typename ExceptionType , typename ComparedType > | |
void | ConditionalThrowIfNotEqual (const std::string &message, const ComparedType &leftHandSide, const ComparedType &rightHandSide) |
ComparedType must support: operator==(const ComparedType&) operator<<(ostream&, const ComparedType&) More... | |
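The operator requirements above exist because the helper both compares the two values and streams them into the exception message. A standalone sketch of how such a helper might be written (using std::runtime_error rather than an armnn exception type):

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Sketch of a ConditionalThrowIfNotEqual-style helper. ComparedType must
// support operator== (for the check) and operator<< (to format the message),
// exactly as the documentation above states.
template <typename ExceptionType, typename ComparedType>
void ConditionalThrowIfNotEqual(const std::string& message,
                                const ComparedType& lhs,
                                const ComparedType& rhs)
{
    if (!(lhs == rhs))
    {
        std::ostringstream os;
        os << message << " [" << lhs << " != " << rhs << "]";
        throw ExceptionType(os.str());
    }
}
```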
IOptimizedNetworkPtr | Optimize (const INetwork &network, const std::vector< BackendId > &backendPreferences, const IDeviceSpec &deviceSpec, const OptimizerOptions &options=OptimizerOptions(), Optional< std::vector< std::string > &> messages=EmptyOptional()) |
Create an optimized version of the network. More... | |
bool | IsActivationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsAdditionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsBatchNormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsBatchToSpaceNdSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConcatSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConstantSupported (const BackendId &backend, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvertFp16ToFp32Supported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvertFp32ToFp16Supported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsConvolution2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDebugSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDepthwiseConvolutionSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDequantizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsDivisionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsEqualSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFakeQuantizationSupported (const BackendId &backend, const TensorInfo &input, const FakeQuantizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFloorSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsFullyConnectedSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsGreaterSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsInputSupported (const BackendId &backend, const TensorInfo &input, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsL2NormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMaximumSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnSupported=nullptr, size_t reasonIfUnSupportedMaxLength=0) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMeanSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMemCopySupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMergeSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMergerSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMinimumSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsMultiplicationSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsNormalizationSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsOutputSupported (const BackendId &backend, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPadSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPermuteSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPreCompiledSupported (const BackendId &backend, const TensorInfo &input, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPreluSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsPooling2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsQuantizedLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &previousCellStateIn, const TensorInfo &previousOutputIn, const TensorInfo &cellStateOut, const TensorInfo &output, const QuantizedLstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsReduceSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsReshapeSupported (const BackendId &backend, const TensorInfo &input, const ReshapeDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsResizeBilinearSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsResizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsRsqrtSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSoftmaxSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSpaceToBatchNdSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSpaceToDepthSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSplitterSupported (const BackendId &backend, const TensorInfo &input, const ViewsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
bool | IsSplitterSupported (const BackendId &backend, const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, const ViewsDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsStackSupported (const BackendId &backend, const std::vector< const TensorInfo *> inputs, const TensorInfo &output, const StackDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsStridedSliceSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSubtractionSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsSwitchSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output0, const TensorInfo &output1, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
bool | IsTransposeConvolution2dSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, char *reasonIfUnsupported=nullptr, size_t reasonIfUnsupportedMaxLength=1024) |
Deprecated in favor of IBackend and ILayerSupport interfaces. More... | |
std::string | LevelToString (LogSeverity level) |
LogSeverity | StringToLogLevel (std::string level) |
void | SetLogFilter (LogSeverity level) |
void | SetAllLoggingSinks (bool standardOut, bool debugOut, bool coloured) |
constexpr LogSeverity | ConvertLogSeverity (BoostLogSeverityMapping severity) |
template<typename Arg , typename std::enable_if< IsMemorySource< Arg >::value >::type * = nullptr> | |
MemorySourceFlags | Combine (Arg sourceA, Arg sourceB) |
template<typename Arg , typename ... Args, typename std::enable_if< IsMemorySource< Arg >::value >::type * = nullptr> | |
MemorySourceFlags | Combine (Arg source, Args... rest) |
bool | CheckFlag (MemorySourceFlags flags, MemorySource source) |
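The Combine/CheckFlag pair treats MemorySource values as single-bit flags, so combining is bitwise OR and testing is bitwise AND. A self-contained sketch of that pattern; the enumerator names and values below are illustrative, not armnn's actual definitions:

```cpp
#include <cstdint>

// Illustrative single-bit flag enum (values are assumptions, not armnn's).
enum class MemorySource : uint32_t
{
    Malloc          = 1u << 0,
    DmaBuf          = 1u << 1,
    DmaBufProtected = 1u << 2,
};

using MemorySourceFlags = uint32_t;

// Combining flags is bitwise OR of the underlying bits.
MemorySourceFlags Combine(MemorySource a, MemorySource b)
{
    return static_cast<uint32_t>(a) | static_cast<uint32_t>(b);
}

// Testing a flag is bitwise AND against the combined mask.
bool CheckFlag(MemorySourceFlags flags, MemorySource source)
{
    return (flags & static_cast<uint32_t>(source)) != 0;
}
```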
template<typename T , class... Args> | |
Optional< T > | MakeOptional (Args &&... args) |
Utility template that constructs an object of type T in-place and wraps it inside an Optional<T> object. More... | |
const char * | GetLayerTypeAsCString (LayerType type) |
constexpr char const * | GetStatusAsCString (Status status) |
constexpr char const * | GetActivationFunctionAsCString (ActivationFunction activation) |
constexpr char const * | GetArgMinMaxFunctionAsCString (ArgMinMaxFunction function) |
constexpr char const * | GetComparisonOperationAsCString (ComparisonOperation operation) |
constexpr char const * | GetUnaryOperationAsCString (UnaryOperation operation) |
constexpr char const * | GetLogicalBinaryOperationAsCString (LogicalBinaryOperation operation) |
constexpr char const * | GetPoolingAlgorithmAsCString (PoolingAlgorithm pooling) |
constexpr char const * | GetOutputShapeRoundingAsCString (OutputShapeRounding rounding) |
constexpr char const * | GetPaddingMethodAsCString (PaddingMethod method) |
constexpr unsigned int | GetDataTypeSize (DataType dataType) |
template<unsigned N> | |
constexpr bool | StrEqual (const char *strA, const char(&strB)[N]) |
constexpr armnn::Compute | ParseComputeDevice (const char *str) |
Deprecated function that will be removed together with the Compute enum. More... | |
constexpr const char * | GetDataTypeName (DataType dataType) |
constexpr const char * | GetDataLayoutName (DataLayout dataLayout) |
constexpr const char * | GetNormalizationAlgorithmChannelAsCString (NormalizationAlgorithmChannel channel) |
constexpr const char * | GetNormalizationAlgorithmMethodAsCString (NormalizationAlgorithmMethod method) |
constexpr const char * | GetResizeMethodAsCString (ResizeMethod method) |
template<typename T > | |
constexpr bool | IsQuantizedType () |
constexpr bool | IsQuantized8BitType (DataType dataType) |
constexpr bool | IsQuantizedType (DataType dataType) |
std::ostream & | operator<< (std::ostream &os, Status stat) |
std::ostream & | operator<< (std::ostream &os, const armnn::TensorShape &shape) |
template<typename QuantizedType > | |
QuantizedType | Quantize (float value, float scale, int32_t offset) |
Quantize a floating point data type into an 8-bit data type. More... | |
template<typename QuantizedType > | |
float | Dequantize (QuantizedType value, float scale, int32_t offset) |
Dequantize an 8-bit data type into a floating point data type. More... | |
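The Quantize/Dequantize pair implements the usual affine quantization scheme: q = round(value / scale) + offset, clamped to the quantized type's range, and value ≈ scale * (q - offset) on the way back. A standalone sketch of that arithmetic (the clamping detail is an assumption about the implementation):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>

// Affine quantization: q = round(value / scale) + offset, clamped to the
// representable range of QuantizedType.
template <typename QuantizedType>
QuantizedType Quantize(float value, float scale, int32_t offset)
{
    int32_t q = static_cast<int32_t>(std::round(value / scale)) + offset;
    constexpr int32_t lo = std::numeric_limits<QuantizedType>::min();
    constexpr int32_t hi = std::numeric_limits<QuantizedType>::max();
    return static_cast<QuantizedType>(std::min(hi, std::max(lo, q)));
}

// Inverse mapping: value = scale * (q - offset).
template <typename QuantizedType>
float Dequantize(QuantizedType value, float scale, int32_t offset)
{
    return scale * static_cast<float>(static_cast<int32_t>(value) - offset);
}
```

With scale 0.1 and offset 0, the float 0.5 quantizes to 5 and dequantizes back to 0.5; values outside the range saturate at the type's limits.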
void | VerifyTensorInfoDataType (const armnn::TensorInfo &info, armnn::DataType dataType) |
template<typename ... Ts> | |
void | IgnoreUnused (Ts &&...) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Source >::value &&std::is_unsigned< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Source >::value &&std::is_integral< Source >::value &&std::is_signed< Dest >::value &&std::is_integral< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Source >::value &&std::is_floating_point< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Source >::value &&std::is_signed< Dest >::value &&std::is_integral< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Source >::value &&std::is_integral< Source >::value &&std::is_floating_point< Dest >::value, Dest > | numeric_cast (Source source) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_signed< Dest >::value &&std::is_integral< Dest >::value &&std::is_unsigned< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_floating_point< Dest >::value &&std::is_unsigned< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Dest >::value &&std::is_signed< Source >::value &&std::is_integral< Source >::value, Dest > | numeric_cast (Source sValue) |
template<typename Dest , typename Source > | |
std::enable_if_t< std::is_unsigned< Dest >::value &&std::is_floating_point< Source >::value, Dest > | numeric_cast (Source sValue) |
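The numeric_cast overload set above exists to convert between arithmetic types without silently truncating or wrapping when the value does not fit the destination. A sketch of the idea, showing just the unsigned-to-unsigned case from the family; the use of std::overflow_error is an assumption for illustration (the library reports failure through its own exception type):

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <type_traits>

// One member of the numeric_cast overload family: unsigned -> unsigned.
// The comparison is done in uintmax_t so neither operand is truncated
// before the range check.
template <typename Dest, typename Source>
std::enable_if_t<std::is_unsigned<Source>::value && std::is_unsigned<Dest>::value, Dest>
numeric_cast(Source source)
{
    if (static_cast<std::uintmax_t>(source) >
        static_cast<std::uintmax_t>(std::numeric_limits<Dest>::max()))
    {
        throw std::overflow_error("numeric_cast: value out of range for destination type");
    }
    return static_cast<Dest>(source);
}
```

The other overloads follow the same shape, adding the lower-bound and sign checks each source/destination combination needs.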
template<typename DestType , typename SourceType > | |
DestType | PolymorphicDowncast (SourceType value) |
Polymorphic downcast for built-in pointers only. More... | |
template<typename DestType , typename SourceType > | |
auto | PolymorphicPointerDowncast (const SourceType &value) |
Polymorphic downcast for shared pointers and built-in pointers. More... | |
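A checked polymorphic downcast of this kind is typically a static_cast whose correctness is verified with dynamic_cast in debug builds, so release builds pay nothing extra. A minimal sketch of that pattern for raw pointers (the Base/Derived types are illustrative):

```cpp
#include <cassert>

// Sketch of a PolymorphicDowncast-style helper for raw pointers: the
// dynamic_cast assertion catches a downcast to the wrong derived type in
// debug builds, while the returned static_cast is all that remains when
// NDEBUG is defined.
template <typename DestType, typename SourceType>
DestType PolymorphicDowncast(SourceType value)
{
    assert(dynamic_cast<DestType>(value) == static_cast<DestType>(value));
    return static_cast<DestType>(value);
}

// Illustrative polymorphic hierarchy for the usage below.
struct Base    { virtual ~Base() = default; };
struct Derived : Base { int tag = 42; };
```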
std::chrono::high_resolution_clock::time_point | GetTimeNow () |
std::chrono::duration< double, std::milli > | GetTimeDuration (std::chrono::high_resolution_clock::time_point start_time) |
template<typename Function , typename Iterator > | |
constexpr TransformIterator< Function, Iterator > | MakeTransformIterator (Iterator i, Function f) |
void | ConfigureLogging (bool printToStandardOutput, bool printToDebugOutput, LogSeverity severity) |
Configures the logging behaviour of the ARMNN library. More... | |
bool | NeonDetected () |
const std::string | GetVersion () |
template<typename T > | |
bool | CompatibleTypes (DataType) |
template<> | |
bool | CompatibleTypes< float > (DataType dataType) |
template<> | |
bool | CompatibleTypes< Half > (DataType dataType) |
template<> | |
bool | CompatibleTypes< BFloat16 > (DataType dataType) |
template<> | |
bool | CompatibleTypes< uint8_t > (DataType dataType) |
template<> | |
bool | CompatibleTypes< int8_t > (DataType dataType) |
template<> | |
bool | CompatibleTypes< int16_t > (DataType dataType) |
template<> | |
bool | CompatibleTypes< int32_t > (DataType dataType) |
void | swap (OriginsDescriptor &first, OriginsDescriptor &second) |
void | swap (ViewsDescriptor &first, ViewsDescriptor &second) |
template<typename T > | |
constexpr LayerType | LayerEnumOf (const T *=nullptr) |
template<> | |
constexpr LayerType | LayerEnumOf (const ActivationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const AdditionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ArgMinMaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const BatchNormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const BatchToSpaceNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ComparisonLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConcatLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConstantLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertBf16ToFp32Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp16ToFp32Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp32ToBf16Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ConvertFp32ToFp16Layer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Convolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DebugLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DepthToSpaceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DepthwiseConvolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DequantizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DetectionPostProcessLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const DivisionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ElementwiseUnaryLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FakeQuantizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FillLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FloorLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const FullyConnectedLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const GatherLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const InputLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const InstanceNormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const L2NormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LogicalBinaryLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LogSoftmaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const LstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MapLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MaximumLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MeanLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MemCopyLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MemImportLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MergeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MinimumLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const MultiplicationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const NormalizationLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const OutputLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PadLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PermuteLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const Pooling2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PreCompiledLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const PreluLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QuantizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const QuantizedLstmLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const RankLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ReduceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ReshapeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const ResizeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SliceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SoftmaxLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SpaceToBatchNdLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SpaceToDepthLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SplitterLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StackLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StandInLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const StridedSliceLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SubtractionLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const SwitchLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const TransposeLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const TransposeConvolution2dLayer *) |
template<> | |
constexpr LayerType | LayerEnumOf (const UnmapLayer *) |
bool | CheckTensorDataTypesEqual (const TensorInfo &input0, const TensorInfo &input1) |
bool | IsArgMinMaxSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsConcatSupported (const BackendId &backend, std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsDetectionPostProcessSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const DetectionPostProcessDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsGatherSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsGatherSupported (const BackendId &backend, const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const GatherDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsMemImportSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsMergerSupported (const BackendId &backend, std::vector< const TensorInfo *> inputs, const TensorInfo &output, const OriginsDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsQuantizeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsQLstmSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &previousOutputIn, const TensorInfo &previousCellStateIn, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
bool | IsReshapeSupported (const BackendId &backend, const TensorInfo &input, const TensorInfo &output, const ReshapeDescriptor &descriptor, char *reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength) |
template<typename T , typename V > | |
void | SetValueChecked (Optional< T &> optionalRef, V &&val) |
template<typename Float16Func , typename Float32Func , typename Uint8Func , typename Int32Func , typename BooleanFunc , typename ... Params> | |
bool | IsSupportedForDataTypeGeneric (Optional< std::string &> reasonIfUnsupported, DataType dataType, Float16Func float16FuncPtr, Float32Func float32FuncPtr, Uint8Func uint8FuncPtr, Int32Func int32FuncPtr, BooleanFunc booleanFuncPtr, Params &&... params) |
template<typename ... Params> | |
bool | TrueFunc (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFunc (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncU8 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseFuncI32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseInputFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseInputFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseOutputFuncF32 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<typename ... Params> | |
bool | FalseOutputFuncF16 (Optional< std::string &> reasonIfUnsupported, Params &&... params) |
template<LogSeverity Level> | |
void | SetLoggingSinks (bool standardOut, bool debugOut, bool coloured) |
void | ReportError (const std::string &errorMessage, Optional< std::vector< std::string > &> errorMessages) |
void | ReportWarning (const std::string &warningMessage, Optional< std::vector< std::string > &> warningMessages) |
OptimizationResult | ReturnWithError (OptimizationResult res, const Layer *layer, const BackendSettings &backendSettings, Optional< std::vector< std::string > &> errMessages) |
bool | CheckScaleSetOnQuantizedType (Layer *layer, Optional< std::vector< std::string > &> errMessages) |
template<typename LayerT > | |
LayerT * | ConvertBf16ToFp32Weight (Layer *l) |
OptimizationResult | AttemptBackendAssignment (BackendSettings &backendSettings, Graph &graph, Layer *layer, BackendId backend, DataType dataTypeIn, DataType dataTypeOut, const std::vector< BackendId > &availablePreferredBackends, std::string &reasonIfUnsupported, Optional< std::vector< std::string > &> errMessages) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, Graph::Iterator &firstLayer, Graph::Iterator &lastLayer, Optional< std::vector< std::string > &> errMessages) |
OptimizationResult | AssignBackends (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, SubgraphView &subgraph, Optional< std::vector< std::string > &> errMessages) |
BackendsMap | CreateSupportedBackends (TensorHandleFactoryRegistry &handleFactoryRegistry, BackendSettings &backendSettings) |
OptimizationResult | ApplyBackendOptimizations (OptimizedNetworkImpl *optNetObjPtr, BackendSettings &backendSettings, BackendsMap &backends, const ModelOptions &modelOptions, Optional< std::vector< std::string > &> errMessages) |
bool | RequiresCopy (ITensorHandleFactory::FactoryId src, ITensorHandleFactory::FactoryId dst, TensorHandleFactoryRegistry &registry) |
ITensorHandleFactory::FactoryId | CalculateSlotOptionForInput (BackendsMap &backends, OutputSlot &slot, TensorHandleFactoryRegistry &registry) |
ITensorHandleFactory::FactoryId | CalculateSlotOptionForOutput (BackendsMap &backends, OutputSlot &slot, TensorHandleFactoryRegistry &registry) |
ITensorHandleFactory::FactoryId | CalculateSlotOption (BackendsMap &backends, OutputSlot &outputSlot, TensorHandleFactoryRegistry &registry) |
EdgeStrategy | CalculateEdgeStrategy (BackendsMap &backends, ITensorHandleFactory::FactoryId srcFactoryId, const Layer &layer, const Layer &connectedLayer, TensorHandleFactoryRegistry &registry, bool importEnabled) |
OptimizationResult | SelectTensorHandleStrategy (Graph &optGraph, BackendsMap &backends, TensorHandleFactoryRegistry &registry, bool importEnabled, Optional< std::vector< std::string > &> errMessages) |
ConstTensor | CreateQuantizedConst (const ConstTensor &tensor, std::vector< uint8_t > &backing) |
template<typename srcType > | |
void | QuantizeConstant (const srcType *src, uint8_t *dst, size_t numElements, float &scale, int &offset) |
template<typename LayerContainer > | |
void | VisitLayers (const LayerContainer &layerContainer, ILayerVisitor &visitor) |
template<typename LayerContainer > | |
void | ApplyStrategyToLayers (const LayerContainer &layerContainer, IStrategy &strategy) |
std::vector< ConvertBf16ToFp32Layer * > | InsertConvertBf16ToFp32LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp32ToBf16Layer * > | InsertConvertFp32ToBf16LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp16ToFp32Layer * > | InsertConvertFp16ToFp32LayersBefore (Graph &graph, Layer &layer, bool expectCorrectInputType) |
std::vector< ConvertFp32ToBf16Layer * > | InsertConvertFp32ToBf16LayersAfter (Graph &graph, Layer &layer) |
std::vector< ConvertFp32ToFp16Layer * > | InsertConvertFp32ToFp16LayersAfter (Graph &graph, Layer &layer) |
std::vector< DebugLayer * > | InsertDebugLayerAfter (Graph &graph, Layer &layer) |
template<typename T > | |
void | Append (Optimizer::Optimizations &optimizations, T &&optimization) |
template<typename Front , typename... Others> | |
void | Append (Optimizer::Optimizations &optimizations, Front &&front, Others &&... others) |
template<typename... Args> | |
Optimizer::Optimizations | MakeOptimizations (Args &&... args) |
Measurement | FindMeasurement (const std::string &name, const Event *event) |
std::vector< Measurement > | FindKernelMeasurements (const Event *event) |
const Event * | GetEventPtr (const Event *ptr) |
const Event * | GetEventPtr (const std::unique_ptr< Event > &ptr) |
int | CalcLevel (const Event *eventPtr) |
void | ExtractJsonObjects (unsigned int inferenceIndex, const Event *parentEvent, JsonChildObject &parentObject, std::map< const Event *, std::vector< const Event *>> descendantsMap) |
template<typename Delegate > | |
void | ForEachLayerInput (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo, Delegate function) |
template<typename Delegate > | |
void | ForEachLayerOutput (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo, Delegate function) |
void | AssignSplitId (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo) |
bool | IsReadyForSplitAssignment (LayerSelectionInfo::LayerInfoContainer &layerInfos, LayerSelectionInfo &layerInfo) |
BOOST_AUTO_TEST_CASE (CheckConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckNamedConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckDepthwiseConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedDepthwiseConvolution2dLayer) | |
BOOST_AUTO_TEST_CASE (CheckDepthwiseConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckNamedDepthwiseConvolution2dLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckFullyConnectedLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedFullyConnectedLayer) | |
BOOST_AUTO_TEST_CASE (CheckFullyConnectedLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckNamedFullyConnectedLayerWithBiases) | |
BOOST_AUTO_TEST_CASE (CheckBatchNormalizationLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedBatchNormalizationLayer) | |
BOOST_AUTO_TEST_CASE (CheckConstLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedConstLayer) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerPeephole) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerPeepholeCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerPeephole) | |
BOOST_AUTO_TEST_CASE (CheckLstmLayerProjection) | |
BOOST_AUTO_TEST_CASE (CheckNamedLstmLayerProjection) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckNamedQLstmLayerBasic) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgDisabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgDisabledPeepholeEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgEnabledPeepholeEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerProjectionEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQLstmLayerCifgDisabledLayerNormEnabled) | |
BOOST_AUTO_TEST_CASE (CheckQuantizedLstmLayer) | |
BOOST_AUTO_TEST_CASE (CheckNamedQuantizedLstmLayer) | |
template<typename T > | |
std::vector< T > | GetVector (unsigned int size, float initial, float increment) |
template<typename LayerTest , DataType ArmnnType> | |
INetworkPtr | CreatNetwork (ActivationDescriptor activationDescriptor, bool preventFusing, float scale, int32_t offset) |
template<typename LayerTest , DataType ArmnnType, typename LayerType = typename LayerTest::LayerType, typename T = ResolveType<ArmnnType>> | |
void | FuseActivationIntoPreviousLayerTest (ActivationDescriptor activationDescriptor, float tolerance, Compute backendId, float scale=1.f, int32_t offset=0) |
template<typename LayerTest , DataType ArmnnType, typename LayerType = typename LayerTest::LayerType, typename T = ResolveType<ArmnnType>> | |
bool | FuseActivationSimpleTest (ActivationDescriptor activationDescriptor, Compute backendId, float scale=1.f, int32_t offset=0) |
size_t | GetProfilerEventSequenceSize (armnn::IProfiler *profiler) |
void | VisitLayersTopologically (const INetwork *inputNetwork, IStrategy &visitor) |
TensorInfo | GetInputTensorInfo (const INetwork *network) |
TensorInfo | GetInputTensorInfo (const NetworkImpl *network) |
void | TestNetwork (INetwork *network, const TensorShape inShape, const TensorShape outShape) |
void | TestNetwork (INetwork *network, const TensorShape shape) |
BOOST_AUTO_TEST_CASE (QuantizeAddition) | |
INetworkPtr | CreateNetworkWithActivationLayer (const ActivationDescriptor &descriptor, const TensorShape &shape) |
INetworkPtr | CreateNetworkWithArgMinMaxLayer (const ArgMinMaxDescriptor &descriptor, const TensorShape &shape) |
INetworkPtr | CreateNetworkWithInputOutputLayers () |
BOOST_AUTO_TEST_CASE (InputOutputLayerDynamicQuant) | |
BOOST_AUTO_TEST_CASE (QuantizeAbsActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeArgMax) | |
BOOST_AUTO_TEST_CASE (QuantizeLinearActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeReLuActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeSoftReLuActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeBoundedReluActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeTanHActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeLeakyReLuActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeELuActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeHardSwishActivation) | |
BOOST_AUTO_TEST_CASE (QuantizeBatchNorm) | |
BOOST_AUTO_TEST_CASE (QuantizeDepthToSpace) | |
BOOST_AUTO_TEST_CASE (OverrideInputRangeEmptyNetwork) | |
BOOST_AUTO_TEST_CASE (OverrideInputRangeNoInputLayers) | |
BOOST_AUTO_TEST_CASE (OverrideInputRangeInputLayers) | |
INetworkPtr | CreateNetworkWithFullyConnectedLayer (const bool biasEnabled, const TensorShape &inputShape, const TensorShape &outputShape) |
void | ValidateFullyConnectedLayer (const bool biasEnabled) |
BOOST_AUTO_TEST_CASE (QuantizeFill) | |
BOOST_AUTO_TEST_CASE (QuantizeFullyConnected) | |
BOOST_AUTO_TEST_CASE (QuantizeFullyConnectedBiasEnabled) | |
void | TestQuantizeConvolution2d (bool useBiases) |
BOOST_AUTO_TEST_CASE (QuantizeConvolution2d) | |
BOOST_AUTO_TEST_CASE (QuantizeConvolution2dWithBiases) | |
void | TestQuantizeDepthwiseConvolution2d (bool useBiases) |
BOOST_AUTO_TEST_CASE (QuantizeDepthwiseConvolution2d) | |
BOOST_AUTO_TEST_CASE (QuantizeDepthwiseConvolution2dWithBiases) | |
BOOST_AUTO_TEST_CASE (QuantizeInstanceNormalization) | |
BOOST_AUTO_TEST_CASE (QuantizeLogSoftmax) | |
INetworkPtr | CreateNetworkWithSoftmaxLayer (const SoftmaxDescriptor &descriptor, const TensorShape &shape) |
BOOST_AUTO_TEST_CASE (QuantizeSoftmax) | |
BOOST_AUTO_TEST_CASE (QuantizeStandIn) | |
IConnectableLayer * | CreateStartOfLeakyReluNetwork (INetwork *network, const TensorInfo &info) |
void | CompleteLeakyReluNetwork (INetwork *network, IConnectableLayer *activation, IConnectableLayer *layerUnderTest, const TensorInfo &info) |
BOOST_AUTO_TEST_CASE (QuantizePermute) | |
BOOST_AUTO_TEST_CASE (QuantizeSpaceToBatch) | |
BOOST_AUTO_TEST_CASE (QuantizeSpaceToDepth) | |
BOOST_AUTO_TEST_CASE (QuantizePooling2d) | |
BOOST_AUTO_TEST_CASE (QuantizeConstant) | |
BOOST_AUTO_TEST_CASE (QuantizeArgMinMax) | |
BOOST_AUTO_TEST_CASE (QuantizeComparison) | |
BOOST_AUTO_TEST_CASE (QuantizeConcat) | |
BOOST_AUTO_TEST_CASE (QuantizeReshape) | |
BOOST_AUTO_TEST_CASE (QuantizeSplitter) | |
BOOST_AUTO_TEST_CASE (QuantizeResize) | |
BOOST_AUTO_TEST_CASE (QuantizeStridedSlice) | |
BOOST_AUTO_TEST_CASE (QuantizeBatchToSpace) | |
BOOST_AUTO_TEST_CASE (QuantizePrelu) | |
void | TestQuantizeTransposeConvolution2d (bool useBiases) |
BOOST_AUTO_TEST_CASE (QuantizeTransposeConvolution2d) | |
BOOST_AUTO_TEST_CASE (QuantizeTransposeConvolution2dWithBiases) | |
BOOST_AUTO_TEST_CASE (QuantizeStack) | |
BOOST_AUTO_TEST_CASE (QuantizeSlice) | |
std::vector< uint8_t > | SetupQuantize (float value) |
BOOST_AUTO_TEST_CASE (QuantizeInf) | |
BOOST_AUTO_TEST_CASE (QuantizeNegativeInf) | |
void | PreserveTypeTestImpl (const DataType &dataType) |
BOOST_AUTO_TEST_CASE (PreserveTypeFloat32) | |
BOOST_AUTO_TEST_CASE (PreserveTypeQAsymmU8) | |
BOOST_AUTO_TEST_CASE (PreserveTypeQsymm8) | |
BOOST_AUTO_TEST_CASE (PreserveTypeQsymm16) | |
BOOST_AUTO_TEST_CASE (TestConnectionPreservationAfterDynamicQuant) | |
void | RuntimeLoadedNetworksReserve (armnn::RuntimeImpl *runtime) |
std::ostream & | boost_test_print_type (std::ostream &ostr, const TensorInfo &right) |
std::ostream & | boost_test_print_type (std::ostream &ostr, const TensorShape &shape) |
BOOST_AUTO_TEST_CASE (CheckInputLayerVisitorBindingIdAndName) | |
BOOST_AUTO_TEST_CASE (CheckInputLayerVisitorBindingIdAndNameNull) | |
BOOST_AUTO_TEST_CASE (CheckOutputLayerVisitorBindingIdAndName) | |
BOOST_AUTO_TEST_CASE (CheckOutputLayerVisitorBindingIdAndNameNull) | |
void | CheckLayerBindingId (LayerBindingId visitorId, LayerBindingId id) |
Graph & | GetGraphForTesting (IOptimizedNetwork *optNet) |
ModelOptions & | GetModelOptionsForTesting (IOptimizedNetwork *optNet) |
profiling::ProfilingService & | GetProfilingService (armnn::RuntimeImpl *runtime) |
std::ostream & | operator<< (std::ostream &os, const BFloat16 &b) |
void | ReportUntouchedLayers (OptimizationViews &optimizationViews, std::map< LayerGuid, Layer *> untouched) |
template<typename LayerType > | |
LayerType * | FuseLayerWithoutParameters (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseLayerWithParameters (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
template<typename LayerType > | |
LayerType * | FuseLayerWithWeightsAndBiases (OptimizationViews &optimizationViews, LayerType *baseLayer, ActivationLayer *activationLayer, ActivationDescriptor &activationDesc, std::string name) |
arm_compute::NormalizationLayerInfo | CreateAclNormalizationLayerInfoForL2Normalization (const armnn::TensorInfo &tensorInfo, armnn::DataLayout dataLayout) |
arm_compute::ActivationLayerInfo::ActivationFunction | ConvertActivationFunctionToAclActivationFunction (ActivationFunction armnnFunction) |
arm_compute::ActivationLayerInfo | ConvertActivationDescriptorToAclActivationLayerInfo (const ActivationDescriptor &actDesc) |
arm_compute::ActivationLayerInfo | ConvertActivationDescriptorToAclActivationLayerInfo (const ActivationDescriptor *activationDescPtr) |
arm_compute::ActivationLayerInfo | ConvertAdditionalInfoToAclActivationLayerInfo (const QueueDescriptor &queueDescriptor) |
arm_compute::ComparisonOperation | ConvertComparisonOperationToAcl (const ComparisonDescriptor &descriptor) |
arm_compute::PoolingType | ConvertPoolingAlgorithmToAclPoolingType (PoolingAlgorithm poolingAlgorithm) |
arm_compute::DimensionRoundingType | ConvertOutputShapeRoundingToAclDimensionRoundingType (OutputShapeRounding rounding) |
arm_compute::NormType | ConvertNormalizationAlgorithmChannelToAclNormType (NormalizationAlgorithmChannel channelType) |
arm_compute::FullyConnectedLayerInfo | ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo (const FullyConnectedDescriptor &fullyConnectedDesc, const ActivationDescriptor *activationDesc) |
arm_compute::FullyConnectedLayerInfo | ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo (const FullyConnectedDescriptor &fullyConnectedDesc, arm_compute::ActivationLayerInfo activationLayerInfo) |
arm_compute::InterpolationPolicy | ConvertResizeMethodToAclInterpolationPolicy (ResizeMethod resizeMethod) |
template<typename T > | |
T | ComputeSoftmaxAclAxis (const SoftmaxDescriptor &softmaxDesc, const armnn::TensorInfo &tensor) |
std::set< unsigned int > | ComputeSplitAxis (const armnn::SplitterDescriptor &desc, const TensorShape &input) |
int | ComputeAclAxis (const int &armnnAxis, const armnn::TensorInfo &tensor) |
Function to convert ArmNN axis (left to right) to ACL axis (right to left) ranging from [-rank, rank) More... | |
unsigned int | ComputePositiveAxis (const int &axis, const armnn::TensorInfo &tensor) |
Function to convert axis to its positive equivalent value. More... | |
arm_compute::ReductionOperation | ConvertReductionOperationToAcl (const ReduceDescriptor &descriptor) |
TensorShape | GetUnpaddedTensorStrides (const TensorInfo &tensorInfo) |
armnn::Optional< armnn::DataType > | GetBiasTypeFromWeightsType (armnn::Optional< armnn::DataType > weightsType) |
template<typename F > | |
bool | CheckSupportRule (F rule, Optional< std::string &> reasonIfUnsupported, const char *reason) |
template<typename T > | |
bool | AllTypesAreEqualImpl (T) |
template<typename T , typename... Rest> | |
bool | AllTypesAreEqualImpl (T t1, T t2, Rest... rest) |
constexpr const char * | MockImportBackendId () |
constexpr const char * | MockBackendId () |
DataType | GetBiasDataType (DataType inputDataType) |
armnn::ConstTensor | PermuteTensor (const ConstCpuTensorHandle *tensor, const PermutationVector &permutationVector, void *permuteBuffer) |
void | ReshapeWeightsForAcl (TensorInfo &weightInfo, DataLayout dataLayout) |
template<typename DataType > | |
ConstTensor | ReorderWeightChannelsForAcl (const ConstTensor &weightHandle, DataLayout dataLayout, void *permuteBuffer) |
TensorInfo | ConvertWeightTensorInfoFromArmnnToAcl (const TensorInfo &weightInfo, DataLayout dataLayout) |
armnn::ConstTensor | ConvertWeightTensorFromArmnnToAcl (const ConstCpuTensorHandle *weightTensor, DataLayout dataLayout, void *permuteBuffer) |
int32_t | ConvertMaskToACLFormat (int32_t mask, int32_t numDim) |
template<typename CopyFunc > | |
void | CopyTensorContentsGeneric (const ITensorHandle *srcTensor, ITensorHandle *dstTensor, CopyFunc copy) |
template<typename SrcTensorHandleType , typename DstTensorHandleType , typename DescriptorType > | |
void | GatherTensorHandlePairs (const DescriptorType &descriptor, std::vector< std::pair< SrcTensorHandleType *, DstTensorHandleType *>> &tensorHandlePairs) |
std::string | LowerString (std::string value) |
TuningLevel | ParseTuningLevel (const BackendOptions::Var &value, TuningLevel defaultValue) |
bool | ParseBoolean (const BackendOptions::Var &value, bool defaultValue) |
std::string | ParseFile (const BackendOptions::Var &value, std::string defaultValue) |
void | ConfigureTuner (arm_compute::CLTuner &tuner, TuningLevel level) |
constexpr const char * | ClBackendId () |
flatbuffers::Offset< ClContext > | CreateClContext (flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset< flatbuffers::Vector< flatbuffers::Offset< armnn::Program >>> programs=0) |
flatbuffers::Offset< ClContext > | CreateClContextDirect (flatbuffers::FlatBufferBuilder &_fbb, const std::vector< flatbuffers::Offset< armnn::Program >> *programs=nullptr) |
flatbuffers::Offset< Program > | CreateProgram (flatbuffers::FlatBufferBuilder &_fbb, flatbuffers::Offset< flatbuffers::String > name=0, flatbuffers::Offset< flatbuffers::Vector< uint8_t >> binary=0) |
flatbuffers::Offset< Program > | CreateProgramDirect (flatbuffers::FlatBufferBuilder &_fbb, const char *name=nullptr, const std::vector< uint8_t > *binary=nullptr) |
const armnn::ClContext * | GetClContext (const void *buf) |
const armnn::ClContext * | GetSizePrefixedClContext (const void *buf) |
const char * | ClContextIdentifier () |
bool | ClContextBufferHasIdentifier (const void *buf) |
bool | VerifyClContextBuffer (flatbuffers::Verifier &verifier) |
bool | VerifySizePrefixedClContextBuffer (flatbuffers::Verifier &verifier) |
const char * | ClContextExtension () |
void | FinishClContextBuffer (flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset< armnn::ClContext > root) |
void | FinishSizePrefixedClContextBuffer (flatbuffers::FlatBufferBuilder &fbb, flatbuffers::Offset< armnn::ClContext > root) |
constexpr const char * | ClTensorHandleFactoryId () |
arm_compute::Status | ClAbsWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClActivationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor) |
arm_compute::Status | ClAdditionValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClArgMinMaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor) |
arm_compute::Status | ClBatchNormalizationValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &desc, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClBatchToSpaceNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &desc) |
arm_compute::Status | ClComparisonWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ComparisonDescriptor &descriptor) |
arm_compute::Status | ClConcatWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const OriginsDescriptor &descriptor) |
arm_compute::Status | ClConstantWorkloadValidate (const TensorInfo &output) |
arm_compute::Status | ClConvertFp16ToFp32WorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClConvertFp32ToFp16WorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClDepthToSpaceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthToSpaceDescriptor &desc) |
arm_compute::Status | ClDepthwiseConvolutionWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClDequantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClDivisionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClExpWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClFloorWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClFullyConnectedWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClGatherWorkloadValidate (const TensorInfo &input, const TensorInfo &indices, const TensorInfo &output, const GatherDescriptor &descriptor) |
arm_compute::Status | ClInstanceNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const InstanceNormalizationDescriptor &descriptor) |
arm_compute::Status | ClL2NormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor) |
arm_compute::Status | ClLogicalAndWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClLogicalNotWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClLogicalOrWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClLogSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const LogSoftmaxDescriptor &descriptor) |
arm_compute::Status | ClLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | ClMaximumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClMeanValidate (const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &desc) |
arm_compute::Status | ClMinimumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | ClMultiplicationWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClNegWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor) |
arm_compute::Status | ClPadValidate (const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor) |
arm_compute::Status | ClPermuteWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor) |
arm_compute::Status | ClPooling2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor) |
arm_compute::Status | ClPreluWorkloadValidate (const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output) |
arm_compute::Status | ClQLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo &paramsInfo) |
arm_compute::Status | ClQuantizedLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &previousCellStateIn, const TensorInfo &previousOutputIn, const TensorInfo &cellStateOut, const TensorInfo &output, const QuantizedLstmInputParamsInfo &paramsInfo) |
arm_compute::Status | ClQuantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClReduceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &desc) |
arm_compute::Status | ClReshapeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClResizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor) |
arm_compute::Status | ClRsqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | ClSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SliceDescriptor &descriptor) |
arm_compute::Status | ClSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor) |
arm_compute::Status | ClSpaceToBatchNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor) |
arm_compute::Status | ClSpaceToDepthWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &desc) |
arm_compute::Status | ClSplitterWorkloadValidate (const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, unsigned int splitAxis) |
arm_compute::Status | ClStackWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const StackDescriptor &descriptor) |
arm_compute::Status | ClStridedSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor) |
arm_compute::Status | ClSubtractionValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | ClTransposeConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases) |
arm_compute::Status | ClTransposeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeDescriptor &descriptor) |
template<typename T > | |
void | CopyArmComputeClTensorData (arm_compute::CLTensor &dstTensor, const T *srcData) |
auto | SetClStridedSliceData (const std::vector< int > &m_begin, const std::vector< int > &m_end, const std::vector< int > &m_stride) |
auto | SetClSliceData (const std::vector< unsigned int > &m_begin, const std::vector< unsigned int > &m_size) |
void | InitializeArmComputeClTensorData (arm_compute::CLTensor &clTensor, const ConstCpuTensorHandle *handle) |
RuntimeException | WrapClError (const cl::Error &clError, const CheckLocation &location) |
void | RunClFunction (arm_compute::IFunction &function, const CheckLocation &location) |
template<typename DataType , typename PayloadType > | |
DataType * | GetOutputTensorData (unsigned int idx, const PayloadType &data) |
constexpr const char * | NeonBackendId () |
constexpr const char * | NeonTensorHandleFactoryId () |
arm_compute::Status | NeonAbsWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonActivationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ActivationDescriptor &descriptor) |
arm_compute::Status | NeonAdditionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonArgMinMaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ArgMinMaxDescriptor &descriptor) |
arm_compute::Status | NeonBatchNormalizationValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &mean, const TensorInfo &var, const TensorInfo &beta, const TensorInfo &gamma, const BatchNormalizationDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonBatchToSpaceNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const BatchToSpaceNdDescriptor &desc) |
arm_compute::Status | NeonComparisonWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ComparisonDescriptor &descriptor) |
arm_compute::Status | NeonConcatWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const OriginsDescriptor &descriptor) |
arm_compute::Status | NeonConstantWorkloadValidate (const TensorInfo &output) |
arm_compute::Status | NeonConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Convolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, bool isFastMathEnabled, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonDepthToSpaceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthToSpaceDescriptor &descriptor) |
arm_compute::Status | NeonDepthwiseConvolutionWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const DepthwiseConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonDequantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::DetectionPostProcessLayerInfo | MakeInfo (const DetectionPostProcessDescriptor &desc) |
arm_compute::Status | NeonDetectionPostProcessValidate (const TensorInfo &boxEncodings, const TensorInfo &scores, const TensorInfo &anchors, const TensorInfo &detectionBoxes, const TensorInfo &detectionClasses, const TensorInfo &detectionScores, const TensorInfo &numDetections, const DetectionPostProcessDescriptor &desc) |
arm_compute::Status | NeonDivisionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonExpWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonFullyConnectedWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TensorInfo &weights, const TensorInfo &biases, const FullyConnectedDescriptor &descriptor, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonGatherWorkloadValidate (const TensorInfo &input, const TensorInfo &indices, const TensorInfo &output, const GatherDescriptor &descriptor) |
arm_compute::Status | NeonInstanceNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const InstanceNormalizationDescriptor &descriptor) |
arm_compute::Status | NeonL2NormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const L2NormalizationDescriptor &descriptor) |
arm_compute::Status | NeonLogicalAndWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonLogicalNotWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonLogicalOrWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonLogSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const LogSoftmaxDescriptor &descriptor) |
arm_compute::Status | NeonLstmFloatWorkloadValidate (const TensorInfo &input, const TensorInfo &outputStateIn, const TensorInfo &cellStateIn, const TensorInfo &scratchBuffer, const TensorInfo &outputStateOut, const TensorInfo &cellStateOut, const TensorInfo &output, const LstmDescriptor &descriptor, const LstmInputParamsInfo ¶msInfo) |
arm_compute::Status | NeonMaximumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
arm_compute::Status | NeonMeanWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const MeanDescriptor &desc) |
arm_compute::Status | NeonMinimumWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output) |
Validates the inputs and output. More... | |
arm_compute::Status | NeonMultiplicationWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonNegWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonNormalizationWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const NormalizationDescriptor &descriptor) |
arm_compute::Status | NeonPadWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PadDescriptor &descriptor) |
arm_compute::Status | NeonPermuteWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const PermuteDescriptor &descriptor) |
arm_compute::Status | NeonPooling2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const Pooling2dDescriptor &descriptor) |
arm_compute::Status | NeonPreluWorkloadValidate (const TensorInfo &input, const TensorInfo &alpha, const TensorInfo &output) |
arm_compute::Status | NeonQLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const TensorInfo &output, const QLstmDescriptor &descriptor, const LstmInputParamsInfo ¶msInfo) |
arm_compute::Status | NeonQuantizedLstmWorkloadValidate (const TensorInfo &input, const TensorInfo &cellStateIn, const TensorInfo &outputStateIn, const TensorInfo &cellStateOut, const TensorInfo &outputStateOut, const QuantizedLstmInputParamsInfo ¶msInfo) |
arm_compute::Status | NeonQuantizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonReduceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ReduceDescriptor &desc) |
arm_compute::Status | NeonReshapeWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonResizeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const ResizeDescriptor &descriptor) |
arm_compute::Status | NeonRsqrtWorkloadValidate (const TensorInfo &input, const TensorInfo &output) |
arm_compute::Status | NeonSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SliceDescriptor &descriptor) |
arm_compute::Status | NeonSoftmaxWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SoftmaxDescriptor &descriptor) |
arm_compute::Status | NeonSpaceToBatchNdWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToBatchNdDescriptor &descriptor) |
arm_compute::Status | NeonSpaceToDepthWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const SpaceToDepthDescriptor &descriptor) |
arm_compute::Status | NeonSplitterWorkloadValidate (const TensorInfo &input, const std::vector< std::reference_wrapper< TensorInfo >> &outputs, unsigned int splitAxis) |
arm_compute::Status | NeonStackWorkloadValidate (const std::vector< const TensorInfo *> &inputs, const TensorInfo &output, const StackDescriptor &descriptor) |
arm_compute::Status | NeonStridedSliceWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const StridedSliceDescriptor &descriptor) |
arm_compute::Status | NeonSubtractionWorkloadValidate (const TensorInfo &input0, const TensorInfo &input1, const TensorInfo &output, const ActivationDescriptor *activationDescriptor) |
arm_compute::Status | NeonTransposeConvolution2dWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeConvolution2dDescriptor &descriptor, const TensorInfo &weights, const Optional< TensorInfo > &biases) |
arm_compute::Status | NeonTransposeWorkloadValidate (const TensorInfo &input, const TensorInfo &output, const TransposeDescriptor &descriptor) |
template<typename T > | |
void | CopyArmComputeTensorData (arm_compute::Tensor &dstTensor, const T *srcData) |
void | InitializeArmComputeTensorData (arm_compute::Tensor &tensor, const ConstCpuTensorHandle *handle) |
auto | SetNeonStridedSliceData (const std::vector< int > &m_begin, const std::vector< int > &m_end, const std::vector< int > &m_stride) |
auto | SetNeonSliceData (const std::vector< unsigned int > &m_begin, const std::vector< unsigned int > &m_size) |
constexpr const char * | RefBackendId () |
constexpr const char * | RefTensorHandleFactoryId () |
template<DataType ArmnnType> | |
bool | IsDataType (const WorkloadInfo &info) |
bool | IsSigned32 (const WorkloadInfo &info) |
bool | IsBFloat16 (const WorkloadInfo &info) |
bool | IsFloat16 (const WorkloadInfo &info) |
bool | IsQSymmS16 (const WorkloadInfo &info) |
bool | IsQSymmS8 (const WorkloadInfo &info) |
bool | IsQAsymmS8 (const WorkloadInfo &info) |
bool | IsQAsymmU8 (const WorkloadInfo &info) |
template<typename QueueDescriptorType > | |
constexpr bool | IsOperationQueueDescriptor (const QueueDescriptorType &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const MemCopyQueueDescriptor &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const ConstantQueueDescriptor &) |
template<> | |
constexpr bool | IsOperationQueueDescriptor (const PermuteQueueDescriptor &) |
float | Activation (float in, ActivationFunction function, float a, float b) |
void | Activation (Decoder< float > &in, Encoder< float > &out, const TensorInfo &tensorInfo, ActivationFunction function, float a, float b) |
template<typename OUT > | |
void | ArgMinMax (Decoder< float > &in, OUT *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
template void | ArgMinMax (Decoder< float > &in, int32_t *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
template void | ArgMinMax (Decoder< float > &in, int64_t *out, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, ArgMinMaxFunction function, int axis) |
void | BatchNormImpl (const BatchNormalizationQueueDescriptor &data, Decoder< float > &meanDecoder, Decoder< float > &varianceDecoder, Decoder< float > &betaDecoder, Decoder< float > &gammaDecoder, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
unsigned int | Offset (const TensorShape &shape, unsigned int batch, unsigned int height, unsigned int width, unsigned int channels, const DataLayoutIndexed &dataLayout) |
void | BatchToSpaceNd (const DataLayoutIndexed &dataLayout, const TensorInfo &inputTensorInfo, const TensorInfo &outputTensorInfo, const std::vector< unsigned int > &blockShape, const std::vector< std::pair< unsigned int, unsigned int >> &cropsData, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
void | Concatenate (const ConcatQueueDescriptor &data) |
void | Convolve (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rFilterShape, Decoder< float > &rFilterDecoder, bool biasEnabled, Decoder< float > *pBiasDecoder, DataLayout dataLayout, unsigned int paddingTop, unsigned int paddingLeft, unsigned int xStride, unsigned int yStride, unsigned int xDilation, unsigned int yDilation, bool depthwise) |
template<typename T > | |
void | Debug (const TensorInfo &inputInfo, const T *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< BFloat16 > (const TensorInfo &inputInfo, const BFloat16 *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< Half > (const TensorInfo &inputInfo, const Half *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< float > (const TensorInfo &inputInfo, const float *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< uint8_t > (const TensorInfo &inputInfo, const uint8_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int8_t > (const TensorInfo &inputInfo, const int8_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int16_t > (const TensorInfo &inputInfo, const int16_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template void | Debug< int32_t > (const TensorInfo &inputInfo, const int32_t *inputData, LayerGuid guid, const std::string &layerName, unsigned int slotIndex) |
template<typename T > | |
std::unique_ptr< Decoder< T > > | MakeDecoder (const TensorInfo &info, const void *data=nullptr) |
template<> | |
std::unique_ptr< Decoder< float > > | MakeDecoder (const TensorInfo &info, const void *data) |
template<> | |
std::unique_ptr< Decoder< bool > > | MakeDecoder (const TensorInfo &info, const void *data) |
template<> | |
std::unique_ptr< Decoder< int32_t > > | MakeDecoder (const TensorInfo &info, const void *data) |
void | DepthToSpace (const TensorInfo &inputInfo, const DepthToSpaceDescriptor &descriptor, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | Dequantize (Decoder< float > &inputDecoder, Encoder< float > &outputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo) |
std::vector< unsigned int > | GenerateRangeK (unsigned int k) |
void | TopKSort (unsigned int k, unsigned int *indices, const float *values, unsigned int numElement) |
float | IntersectionOverUnion (const float *boxI, const float *boxJ) |
std::vector< unsigned int > | NonMaxSuppression (unsigned int numBoxes, const std::vector< float > &boxCorners, const std::vector< float > &scores, float nmsScoreThreshold, unsigned int maxDetection, float nmsIouThreshold) |
void | AllocateOutputData (unsigned int numOutput, unsigned int numSelected, const std::vector< float > &boxCorners, const std::vector< unsigned int > &outputIndices, const std::vector< unsigned int > &selectedBoxes, const std::vector< unsigned int > &selectedClasses, const std::vector< float > &selectedScores, float *detectionBoxes, float *detectionScores, float *detectionClasses, float *numDetections) |
void | DetectionPostProcess (const TensorInfo &boxEncodingsInfo, const TensorInfo &scoresInfo, const TensorInfo &anchorsInfo, const TensorInfo &detectionBoxesInfo, const TensorInfo &detectionClassesInfo, const TensorInfo &detectionScoresInfo, const TensorInfo &numDetectionsInfo, const DetectionPostProcessDescriptor &desc, Decoder< float > &boxEncodings, Decoder< float > &scores, Decoder< float > &anchors, float *detectionBoxes, float *detectionClasses, float *detectionScores, float *numDetections) |
template<typename T > | |
std::unique_ptr< Encoder< T > > | MakeEncoder (const TensorInfo &info, void *data=nullptr) |
template<> | |
std::unique_ptr< Encoder< float > > | MakeEncoder (const TensorInfo &info, void *data) |
template<> | |
std::unique_ptr< Encoder< bool > > | MakeEncoder (const TensorInfo &info, void *data) |
template<> | |
std::unique_ptr< Encoder< int32_t > > | MakeEncoder (const TensorInfo &info, void *data) |
void | Fill (Encoder< float > &output, const TensorShape &desiredOutputShape, const float value) |
Creates a tensor and fills it with a scalar value. More... | |
void | FullyConnected (const TensorShape &rInputShape, Decoder< float > &rInputDecoder, const TensorShape &rOutputShape, Encoder< float > &rOutputEncoder, const TensorShape &rWeightsShape, Decoder< float > &rWeightDecoder, Decoder< float > &rBiasDecoder, bool biasEnabled, unsigned int K, bool transposeWeights) |
Performs a matrix multiplication and optionally adds a bias. More... | |
void | Gather (const TensorInfo ¶msInfo, const TensorInfo &indicesInfo, const TensorInfo &outputInfo, Decoder< float > ¶ms, const int32_t *indices, Encoder< float > &output, const int32_t axis) |
void | InstanceNorm (const InstanceNormalizationQueueDescriptor &data, Decoder< float > &inputDecoder, Encoder< float > &outputEncoder) |
void | LogSoftmax (Decoder< float > &input, Encoder< float > &output, const TensorInfo &inputInfo, const LogSoftmaxDescriptor &descriptor) |
void | Pad (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const PadQueueDescriptor &data) |
void | Pooling2d (Decoder< float > &rInputDecoder, Encoder< float > &rOutputEncoder, const TensorInfo &inputInfo, const TensorInfo &outputInfo, const Pooling2dDescriptor ¶ms) |
Computes the Pooling2d operation. More... | |
void | PreluImpl (const PreluQueueDescriptor &data, Decoder< float > &inputData, Decoder< float > &alphaData, Encoder< float > &outputData) |
bool | NextIndex (const unsigned int numDims, const armnn::TensorShape &dims, std::vector< unsigned int > ¤t) |
unsigned int | ReducedOutputOffset (const unsigned int numDims, const armnn::TensorShape &dims, std::vector< unsigned int > &index, const unsigned int numAxis, const std::vector< unsigned int > &axis) |
void | Reduce (const TensorInfo &inputInfo, const TensorInfo &outputInfo, Decoder< float > &input, Encoder< float > &output, const std::vector< uint32_t > axis, const ReduceOperation reduceOperation) |
void | FakeQuantization (const float *inputData, float *outputData, uint32_t numElements, float min, float max) |
const TensorInfo & | GetTensorInfo (const ITensorHandle *tensorHandle) |
float32 helpers More... | |
template<typename DataType , typename PayloadType > | |
const DataType * | GetInputTensorData (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const float * | GetInputTensorDataFloat (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
float * | GetOutputTensorDataFloat (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const Half * | GetInputTensorDataHalf (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
Half * | GetOutputTensorDataHalf (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
const BFloat16 * | GetInputTensorDataBFloat16 (unsigned int idx, const PayloadType &data) |
template<typename PayloadType > | |
BFloat16 * | GetOutputTensorDataBFloat16 (unsigned int idx, const PayloadType &data) |
template<typename T > | |
std::vector< float > | Dequantize (const T *quant, const TensorInfo &info) |
u8 helpers More... | |
template<typename T > | |
void | Dequantize (const T *inputData, float *outputData, const TensorInfo &info) |
void | Quantize (uint8_t *quant, const float *dequant, const TensorInfo &info) |
void | Resize (Decoder< float > &in, const TensorInfo &inputInfo, Encoder< float > &out, const TensorInfo &outputInfo, DataLayoutIndexed dataLayout, armnn::ResizeMethod resizeMethod, bool alignCorners, bool halfPixelCenters) |
void | Slice (const TensorInfo &inputInfo, const SliceDescriptor &descriptor, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | Softmax (Decoder< float > &in, Encoder< float > &out, const TensorInfo &inputTensorInfo, float beta, int axis) |
Computes the softmax function on some inputs, into outputs, with a shape given by tensorInfo. More... | |
unsigned int | GetOffset (const TensorShape &shape, unsigned int b, unsigned int h, unsigned int w, unsigned int c, const DataLayoutIndexed &dataLayout) |
void | SpaceToBatchNd (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const SpaceToBatchNdDescriptor ¶ms, Decoder< float > &inputData, Encoder< float > &outputData) |
void | SpaceToDepth (const TensorInfo &inputInfo, const TensorInfo &outputInfo, const SpaceToDepthDescriptor ¶ms, Decoder< float > &inputData, Encoder< float > &outputData) |
void | Split (const SplitterQueueDescriptor &data) |
template<typename DataType > | |
void | Splitter (const SplitterQueueDescriptor &data) |
void | Stack (const StackQueueDescriptor &data, std::vector< std::unique_ptr< Decoder< float >>> &inputs, Encoder< float > &output) |
void | StridedSlice (const TensorInfo &inputInfo, const StridedSliceDescriptor ¶ms, const void *inputData, void *outputData, unsigned int dataTypeSize) |
void | TransposeConvolution2dImpl (const TransposeConvolution2dDescriptor &descriptor, const TensorShape &inputShape, Decoder< float > &inputDecoder, const TensorShape &outputShape, Encoder< float > &outputEncoder, const TensorShape &weightsShape, Decoder< float > &weightsDecoder, Decoder< float > *biasesDecoder) |
std::istream & | operator>> (std::istream &in, armnn::Compute &compute) |
std::istream & | operator>> (std::istream &in, armnn::BackendId &backend) |
Variables | |
constexpr unsigned int | MaxNumOfTensorDimensions = 5U |
constexpr unsigned int | LOWEST_CAPTURE_PERIOD = 10000u |
The lowest performance data capture interval we support is 10 milliseconds. More... | |
constexpr std::size_t | g_ProfilingEventCountHint = 1024 |
constexpr bool | g_WriteProfilingEventSequence = true |
constexpr bool | g_AggregateProfilingEventsByInference = true |
constexpr bool | g_WriteReportToStdOutOnProfilerDestruction = false |
thread_local IProfiler * | tl_Profiler = nullptr |
const float | g_AsymmU8QuantizationBase = 255.0f |
const float | g_AsymmS8QuantizationBase = 255.0f |
const float | g_SymmS8QuantizationBase = 127.0f |
const float | g_SymmS16QuantizationBase = 32767.0f |
const float | g_TestTolerance = 0.000001f |
const std::set< armnn::LayerType > | paddingRequiredLayers |
Copyright (c) 2021 ARM Limited and Contributors. All rights reserved.
SPDX-License-Identifier: MIT
Create pages for each tool so they appear nicely in the doxygen tree-view. Subpages are not listed there. Also we can overwrite the page name this way.
Note: The parser, serializer and deserializer pages are created in 01_parsers.dox or 02_deserializer_serializer.dox
Optional is a drop-in replacement for std::optional until we migrate to C++17. Only the subset of the optional features that we intend to use in Arm NN is implemented. There are two distinct implementations here:
1. for normal constructable/destructable types and reference types
2. for reference types
The std::optional features we support are:
using AdditionalInfoObjectPtr = std::shared_ptr<void> |
using BackendIdSet = std::unordered_set<BackendId> |
Definition at line 191 of file BackendId.hpp.
using BackendIdVector = std::vector<BackendId> |
Definition at line 190 of file BackendId.hpp.
using BackendsMap = std::map<BackendId, std::unique_ptr<class IBackendInternal> > |
Definition at line 310 of file Network.hpp.
using BaseFloat32ComparisonWorkload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::Boolean> |
Definition at line 172 of file Workload.hpp.
using BaseUint8ComparisonWorkload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::QAsymmU8, armnn::DataType::Boolean> |
Definition at line 177 of file Workload.hpp.
using BFloat16ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::BFloat16, armnn::DataType::Float32> |
Definition at line 182 of file Workload.hpp.
using BindingPointInfo = std::pair<armnn::LayerBindingId, armnn::TensorInfo> |
Definition at line 261 of file Tensor.hpp.
Definition at line 167 of file Workload.hpp.
using CompiledBlobDeleter = std::function<void(const void*)> |
Definition at line 17 of file ISubgraphViewConverter.hpp.
using CompiledBlobPtr = std::unique_ptr<void, CompiledBlobDeleter> |
Definition at line 18 of file ISubgraphViewConverter.hpp.
using ConcatDescriptor = OriginsDescriptor |
Definition at line 52 of file DescriptorsFwd.hpp.
using Coordinates = std::array<unsigned int, MaxNumOfTensorDimensions> |
Definition at line 14 of file InternalTypes.hpp.
using DebugCallbackFunction = std::function<void(LayerGuid guid, unsigned int slotIndex, ITensorHandle* tensorHandle)> |
Define the type of callback for the Debug layer to call.
guid | - guid of layer connected to the input of the Debug layer |
slotIndex | - index of the output slot connected to the input of the Debug layer |
tensorHandle | - TensorHandle for the input tensor to the Debug layer |
A DepthToSpaceDescriptor for the DepthToSpaceLayer.
Definition at line 908 of file Descriptors.hpp.
using Dimensions = std::array<unsigned int, MaxNumOfTensorDimensions> |
Definition at line 15 of file InternalTypes.hpp.
using DynamicBackendPtr = std::unique_ptr<DynamicBackend> |
Definition at line 52 of file DynamicBackend.hpp.
Definition at line 21 of file ClTensorHandleFactory.cpp.
using Float16ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float16, armnn::DataType::Float32> |
Definition at line 192 of file Workload.hpp.
using Float32ToBFloat16Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::BFloat16> |
Definition at line 187 of file Workload.hpp.
using Float32ToFloat16Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::Float32, armnn::DataType::Float16> |
Definition at line 197 of file Workload.hpp.
Definition at line 158 of file Workload.hpp.
using FloatWorkload = TypedWorkload<QueueDescriptor, armnn::DataType::Float16, armnn::DataType::Float32> |
Definition at line 155 of file Workload.hpp.
using IBackendContextUniquePtr = std::unique_ptr<IBackendContext> |
Definition at line 31 of file IBackendContext.hpp.
typedef std::unique_ptr< IBackendInternal > IBackendInternalUniquePtr |
Definition at line 23 of file BackendRegistry.hpp.
using IBackendSharedPtr = std::shared_ptr<IBackend> |
using IBackendUniquePtr = std::unique_ptr<IBackend, void(*)(IBackend* backend)> |
using IGpuAccTunedParametersPtr = std::shared_ptr<IGpuAccTunedParameters> |
The following API is replaced by the backend options API.
Definition at line 182 of file IRuntime.hpp.
using ILayerSupportSharedPtr = std::shared_ptr<ILayerSupport> |
Definition at line 421 of file ILayerSupport.hpp.
using IMemoryManagerUniquePtr = std::unique_ptr<IMemoryManager> |
Definition at line 24 of file IMemoryManager.hpp.
using INetworkPtr = std::unique_ptr<INetwork, void(*)(INetwork* network)> |
Definition at line 173 of file INetwork.hpp.
using INetworkQuantizerPtr = std::unique_ptr<class INetworkQuantizer, void(*)(INetworkQuantizer* quantizer)> |
Definition at line 29 of file INetworkQuantizer.hpp.
Definition at line 81 of file WorkloadData.hpp.
using InputTensors = std::vector<std::pair<LayerBindingId, class ConstTensor> > |
Definition at line 340 of file Tensor.hpp.
using instead = SubgraphView |
Definition at line 105 of file SubgraphView.hpp.
Definition at line 164 of file Workload.hpp.
using IOptimizedNetworkPtr = std::unique_ptr<IOptimizedNetwork, void(*)(IOptimizedNetwork* network)> |
Definition at line 174 of file INetwork.hpp.
Definition at line 28 of file Runtime.hpp.
using IRuntimePtr = std::unique_ptr<IRuntime, void(*)(IRuntime* runtime)> |
Definition at line 26 of file IRuntime.hpp.
using LayerBindingId = int |
using LayerGuid = profiling::ProfilingGuid |
using LayerPriority = unsigned int |
using LayerTypeOf = typename LayerTypeOfImpl<Type>::Type |
Definition at line 83 of file LayersFwd.hpp.
using LoadedNetworks = std::unordered_map<NetworkId, std::unique_ptr<LoadedNetwork> > |
Definition at line 27 of file Runtime.hpp.
A LogSoftmaxDescriptor for the LogSoftmaxLayer.
Definition at line 158 of file Descriptors.hpp.
using MemorySourceFlags = unsigned int |
Definition at line 21 of file MemorySources.hpp.
using MergerDescriptor = OriginsDescriptor |
MergerDescriptor is deprecated, use ConcatDescriptor instead.
Definition at line 56 of file DescriptorsFwd.hpp.
Definition at line 139 of file WorkloadData.hpp.
using MinMaxRange = std::pair<float, float> |
Definition at line 25 of file QuantizerTest.cpp.
using MinMaxRangeMap = std::unordered_map<LayerGuid, MinMaxRanges> |
Definition at line 27 of file QuantizerTest.cpp.
using MinMaxRanges = std::vector<MinMaxRange> |
Definition at line 26 of file QuantizerTest.cpp.
using ModelOptions = std::vector<BackendOptions> |
Definition at line 17 of file BackendOptions.hpp.
using NetworkId = int |
Definition at line 20 of file IRuntime.hpp.
using NetworkImplPtr = std::unique_ptr<NetworkImpl, void(*)(NetworkImpl* network)> |
Definition at line 28 of file Network.hpp.
using NetworkOptions = std::vector<BackendOptions> |
Definition at line 15 of file BackendOptions.hpp.
using OffsetScalePair = std::pair<float, int> |
Definition at line 16 of file NetworkQuantizationScheme.hpp.
Definition at line 82 of file WorkloadData.hpp.
using OutputTensors = std::vector<std::pair<LayerBindingId, class Tensor> > |
Definition at line 341 of file Tensor.hpp.
using ParameterStringifyFunction = std::function<void(const std::string& name, const std::string& value)> |
Definition at line 14 of file SerializeLayerParameters.hpp.
using PreCompiledObjectDeleter = std::function<void(const void*)> |
Definition at line 19 of file PreCompiledLayer.hpp.
using PreCompiledObjectPtr = std::unique_ptr<void, PreCompiledObjectDeleter> |
Definition at line 20 of file PreCompiledLayer.hpp.
using RefAdditionWorkload = RefElementwiseWorkload<std::plus<DataType>, AdditionQueueDescriptor, StringMapping::RefAdditionWorkload_Execute> |
Definition at line 42 of file RefElementwiseWorkload.hpp.
Definition at line 40 of file RefDebugWorkload.hpp.
Definition at line 41 of file RefDebugWorkload.hpp.
Definition at line 42 of file RefDebugWorkload.hpp.
Definition at line 44 of file RefDebugWorkload.hpp.
Definition at line 43 of file RefDebugWorkload.hpp.
Definition at line 45 of file RefDebugWorkload.hpp.
Definition at line 46 of file RefDebugWorkload.hpp.
Definition at line 47 of file RefDebugWorkload.hpp.
using RefDivisionWorkload = RefElementwiseWorkload<std::divides<DataType>, DivisionQueueDescriptor, StringMapping::RefDivisionWorkload_Execute> |
Definition at line 60 of file RefElementwiseWorkload.hpp.
using RefMaximumWorkload = RefElementwiseWorkload<armnn::maximum<DataType>, MaximumQueueDescriptor, StringMapping::RefMaximumWorkload_Execute> |
Definition at line 66 of file RefElementwiseWorkload.hpp.
using RefMinimumWorkload = RefElementwiseWorkload<armnn::minimum<DataType>, MinimumQueueDescriptor, StringMapping::RefMinimumWorkload_Execute> |
Definition at line 72 of file RefElementwiseWorkload.hpp.
using RefMultiplicationWorkload = RefElementwiseWorkload<std::multiplies<DataType>, MultiplicationQueueDescriptor, StringMapping::RefMultiplicationWorkload_Execute> |
Definition at line 54 of file RefElementwiseWorkload.hpp.
Definition at line 30 of file RefPermuteWorkload.hpp.
Definition at line 31 of file RefPermuteWorkload.hpp.
Definition at line 32 of file RefPermuteWorkload.hpp.
Definition at line 34 of file RefPermuteWorkload.hpp.
Definition at line 33 of file RefPermuteWorkload.hpp.
Definition at line 35 of file RefPermuteWorkload.hpp.
using RefSubtractionWorkload = RefElementwiseWorkload<std::minus<DataType>, SubtractionQueueDescriptor, StringMapping::RefSubtractionWorkload_Execute> |
Definition at line 48 of file RefElementwiseWorkload.hpp.
Definition at line 30 of file RefTransposeWorkload.hpp.
Definition at line 31 of file RefTransposeWorkload.hpp.
Definition at line 32 of file RefTransposeWorkload.hpp.
Definition at line 34 of file RefTransposeWorkload.hpp.
Definition at line 33 of file RefTransposeWorkload.hpp.
Definition at line 35 of file RefTransposeWorkload.hpp.
using ResolveType = typename ResolveTypeImpl<DT>::Type |
Definition at line 73 of file ResolveType.hpp.
using SplitterDescriptor = ViewsDescriptor |
Definition at line 57 of file DescriptorsFwd.hpp.
using supported = ISubgraphViewConverter |
Definition at line 31 of file ISubgraphViewConverter.hpp.
using TContainer = mapbox::util::variant<std::vector<float>, std::vector<int>, std::vector<unsigned char> > |
Definition at line 34 of file NetworkQuantizer.cpp.
using Uint8ToFloat32Workload = MultiTypedWorkload<QueueDescriptor, armnn::DataType::QAsymmU8, armnn::DataType::Float32> |
Definition at line 202 of file Workload.hpp.
Definition at line 161 of file Workload.hpp.
using WorkloadQueue = std::vector< std::unique_ptr<IWorkload> > |
Definition at line 13 of file ExecutionFrame.hpp.
|
strong |
Capability class used by the GetCapabilities function so that only capabilities within the requested scope are calculated.
Enumerator | |
---|---|
PaddingRequired | |
CapabilityClassMax |
Definition at line 20 of file ITensorHandleFactory.hpp.
|
strong |
The Compute enum is deprecated and is being replaced by BackendId.
Enumerator | |
---|---|
Undefined | |
CpuRef | CPU Execution: Reference C++ kernels. |
CpuAcc | CPU Execution: NEON: ArmCompute. |
GpuAcc | GPU Execution: OpenCL: ArmCompute. |
Definition at line 21 of file BackendId.hpp.
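Since the fixed Compute enum is being replaced by string-based backend identifiers, the sketch below illustrates the idea with a minimal stand-in class (this is not the actual armnn::BackendId implementation; names and details here are illustrative assumptions): a string wrapper lets new backends be registered without extending an enum, while the legacy Compute values map onto well-known id strings.

```cpp
#include <string>

// Hypothetical stand-in for armnn::BackendId: a thin wrapper over a string,
// so new backends can be added without extending a fixed enum.
class BackendId
{
public:
    BackendId(const std::string& id) : m_Id(id) {}
    const std::string& Get() const { return m_Id; }
    bool operator==(const BackendId& other) const { return m_Id == other.m_Id; }
private:
    std::string m_Id;
};

// The legacy Compute values map onto well-known backend id strings.
enum class Compute { Undefined, CpuRef, CpuAcc, GpuAcc };

inline BackendId ToBackendId(Compute compute)
{
    switch (compute)
    {
        case Compute::CpuRef: return BackendId("CpuRef");
        case Compute::CpuAcc: return BackendId("CpuAcc");
        case Compute::GpuAcc: return BackendId("GpuAcc");
        default:              return BackendId("Undefined");
    }
}
```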
|
strong |
Enumerator | |
---|---|
Float16 | |
Float32 | |
QAsymmU8 | |
Signed32 | |
Boolean | |
QSymmS16 | |
QuantizedSymm8PerAxis | |
QSymmS8 | |
QAsymmS8 | |
BFloat16 | |
Signed64 | |
QuantisedAsymm8 | |
QuantisedSymm16 |
Definition at line 32 of file Types.hpp.
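A common use of a DataType enum like the one above is selecting per-element storage size. The sketch below derives the sizes from the type names listed in the table (an assumption based on the naming conventions, not taken from the armnn headers); the deprecated aliases are omitted.

```cpp
#include <cstddef>

enum class DataType { Float16, Float32, QAsymmU8, Signed32, Boolean,
                      QSymmS16, QSymmS8, QAsymmS8, BFloat16, Signed64 };

// Per-element storage size implied by each type name.
constexpr std::size_t GetDataTypeSize(DataType type)
{
    switch (type)
    {
        case DataType::QAsymmU8:
        case DataType::QAsymmS8:
        case DataType::QSymmS8:
        case DataType::Boolean:  return 1;
        case DataType::Float16:
        case DataType::BFloat16:
        case DataType::QSymmS16: return 2;
        case DataType::Float32:
        case DataType::Signed32: return 4;
        case DataType::Signed64: return 8;
    }
    return 0;
}
```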
|
strong |
Definition at line 99 of file ITensorHandleFactory.hpp.
|
strong |
Enumerator | |
---|---|
Measurement | |
Event |
Definition at line 18 of file JsonPrinter.hpp.
|
strong |
When adding a new layer, also adapt the LastLayer enum value in the LayerType enum class below.
Definition at line 419 of file Types.hpp.
|
strong |
Enumerator | |
---|---|
Trace | |
Debug | |
Info | |
Warning | |
Error | |
Fatal |
Definition at line 13 of file Utils.hpp.
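The LogSeverity levels above are ordered from Trace to Fatal, which allows threshold-based filtering. A minimal sketch of that pattern (the SimpleLogger class here is illustrative, not an armnn API):

```cpp
#include <string>
#include <vector>

enum class LogSeverity { Trace, Debug, Info, Warning, Error, Fatal };

// Minimal severity-filtered sink: messages below the configured
// threshold are dropped, relying on the enumerators' ordering.
class SimpleLogger
{
public:
    explicit SimpleLogger(LogSeverity threshold) : m_Threshold(threshold) {}

    void Log(LogSeverity severity, const std::string& message)
    {
        if (severity >= m_Threshold)
        {
            m_Messages.push_back(message);
        }
    }

    const std::vector<std::string>& Messages() const { return m_Messages; }

private:
    LogSeverity m_Threshold;
    std::vector<std::string> m_Messages;
};
```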
|
strong |
The padding method modifies the output of pooling layers.
In both supported methods, the padded values themselves are ignored (they are not even treated as zeroes, which would make a difference when max pooling a tensor containing negative values). The difference between IgnoreValue and Exclude is that IgnoreValue counts the padding fields in the divisor of Average and L2 pooling, while Exclude does not.
Enumerator | |
---|---|
IgnoreValue | The padding fields count, but are ignored. |
Exclude | The padding fields don't count and are ignored. |
Definition at line 141 of file Types.hpp.
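The divisor difference can be made concrete with a worked example: averaging a 2x2 window where two fields are padding and the two real values are 4 and 6. IgnoreValue divides the sum by 4 (the full window size), giving 2.5, while Exclude divides by 2 (only the valid elements), giving 5. A self-contained sketch of that computation:

```cpp
#include <cstddef>
#include <vector>

enum class PaddingMethod { IgnoreValue, Exclude };

// Average over one pooling window where some fields are padding.
// In both methods the padded values contribute nothing to the sum;
// with IgnoreValue the padding fields still count in the divisor,
// with Exclude they do not.
float AveragePoolWindow(const std::vector<float>& values,
                        const std::vector<bool>& isPadding,
                        PaddingMethod method)
{
    float sum = 0.0f;
    unsigned int validCount = 0;
    for (std::size_t i = 0; i < values.size(); ++i)
    {
        if (!isPadding[i])
        {
            sum += values[i];
            ++validCount;
        }
    }
    unsigned int divisor = (method == PaddingMethod::IgnoreValue)
                               ? static_cast<unsigned int>(values.size())
                               : validCount;
    return divisor == 0 ? 0.0f : sum / static_cast<float>(divisor);
}
```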
|
strong |
The ShapeInferenceMethod modifies how the output shapes are treated.
When ValidateOnly is selected, the output shapes are inferred from the input parameters of the layer and any mismatch is reported. When InferAndValidate is selected, two actions are performed: (1) infer the output shape from the inputs and (2) validate the shapes as in ValidateOnly. This option was added to support tensors whose rank or dimension sizes are not specified explicitly but can be calculated from the inputs.
Enumerator | |
---|---|
ValidateOnly | Validate all output shapes. |
InferAndValidate | Infer missing output shapes and validate all output shapes. |
Definition at line 177 of file Types.hpp.
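The two modes can be sketched as a single shape-resolution step. The function below is illustrative (not armnn's implementation): ValidateOnly requires an output shape to be specified up front and checks it against what the inputs imply, while InferAndValidate fills in a missing shape first and then performs the same check.

```cpp
#include <optional>
#include <stdexcept>
#include <vector>

enum class ShapeInferenceMethod { ValidateOnly, InferAndValidate };

using Shape = std::vector<unsigned int>;

// Resolve an output shape given what the layer can infer from its inputs.
Shape ResolveOutputShape(const std::optional<Shape>& specified,
                         const Shape& inferredFromInputs,
                         ShapeInferenceMethod method)
{
    if (!specified.has_value())
    {
        if (method == ShapeInferenceMethod::ValidateOnly)
        {
            throw std::runtime_error("output shape not specified");
        }
        return inferredFromInputs; // InferAndValidate: adopt the inferred shape
    }
    if (*specified != inferredFromInputs)
    {
        throw std::runtime_error("output shape mismatch");
    }
    return *specified;
}
```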
|
strong |
Enumerator | |
---|---|
None | |
Rapid | |
Normal | |
Exhaustive |
Definition at line 70 of file ClBackendContext.cpp.
float Activation | ( | float | in, |
ActivationFunction | function, | ||
float | a, | ||
float | b | ||
) |
Definition at line 13 of file Activation.cpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by Activation().
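This overload applies one of the referenced activation functions to a single scalar, parameterised by a and b. The sketch below mirrors that signature for a few of the functions; the parameter roles (BoundedReLu clamping to [b, a], LeakyReLu using a as the negative-side slope, Linear as a*in + b, TanH as a*tanh(b*in)) follow common conventions and are assumptions here, not quoted from Activation.cpp.

```cpp
#include <algorithm>
#include <cmath>

enum class ActivationFunction { Linear, ReLu, BoundedReLu, LeakyReLu, TanH };

// Scalar reference activation, mirroring the shape of
// float Activation(float in, ActivationFunction function, float a, float b).
float ActivationRef(float in, ActivationFunction function, float a, float b)
{
    switch (function)
    {
        case ActivationFunction::Linear:      return a * in + b;
        case ActivationFunction::ReLu:        return std::max(0.0f, in);
        case ActivationFunction::BoundedReLu: return std::min(a, std::max(b, in));
        case ActivationFunction::LeakyReLu:   return in > 0.0f ? in : a * in;
        case ActivationFunction::TanH:        return a * std::tanh(b * in);
    }
    return in;
}
```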
void Activation | ( | Decoder< float > & | in, |
Encoder< float > & | out, | ||
const TensorInfo & | tensorInfo, | ||
ActivationFunction | function, | ||
float | a, | ||
float | b | ||
) |
Definition at line 95 of file Activation.cpp.
References Activation(), Decoder< IType >::Get(), TensorInfo::GetNumElements(), and Encoder< IType >::Set().
void armnn::AllocateOutputData | ( | unsigned int | numOutput, |
unsigned int | numSelected, | ||
const std::vector< float > & | boxCorners, | ||
const std::vector< unsigned int > & | outputIndices, | ||
const std::vector< unsigned int > & | selectedBoxes, | ||
const std::vector< unsigned int > & | selectedClasses, | ||
const std::vector< float > & | selectedScores, | ||
float * | detectionBoxes, | ||
float * | detectionScores, | ||
float * | detectionClasses, | ||
float * | numDetections | ||
) |
Definition at line 102 of file DetectionPostProcess.cpp.
References numeric_cast().
Referenced by DetectionPostProcess().
bool armnn::AllTypesAreEqualImpl | ( | T | ) |
Definition at line 59 of file LayerSupportRules.hpp.
Referenced by AllTypesAreEqualImpl(), and TypesAreEqual::TypesAreEqual().
bool armnn::AllTypesAreEqualImpl | ( | T | t1, |
T | t2, | ||
Rest... | rest | ||
) |
Definition at line 65 of file LayerSupportRules.hpp.
References AllTypesAreEqualImpl().
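The two AllTypesAreEqualImpl overloads form a classic variadic-template recursion: the single-argument overload is the base case, and the multi-argument overload compares the first pair and recurses on the tail. A self-contained sketch of the same pattern (illustrative, not armnn's exact code):

```cpp
// Base case: a single remaining value is trivially "all equal".
template <typename T>
bool AllEqual(T)
{
    return true;
}

// Recursive case: compare the first two values, then recurse,
// carrying the second value forward as the new head.
template <typename T, typename... Rest>
bool AllEqual(T t1, T t2, Rest... rest)
{
    return t1 == t2 && AllEqual(t2, rest...);
}
```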
void armnn::Append | ( | Optimizer::Optimizations & | optimizations, |
T && | optimization | ||
) |
Definition at line 30 of file Optimizer.hpp.
Referenced by Append(), and MakeOptimizations().
void armnn::Append | ( | Optimizer::Optimizations & | optimizations, |
Front && | front, | ||
Others &&... | others | ||
) |
Definition at line 36 of file Optimizer.hpp.
References Append().
OptimizationResult armnn::ApplyBackendOptimizations | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
BackendsMap & | backends, | ||
const ModelOptions & | modelOptions, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 1028 of file Network.cpp.
References ARMNN_ASSERT, AssignBackends(), SubgraphView::begin(), SubgraphView::end(), Layer::GetBackendId(), OptimizationViews::GetFailedSubgraphs(), OptimizedNetworkImpl::GetGraph(), OptimizationViews::GetSubstitutions(), Layer::GetType(), Input, OptimizationResult::m_Error, BackendSettings::m_SelectedBackends, Output, ReportWarning(), SubgraphViewSelector::SelectSubgraphs(), Graph::SubstituteSubgraph(), and OptimizationViews::Validate().
Referenced by Optimize().
void armnn::ApplyStrategyToLayers | ( | const LayerContainer & | layerContainer, |
IStrategy & | strategy | ||
) |
Definition at line 61 of file NetworkQuantizerUtils.hpp.
References IStrategy::FinishStrategy().
Referenced by BOOST_AUTO_TEST_CASE(), NetworkQuantizer::ExportNetwork(), NetworkQuantizer::Refine(), and VisitLayersTopologically().
void ArgMinMax | ( | Decoder< float > & | in, |
OUT * | out, | ||
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
ArgMinMaxFunction | function, | ||
int | axis | ||
) |
Definition at line 16 of file ArgMinMax.cpp.
References Decoder< IType >::Get(), TensorInfo::GetNumDimensions(), armnnUtils::GetNumElementsBetween(), TensorInfo::GetShape(), armnnUtils::GetUnsignedAxis(), IgnoreUnused(), Max, Min, and numeric_cast().
Referenced by BOOST_AUTO_TEST_CASE().
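The core of an ArgMinMax operation is finding, for each slice along the reduced axis, the index of the extreme element. The sketch below shows this for the inner axis of a row-major 2D tensor (a simplification of the general N-dimensional, decoder-based version documented above):

```cpp
#include <cstddef>
#include <vector>

enum class ArgMinMaxFunction { Min, Max };

// For each of the `rows` slices of length `cols`, emit the index of the
// minimum or maximum element, matching the integer output of ArgMinMax.
std::vector<int> ArgMinMax2D(const std::vector<float>& data,
                             std::size_t rows, std::size_t cols,
                             ArgMinMaxFunction function)
{
    std::vector<int> out(rows, 0);
    for (std::size_t r = 0; r < rows; ++r)
    {
        std::size_t best = 0;
        for (std::size_t c = 1; c < cols; ++c)
        {
            float candidate = data[r * cols + c];
            float current   = data[r * cols + best];
            bool better = (function == ArgMinMaxFunction::Max)
                              ? candidate > current
                              : candidate < current;
            if (better) { best = c; }
        }
        out[r] = static_cast<int>(best);
    }
    return out;
}
```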
template void armnn::ArgMinMax | ( | Decoder< float > & | in, |
int32_t * | out, | ||
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
ArgMinMaxFunction | function, | ||
int | axis | ||
) |
template void armnn::ArgMinMax | ( | Decoder< float > & | in, |
int64_t * | out, | ||
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
ArgMinMaxFunction | function, | ||
int | axis | ||
) |
OptimizationResult AssignBackends | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
Graph::Iterator & | firstLayer, | ||
Graph::Iterator & | lastLayer, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 869 of file Network.cpp.
References ARMNN_ASSERT_MSG, AttemptBackendAssignment(), CheckScaleSetOnQuantizedType(), Constant, CpuRef, Float32, BackendSettings::GetAvailablePreferredBackends(), OptimizedNetworkImpl::GetGraph(), BackendSettings::IsBackendSupported(), BackendSettings::IsCpuRefUsed(), OptimizationResult::IsError(), OptimizationResult::IsOk(), OptimizationResult::IsWarningOnly(), OptimizationResult::m_Error, BackendSettings::m_SelectedBackends, MemCopy, Permute, ReportError(), and ReturnWithError().
Referenced by ApplyBackendOptimizations(), AssignBackends(), BOOST_AUTO_TEST_CASE(), and Optimize().
OptimizationResult armnn::AssignBackends | ( | OptimizedNetworkImpl * | optNetObjPtr, |
BackendSettings & | backendSettings, | ||
SubgraphView & | subgraph, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 995 of file Network.cpp.
References AssignBackends(), SubgraphView::begin(), and SubgraphView::end().
void armnn::AssignSplitId | ( | LayerSelectionInfo::LayerInfoContainer & | layerInfos, |
LayerSelectionInfo & | layerInfo | ||
) |
Definition at line 305 of file SubgraphViewSelector.cpp.
References ForEachLayerInput().
Referenced by SubgraphViewSelector::SelectSubgraphs().
OptimizationResult armnn::AttemptBackendAssignment | ( | BackendSettings & | backendSettings, |
Graph & | graph, | ||
Layer * | layer, | ||
BackendId | backend, | ||
DataType | dataTypeIn, | ||
DataType | dataTypeOut, | ||
const std::vector< BackendId > & | availablePreferredBackends, | ||
std::string & | reasonIfUnsupported, | ||
Optional< std::vector< std::string > &> | errMessages | ||
) |
Definition at line 661 of file Network.cpp.
References BFloat16, ConvertBf16ToFp32, ConvertFp16ToFp32, ConvertFp32ToBf16, ConvertFp32ToFp16, Convolution2d, Float16, Float32, FullyConnected, BackendId::Get(), Layer::GetBackendId(), GetDataTypeName(), GetLayerTypeAsCString(), Layer::GetType(), InsertConvertBf16ToFp32LayersBefore(), InsertConvertFp16ToFp32LayersBefore(), InsertConvertFp32ToBf16LayersAfter(), InsertConvertFp32ToFp16LayersAfter(), IWorkloadFactory::IsLayerSupported(), ReportWarning(), ReturnWithError(), and Layer::SetBackendId().
Referenced by AssignBackends().
BackendRegistry & BackendRegistryInstance | ( | ) |
Definition at line 13 of file BackendRegistry.cpp.
Referenced by InferenceModel< IParser, TDataType >::AddCommandLineOptions(), BOOST_AUTO_TEST_CASE(), CreateBackendObject(), CreateSupportedBackends(), DynamicBackendUtils::DeregisterDynamicBackends(), GetILayerSupportByBackendId(), ProfilingService::GetSendTimelinePacket(), GetSuitableBackendRegistered(), main(), LoadedNetwork::MakeLoadedNetwork(), MockBackendInitialiser::MockBackendInitialiser(), MockImportBackendInitialiser::MockImportBackendInitialiser(), Optimize(), ProgramOptions::ProgramOptions(), DynamicBackendUtils::RegisterDynamicBackends(), RuntimeEmptyTestImpl(), RuntimeImpl::RuntimeImpl(), RuntimeInvalidOverridePathTestImpl(), TestBackendRegistry::TestBackendRegistry(), MockBackendInitialiser::~MockBackendInitialiser(), MockImportBackendInitialiser::~MockImportBackendInitialiser(), RuntimeImpl::~RuntimeImpl(), and TestBackendRegistry::~TestBackendRegistry().
void BatchNormImpl | ( | const BatchNormalizationQueueDescriptor & | data, |
Decoder< float > & | meanDecoder, | ||
Decoder< float > & | varianceDecoder, | ||
Decoder< float > & | betaDecoder, | ||
Decoder< float > & | gammaDecoder, | ||
Decoder< float > & | inputDecoder, | ||
Encoder< float > & | outputEncoder | ||
) |
Definition at line 18 of file BatchNormImpl.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorInfo::GetShape(), GetTensorInfo(), DataLayoutIndexed::GetWidthIndex(), BatchNormalizationDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_Eps, QueueDescriptor::m_Inputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
Referenced by RefBatchNormalizationWorkload::Execute().
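Per element, batch normalization computes out = gamma * (x - mean) / sqrt(variance + eps) + beta. The sketch below shows just that formula over a flat buffer; the per-channel parameter selection that BatchNormImpl performs via DataLayoutIndexed and the decoder/encoder pair is deliberately omitted.

```cpp
#include <cmath>
#include <vector>

// Per-element batch normalization:
//   out = gamma * (x - mean) / sqrt(variance + eps) + beta
std::vector<float> BatchNorm(const std::vector<float>& input,
                             float mean, float variance,
                             float beta, float gamma, float eps)
{
    std::vector<float> output;
    output.reserve(input.size());
    float invStdDev = 1.0f / std::sqrt(variance + eps);
    for (float x : input)
    {
        output.push_back(gamma * (x - mean) * invStdDev + beta);
    }
    return output;
}
```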
void BatchToSpaceNd | ( | const DataLayoutIndexed & | dataLayout, |
const TensorInfo & | inputTensorInfo, | ||
const TensorInfo & | outputTensorInfo, | ||
const std::vector< unsigned int > & | blockShape, | ||
const std::vector< std::pair< unsigned int, unsigned int >> & | cropsData, | ||
Decoder< float > & | inputDecoder, | ||
Encoder< float > & | outputEncoder | ||
) |
Definition at line 35 of file BatchToSpaceNd.cpp.
References ARMNN_ASSERT_MSG, BatchToSpaceNd(), Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), TensorShape::GetNumDimensions(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), Offset(), and Encoder< IType >::Set().
Referenced by BatchToSpaceNd(), and BatchToSpaceNdLayer::BatchToSpaceNdLayer().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckInputLayerVisitorBindingIdAndName | ) |
Definition at line 13 of file TestInputOutputLayerVisitor.cpp.
References IConnectableLayer::Accept(), and NetworkImpl::AddInputLayer().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckInputLayerVisitorBindingIdAndNameNull | ) |
Definition at line 23 of file TestInputOutputLayerVisitor.cpp.
References IConnectableLayer::Accept(), and NetworkImpl::AddInputLayer().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckOutputLayerVisitorBindingIdAndName | ) |
Definition at line 32 of file TestInputOutputLayerVisitor.cpp.
References IConnectableLayer::Accept(), and NetworkImpl::AddOutputLayer().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckOutputLayerVisitorBindingIdAndNameNull | ) |
Definition at line 42 of file TestInputOutputLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddOutputLayer(), and BOOST_AUTO_TEST_SUITE_END().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckConvolution2dLayer | ) |
Definition at line 268 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddConvolution2dLayer(), Float32, Convolution2dDescriptor::m_DataLayout, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, and NHWC.
Referenced by BOOST_AUTO_TEST_CASE(), and QuantizeData().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedConvolution2dLayer | ) |
Definition at line 291 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddConvolution2dLayer(), Float32, Convolution2dDescriptor::m_DataLayout, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckConvolution2dLayerWithBiases | ) |
Definition at line 315 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddConvolution2dLayer(), Float32, Convolution2dDescriptor::m_BiasEnabled, Convolution2dDescriptor::m_DataLayout, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedConvolution2dLayerWithBiases | ) |
Definition at line 344 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddConvolution2dLayer(), Float32, Convolution2dDescriptor::m_BiasEnabled, Convolution2dDescriptor::m_DataLayout, Convolution2dDescriptor::m_PadBottom, Convolution2dDescriptor::m_PadLeft, Convolution2dDescriptor::m_PadRight, Convolution2dDescriptor::m_PadTop, Convolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckDepthwiseConvolution2dLayer | ) |
Definition at line 374 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddDepthwiseConvolution2dLayer(), Float32, DepthwiseConvolution2dDescriptor::m_DataLayout, DepthwiseConvolution2dDescriptor::m_PadBottom, DepthwiseConvolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadRight, DepthwiseConvolution2dDescriptor::m_PadTop, DepthwiseConvolution2dDescriptor::m_StrideX, DepthwiseConvolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedDepthwiseConvolution2dLayer | ) |
Definition at line 397 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddDepthwiseConvolution2dLayer(), Float32, DepthwiseConvolution2dDescriptor::m_DataLayout, DepthwiseConvolution2dDescriptor::m_PadBottom, DepthwiseConvolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadRight, DepthwiseConvolution2dDescriptor::m_PadTop, DepthwiseConvolution2dDescriptor::m_StrideX, DepthwiseConvolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckDepthwiseConvolution2dLayerWithBiases | ) |
Definition at line 424 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddDepthwiseConvolution2dLayer(), Float32, DepthwiseConvolution2dDescriptor::m_BiasEnabled, DepthwiseConvolution2dDescriptor::m_DataLayout, DepthwiseConvolution2dDescriptor::m_PadBottom, DepthwiseConvolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadRight, DepthwiseConvolution2dDescriptor::m_PadTop, DepthwiseConvolution2dDescriptor::m_StrideX, DepthwiseConvolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedDepthwiseConvolution2dLayerWithBiases | ) |
Definition at line 453 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddDepthwiseConvolution2dLayer(), Float32, DepthwiseConvolution2dDescriptor::m_BiasEnabled, DepthwiseConvolution2dDescriptor::m_DataLayout, DepthwiseConvolution2dDescriptor::m_PadBottom, DepthwiseConvolution2dDescriptor::m_PadLeft, DepthwiseConvolution2dDescriptor::m_PadRight, DepthwiseConvolution2dDescriptor::m_PadTop, DepthwiseConvolution2dDescriptor::m_StrideX, DepthwiseConvolution2dDescriptor::m_StrideY, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckFullyConnectedLayer | ) |
Definition at line 483 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddFullyConnectedLayer(), Float32, and FullyConnectedDescriptor::m_TransposeWeightMatrix.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedFullyConnectedLayer | ) |
Definition at line 500 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddFullyConnectedLayer(), Float32, and FullyConnectedDescriptor::m_TransposeWeightMatrix.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckFullyConnectedLayerWithBiases | ) |
Definition at line 518 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddFullyConnectedLayer(), Float32, FullyConnectedDescriptor::m_BiasEnabled, and FullyConnectedDescriptor::m_TransposeWeightMatrix.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedFullyConnectedLayerWithBiases | ) |
Definition at line 541 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddFullyConnectedLayer(), Float32, FullyConnectedDescriptor::m_BiasEnabled, and FullyConnectedDescriptor::m_TransposeWeightMatrix.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckBatchNormalizationLayer | ) |
Definition at line 565 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddBatchNormalizationLayer(), Float32, BatchNormalizationDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_Eps, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeAddition | ) |
Definition at line 568 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedBatchNormalizationLayer | ) |
Definition at line 595 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddBatchNormalizationLayer(), Float32, BatchNormalizationDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_Eps, and NHWC.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckConstLayer | ) |
Definition at line 627 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddConstantLayer(), and Float32.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedConstLayer | ) |
Definition at line 641 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddConstantLayer(), and Float32.
armnn::BOOST_AUTO_TEST_CASE | ( | InputOutputLayerDynamicQuant | ) |
Definition at line 655 of file QuantizerTest.cpp.
References INetworkQuantizer::Create(), CreateNetworkWithInputOutputLayers(), IInputSlot::GetConnection(), TensorInfo::GetDataType(), GetDataTypeName(), IConnectableLayer::GetInputSlot(), GetInputTensorInfo(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), IOutputSlot::GetTensorInfo(), IConnectableLayer::GetType(), IgnoreUnused(), info, and Output.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckLstmLayerBasic | ) |
Definition at line 656 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedLstmLayerBasic | ) |
Definition at line 728 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeAbsActivation | ) |
Definition at line 729 of file QuantizerTest.cpp.
References Abs, CreateNetworkWithActivationLayer(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeArgMax | ) |
Definition at line 742 of file QuantizerTest.cpp.
References CreateNetworkWithArgMinMaxLayer(), ArgMinMaxDescriptor::m_Function, Max, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeLinearActivation | ) |
Definition at line 753 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), Linear, ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeReLuActivation | ) |
Definition at line 767 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, ReLu, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeSoftReLuActivation | ) |
Definition at line 780 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, SoftReLu, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeBoundedReluActivation | ) |
Definition at line 793 of file QuantizerTest.cpp.
References BoundedReLu, CreateNetworkWithActivationLayer(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckLstmLayerCifgDisabled | ) |
Definition at line 801 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToInputWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeTanHActivation | ) |
Definition at line 806 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, TanH, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeLeakyReLuActivation | ) |
Definition at line 819 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), LeakyReLu, ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeELuActivation | ) |
Definition at line 833 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), Elu, ActivationDescriptor::m_Function, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeHardSwishActivation | ) |
Definition at line 843 of file QuantizerTest.cpp.
References CreateNetworkWithActivationLayer(), HardSwish, ActivationDescriptor::m_Function, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeBatchNorm | ) |
Definition at line 855 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeDepthToSpace | ) |
Definition at line 890 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), NHWC, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedLstmLayerCifgDisabled | ) |
Definition at line 892 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToInputWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE | ( | OverrideInputRangeEmptyNetwork | ) |
Definition at line 914 of file QuantizerTest.cpp.
References ApplyStrategyToLayers(), NetworkImpl::GetGraph(), Graph::GetInputLayers(), and RangeTracker::IsEmpty().
armnn::BOOST_AUTO_TEST_CASE | ( | OverrideInputRangeNoInputLayers | ) |
Definition at line 928 of file QuantizerTest.cpp.
References NetworkImpl::AddAdditionLayer(), ApplyStrategyToLayers(), NetworkImpl::GetGraph(), Graph::GetInputLayers(), and RangeTracker::IsEmpty().
armnn::BOOST_AUTO_TEST_CASE | ( | OverrideInputRangeInputLayers | ) |
Definition at line 943 of file QuantizerTest.cpp.
References NetworkImpl::AddAdditionLayer(), NetworkImpl::AddInputLayer(), NetworkImpl::AddOutputLayer(), ApplyStrategyToLayers(), IOutputSlot::Connect(), Float32, NetworkImpl::GetGraph(), IConnectableLayer::GetGuid(), Graph::GetInputLayers(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), RangeTracker::GetRange(), RangeTracker::HasRanges(), info, RangeTracker::IsEmpty(), and IOutputSlot::SetTensorInfo().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckLstmLayerPeephole | ) |
Definition at line 985 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmInputParams::m_CellToForgetWeights, LstmInputParams::m_CellToOutputWeights, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmDescriptor::m_PeepholeEnabled, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeFill | ) |
Definition at line 1040 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), FillDescriptor::m_Value, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeFullyConnected | ) |
Definition at line 1063 of file QuantizerTest.cpp.
References ValidateFullyConnectedLayer().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeFullyConnectedBiasEnabled | ) |
Definition at line 1068 of file QuantizerTest.cpp.
References ValidateFullyConnectedLayer().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckLstmLayerPeepholeCifgDisabled | ) |
Definition at line 1071 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmInputParams::m_CellToForgetWeights, LstmInputParams::m_CellToInputWeights, LstmInputParams::m_CellToOutputWeights, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmDescriptor::m_PeepholeEnabled, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToInputWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeConvolution2d | ) |
Definition at line 1110 of file QuantizerTest.cpp.
References TestQuantizeConvolution2d().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeConvolution2dWithBiases | ) |
Definition at line 1115 of file QuantizerTest.cpp.
References TestQuantizeConvolution2d().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeDepthwiseConvolution2d | ) |
Definition at line 1157 of file QuantizerTest.cpp.
References TestQuantizeDepthwiseConvolution2d().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeDepthwiseConvolution2dWithBiases | ) |
Definition at line 1162 of file QuantizerTest.cpp.
References TestQuantizeDepthwiseConvolution2d().
armnn::BOOST_AUTO_TEST_CASE | ( | QuantizeInstanceNormalization | ) |
Definition at line 1167 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE | ( | CheckNamedLstmLayerPeephole | ) |
Definition at line 1185 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmInputParams::m_CellToForgetWeights, LstmInputParams::m_CellToOutputWeights, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmDescriptor::m_PeepholeEnabled, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE(QuantizeLogSoftmax)
Definition at line 1187 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), SoftmaxDescriptor::m_Beta, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeSoftmax)
Definition at line 1231 of file QuantizerTest.cpp.
References CreateNetworkWithSoftmaxLayer(), SoftmaxDescriptor::m_Beta, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeStandIn)
Definition at line 1242 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetworkQuantizer::Create(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), StandInDescriptor::m_NumInputs, StandInDescriptor::m_NumOutputs, QAsymmS8, QSymmS16, QSymmS8, and IOutputSlot::SetTensorInfo().
armnn::BOOST_AUTO_TEST_CASE(CheckLstmLayerProjection)
Definition at line 1273 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_ProjectionBias, LstmDescriptor::m_ProjectionEnabled, LstmInputParams::m_ProjectionWeights, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE(QuantizePermute)
Definition at line 1320 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeSpaceToBatch)
Definition at line 1338 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeSpaceToDepth)
Definition at line 1356 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(CheckNamedLstmLayerProjection)
Definition at line 1359 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddLstmLayer(), Float32, LstmDescriptor::m_ActivationFunc, LstmInputParams::m_CellBias, LstmDescriptor::m_CifgEnabled, LstmDescriptor::m_ClippingThresCell, LstmDescriptor::m_ClippingThresProj, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_ProjectionBias, LstmDescriptor::m_ProjectionEnabled, LstmInputParams::m_ProjectionWeights, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, and LstmInputParams::m_RecurrentToOutputWeights.
armnn::BOOST_AUTO_TEST_CASE(QuantizePooling2d)
Definition at line 1371 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, LeakyReLu, ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeConstant)
Definition at line 1403 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeArgMinMax)
Definition at line 1432 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), ArgMinMaxDescriptor::m_Function, Max, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(CheckQLstmLayerBasic)
Definition at line 1446 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, QLstmDescriptor::m_ProjectionClip, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(QuantizeComparison)
Definition at line 1464 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), LessOrEqual, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeConcat)
Definition at line 1488 of file QuantizerTest.cpp.
References Concat, IOutputSlot::Connect(), INetworkQuantizer::Create(), INetwork::Create(), Float32, g_AsymmU8QuantizationBase, g_SymmS16QuantizationBase, g_SymmS8QuantizationBase, IInputSlot::GetConnection(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), IOutputSlot::GetTensorInfo(), IConnectableLayer::GetType(), IgnoreUnused(), info, Input, Output, QSymmS16, QSymmS8, IOutputSlot::SetTensorInfo(), and VisitLayersTopologically().
armnn::BOOST_AUTO_TEST_CASE(CheckNamedQLstmLayerBasic)
Definition at line 1518 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, QLstmDescriptor::m_ProjectionClip, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(CheckQLstmLayerCifgDisabled)
Definition at line 1591 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, QLstmDescriptor::m_ProjectionClip, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToInputWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(QuantizeReshape)
Definition at line 1600 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeSplitter)
Definition at line 1618 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeResize)
Definition at line 1635 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, ResizeDescriptor::m_TargetHeight, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeStridedSlice)
Definition at line 1655 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(QuantizeBatchToSpace)
Definition at line 1673 of file QuantizerTest.cpp.
References CompleteLeakyReluNetwork(), INetwork::Create(), CreateStartOfLeakyReluNetwork(), Float32, info, and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(CheckQLstmLayerCifgDisabledPeepholeEnabled)
Definition at line 1686 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, LstmInputParams::m_CellToForgetWeights, LstmInputParams::m_CellToInputWeights, LstmInputParams::m_CellToOutputWeights, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, QLstmDescriptor::m_PeepholeEnabled, QLstmDescriptor::m_ProjectionClip, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToInputWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS16, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(QuantizePrelu)
Definition at line 1691 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetworkQuantizer::Create(), INetwork::Create(), Float32, g_AsymmS8QuantizationBase, g_AsymmU8QuantizationBase, g_SymmS16QuantizationBase, g_SymmS8QuantizationBase, IInputSlot::GetConnection(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), IOutputSlot::GetTensorInfo(), IConnectableLayer::GetType(), IgnoreUnused(), info, Input, Output, Prelu, QAsymmS8, QSymmS16, QSymmS8, IOutputSlot::SetTensorInfo(), and VisitLayersTopologically().
armnn::BOOST_AUTO_TEST_CASE(CheckQLstmLayerCifgEnabledPeepholeEnabled)
Definition at line 1803 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, LstmInputParams::m_CellToForgetWeights, LstmInputParams::m_CellToOutputWeights, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, QLstmDescriptor::m_PeepholeEnabled, QLstmDescriptor::m_ProjectionClip, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS16, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(QuantizeTransposeConvolution2d)
Definition at line 1854 of file QuantizerTest.cpp.
References TestQuantizeTransposeConvolution2d().
armnn::BOOST_AUTO_TEST_CASE(QuantizeTransposeConvolution2dWithBiases)
Definition at line 1859 of file QuantizerTest.cpp.
References TestQuantizeTransposeConvolution2d().
armnn::BOOST_AUTO_TEST_CASE(QuantizeStack)
Definition at line 1864 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetworkQuantizer::Create(), INetwork::Create(), g_AsymmS8QuantizationBase, g_AsymmU8QuantizationBase, g_SymmS16QuantizationBase, g_SymmS8QuantizationBase, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), IOutputSlot::GetTensorInfo(), IConnectableLayer::GetType(), IgnoreUnused(), Input, Output, QAsymmS8, QSymmS16, QSymmS8, Stack, and VisitLayersTopologically().
armnn::BOOST_AUTO_TEST_CASE(CheckQLstmLayerProjectionEnabled)
Definition at line 1893 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToOutputWeights, LstmInputParams::m_OutputGateBias, LstmInputParams::m_ProjectionBias, QLstmDescriptor::m_ProjectionClip, QLstmDescriptor::m_ProjectionEnabled, LstmInputParams::m_ProjectionWeights, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(QuantizeSlice)
Definition at line 1950 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, IOutputSlot::SetTensorInfo(), and TestNetwork().
armnn::BOOST_AUTO_TEST_CASE(CheckQLstmLayerCifgDisabledLayerNormEnabled)
Definition at line 1983 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQLstmLayer(), LstmInputParams::m_CellBias, QLstmDescriptor::m_CellClip, LstmInputParams::m_CellLayerNormWeights, QLstmDescriptor::m_CifgEnabled, LstmInputParams::m_ForgetGateBias, LstmInputParams::m_ForgetLayerNormWeights, LstmInputParams::m_InputGateBias, LstmInputParams::m_InputLayerNormWeights, LstmInputParams::m_InputToCellWeights, LstmInputParams::m_InputToForgetWeights, LstmInputParams::m_InputToInputWeights, LstmInputParams::m_InputToOutputWeights, QLstmDescriptor::m_LayerNormEnabled, LstmInputParams::m_OutputGateBias, LstmInputParams::m_OutputLayerNormWeights, QLstmDescriptor::m_ProjectionClip, LstmInputParams::m_RecurrentToCellWeights, LstmInputParams::m_RecurrentToForgetWeights, LstmInputParams::m_RecurrentToInputWeights, LstmInputParams::m_RecurrentToOutputWeights, QSymmS16, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(QuantizeInf)
Definition at line 1985 of file QuantizerTest.cpp.
References SetupQuantize().
armnn::BOOST_AUTO_TEST_CASE(QuantizeNegativeInf)
Definition at line 1990 of file QuantizerTest.cpp.
References Dequantize, IInputSlot::GetConnection(), TensorInfo::GetDataType(), GetDataTypeName(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetShape(), IOutputSlot::GetTensorInfo(), IConnectableLayer::GetType(), IgnoreUnused(), info, Input, Output, Quantize, and SetupQuantize().
armnn::BOOST_AUTO_TEST_CASE(PreserveTypeFloat32)
Definition at line 2095 of file QuantizerTest.cpp.
References Float32, and PreserveTypeTestImpl().
armnn::BOOST_AUTO_TEST_CASE(PreserveTypeQAsymmU8)
Definition at line 2100 of file QuantizerTest.cpp.
References PreserveTypeTestImpl(), and QAsymmU8.
armnn::BOOST_AUTO_TEST_CASE(PreserveTypeQsymm8)
Definition at line 2105 of file QuantizerTest.cpp.
References PreserveTypeTestImpl(), and QSymmS8.
armnn::BOOST_AUTO_TEST_CASE(CheckQuantizedLstmLayer)
Definition at line 2107 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQuantizedLstmLayer(), QuantizedLstmInputParams::m_CellBias, QuantizedLstmInputParams::m_ForgetGateBias, QuantizedLstmInputParams::m_InputGateBias, QuantizedLstmInputParams::m_InputToCellWeights, QuantizedLstmInputParams::m_InputToForgetWeights, QuantizedLstmInputParams::m_InputToInputWeights, QuantizedLstmInputParams::m_InputToOutputWeights, QuantizedLstmInputParams::m_OutputGateBias, QuantizedLstmInputParams::m_RecurrentToCellWeights, QuantizedLstmInputParams::m_RecurrentToForgetWeights, QuantizedLstmInputParams::m_RecurrentToInputWeights, QuantizedLstmInputParams::m_RecurrentToOutputWeights, QSymmS8, and Signed32.
armnn::BOOST_AUTO_TEST_CASE(PreserveTypeQsymm16)
Definition at line 2110 of file QuantizerTest.cpp.
References PreserveTypeTestImpl(), and QSymmS16.
armnn::BOOST_AUTO_TEST_CASE(TestConnectionPreservationAfterDynamicQuant)
Definition at line 2115 of file QuantizerTest.cpp.
References Addition, BOOST_AUTO_TEST_SUITE_END(), IOutputSlot::Connect(), INetworkQuantizer::Create(), Float32, IInputSlot::GetConnection(), IConnectableLayer::GetGuid(), IConnectableLayer::GetInputSlot(), GetInputTensorInfo(), IConnectableLayer::GetName(), IConnectableLayer::GetOutputSlot(), IOutputSlot::GetOwningLayerGuid(), IConnectableLayer::GetType(), IgnoreUnused(), ActivationDescriptor::m_Function, ReLu, IOutputSlot::SetTensorInfo(), TestNetwork(), and VisitLayersTopologically().
armnn::BOOST_AUTO_TEST_CASE(CheckNamedQuantizedLstmLayer)
Definition at line 2196 of file ConstTensorLayerVisitor.cpp.
References IConnectableLayer::Accept(), NetworkImpl::AddQuantizedLstmLayer(), BOOST_AUTO_TEST_SUITE_END(), QuantizedLstmInputParams::m_CellBias, QuantizedLstmInputParams::m_ForgetGateBias, QuantizedLstmInputParams::m_InputGateBias, QuantizedLstmInputParams::m_InputToCellWeights, QuantizedLstmInputParams::m_InputToForgetWeights, QuantizedLstmInputParams::m_InputToInputWeights, QuantizedLstmInputParams::m_InputToOutputWeights, QuantizedLstmInputParams::m_OutputGateBias, QuantizedLstmInputParams::m_RecurrentToCellWeights, QuantizedLstmInputParams::m_RecurrentToForgetWeights, QuantizedLstmInputParams::m_RecurrentToInputWeights, QuantizedLstmInputParams::m_RecurrentToOutputWeights, QAsymmU8, and Signed32.
std::ostream& armnn::boost_test_print_type(std::ostream& ostr, const TensorInfo& right)
Definition at line 14 of file TensorTest.cpp.
References TensorInfo::GetNumDimensions(), and TensorInfo::GetShape().
std::ostream& armnn::boost_test_print_type(std::ostream& ostr, const TensorShape& shape)
Definition at line 26 of file TensorTest.cpp.
References BOOST_AUTO_TEST_SUITE(), and TensorShape::GetNumDimensions().
int armnn::CalcLevel(const Event* eventPtr)
Definition at line 235 of file Profiling.cpp.
References Event::GetParentEvent().
Referenced by ProfilerImpl::AnalyzeEventsAndWriteResults(), and ProfilerImpl::PopulateInferences().
EdgeStrategy armnn::CalculateEdgeStrategy(BackendsMap& backends, ITensorHandleFactory::FactoryId srcFactoryId, const Layer& layer, const Layer& connectedLayer, TensorHandleFactoryRegistry& registry, bool importEnabled)
Definition at line 1348 of file Network.cpp.
References ARMNN_ASSERT_MSG, CopyToTarget, DirectCompatibility, ExportToTarget, Layer::GetBackendId(), ITensorHandleFactory::GetCapabilities(), ITensorHandleFactory::GetExportFlags(), TensorHandleFactoryRegistry::GetFactory(), ITensorHandleFactory::GetImportFlags(), Layer::GetType(), ITensorHandleFactory::LegacyFactoryId, Output, PaddingRequired, ITensorHandleFactory::SupportsMapUnmap(), and Undefined.
Referenced by SelectTensorHandleStrategy().
ITensorHandleFactory::FactoryId armnn::CalculateSlotOption(BackendsMap& backends, OutputSlot& outputSlot, TensorHandleFactoryRegistry& registry)
Definition at line 1239 of file Network.cpp.
References ARMNN_ASSERT_MSG, Layer::GetBackendId(), OutputSlot::GetConnections(), TensorHandleFactoryRegistry::GetFactory(), IBackendInternal::GetHandleFactoryPreferences(), OutputSlot::GetOwningLayer(), Layer::GetType(), ITensorHandleFactory::LegacyFactoryId, Output, RequiresCopy(), and ITensorHandleFactory::SupportsMapUnmap().
Referenced by SelectTensorHandleStrategy().
ITensorHandleFactory::FactoryId armnn::CalculateSlotOptionForInput(BackendsMap& backends, OutputSlot& slot, TensorHandleFactoryRegistry& registry)
Definition at line 1147 of file Network.cpp.
References ARMNN_ASSERT, ARMNN_ASSERT_MSG, CheckFlag(), Layer::GetBackendId(), OutputSlot::GetConnections(), TensorHandleFactoryRegistry::GetFactory(), ITensorHandleFactory::GetImportFlags(), OutputSlot::GetOwningLayer(), Layer::GetType(), Input, ITensorHandleFactory::LegacyFactoryId, Malloc, and ITensorHandleFactory::SupportsMapUnmap().
Referenced by SelectTensorHandleStrategy().
ITensorHandleFactory::FactoryId armnn::CalculateSlotOptionForOutput(BackendsMap& backends, OutputSlot& slot, TensorHandleFactoryRegistry& registry)
Definition at line 1229 of file Network.cpp.
References ITensorHandleFactory::DeferredFactoryId, and IgnoreUnused().
Referenced by SelectTensorHandleStrategy().
bool armnn::CheckFlag(MemorySourceFlags flags, MemorySource source)
inline
Definition at line 47 of file MemorySources.hpp.
Referenced by CalculateSlotOptionForInput(), and LoadedNetwork::EnqueueWorkload().
void armnn::CheckLayerBindingId(LayerBindingId visitorId, LayerBindingId id)
Definition at line 13 of file TestInputOutputLayerVisitor.hpp.
Referenced by TestInputLayerVisitor::VisitInputLayer(), and TestOutputLayerVisitor::VisitOutputLayer().
bool armnn::CheckScaleSetOnQuantizedType(Layer* layer, Optional<std::vector<std::string>&> errMessages)
Definition at line 602 of file Network.cpp.
References ARMNN_LOG, TensorInfo::GetDataType(), GetLayerTypeAsCString(), Layer::GetNameStr(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), OutputSlot::GetTensorInfo(), Layer::GetType(), info, QAsymmU8, ReportError(), TensorInfo::SetQuantizationOffset(), TensorInfo::SetQuantizationScale(), OutputSlot::SetTensorInfo(), Softmax, and warning.
Referenced by AssignBackends().
bool armnn::CheckSupportRule(F rule, Optional<std::string&> reasonIfUnsupported, const char* reason)
Definition at line 38 of file LayerSupportRules.hpp.
References OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by RefLayerSupport::IsActivationSupported(), RefLayerSupport::IsAdditionSupported(), RefLayerSupport::IsArgMinMaxSupported(), RefLayerSupport::IsBatchNormalizationSupported(), RefLayerSupport::IsBatchToSpaceNdSupported(), RefLayerSupport::IsComparisonSupported(), RefLayerSupport::IsConcatSupported(), RefLayerSupport::IsConstantSupported(), RefLayerSupport::IsConvertBf16ToFp32Supported(), RefLayerSupport::IsConvertFp32ToBf16Supported(), RefLayerSupport::IsConvolution2dSupported(), RefLayerSupport::IsDebugSupported(), RefLayerSupport::IsDepthToSpaceSupported(), RefLayerSupport::IsDepthwiseConvolutionSupported(), RefLayerSupport::IsDequantizeSupported(), RefLayerSupport::IsDetectionPostProcessSupported(), RefLayerSupport::IsDivisionSupported(), RefLayerSupport::IsElementwiseUnarySupported(), RefLayerSupport::IsFakeQuantizationSupported(), RefLayerSupport::IsFillSupported(), RefLayerSupport::IsFloorSupported(), RefLayerSupport::IsFullyConnectedSupported(), RefLayerSupport::IsGatherSupported(), RefLayerSupport::IsInstanceNormalizationSupported(), RefLayerSupport::IsL2NormalizationSupported(), RefLayerSupport::IsLogicalBinarySupported(), RefLayerSupport::IsLogSoftmaxSupported(), RefLayerSupport::IsLstmSupported(), RefLayerSupport::IsMaximumSupported(), RefLayerSupport::IsMeanSupported(), RefLayerSupport::IsMemCopySupported(), RefLayerSupport::IsMinimumSupported(), RefLayerSupport::IsMultiplicationSupported(), RefLayerSupport::IsNormalizationSupported(), RefLayerSupport::IsPadSupported(), RefLayerSupport::IsPermuteSupported(), RefLayerSupport::IsPooling2dSupported(), RefLayerSupport::IsPreluSupported(), RefLayerSupport::IsQuantizeSupported(), RefLayerSupport::IsRankSupported(), RefLayerSupport::IsReduceSupported(), RefLayerSupport::IsReshapeSupported(), RefLayerSupport::IsResizeBilinearSupported(), RefLayerSupport::IsResizeSupported(), RefLayerSupport::IsSliceSupported(), RefLayerSupport::IsSoftmaxSupported(), RefLayerSupport::IsSpaceToBatchNdSupported(), 
RefLayerSupport::IsSpaceToDepthSupported(), RefLayerSupport::IsSplitterSupported(), RefLayerSupport::IsStackSupported(), RefLayerSupport::IsStridedSliceSupported(), RefLayerSupport::IsSubtractionSupported(), RefLayerSupport::IsTransposeConvolution2dSupported(), and RefLayerSupport::IsTransposeSupported().
bool armnn::CheckTensorDataTypesEqual(const TensorInfo& input0, const TensorInfo& input1)
Definition at line 64 of file LayerSupport.cpp.
References TensorInfo::GetDataType().
Referenced by IsAdditionSupported().
arm_compute::Status ClAbsWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 19 of file ClAbsWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClActivationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ActivationDescriptor& descriptor)
Definition at line 17 of file ClActivationWorkload.cpp.
Referenced by ClLayerSupport::IsActivationSupported().
arm_compute::Status ClAdditionValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 43 of file ClAdditionWorkload.cpp.
Referenced by ClLayerSupport::IsAdditionSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClArgMinMaxWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ArgMinMaxDescriptor& descriptor)
Definition at line 31 of file ClArgMinMaxWorkload.cpp.
Referenced by ClLayerSupport::IsArgMinMaxSupported().
constexpr const char* armnn::ClBackendId()
Definition at line 10 of file ClBackendId.hpp.
Referenced by ClBackend::GetIdStatic().
arm_compute::Status ClBatchNormalizationValidate(const TensorInfo& input, const TensorInfo& output, const TensorInfo& mean, const TensorInfo& var, const TensorInfo& beta, const TensorInfo& gamma, const BatchNormalizationDescriptor& desc, const ActivationDescriptor* activationDescriptor)
Definition at line 19 of file ClBatchNormalizationFloatWorkload.cpp.
Referenced by ClLayerSupport::IsBatchNormalizationSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClBatchToSpaceNdWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const BatchToSpaceNdDescriptor& desc)
Definition at line 48 of file ClBatchToSpaceNdWorkload.cpp.
References BatchToSpaceNdDescriptor::m_DataLayout.
Referenced by ClLayerSupport::IsBatchToSpaceNdSupported().
arm_compute::Status ClComparisonWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ComparisonDescriptor& descriptor)
Definition at line 24 of file ClComparisonWorkload.cpp.
Referenced by ClLayerSupport::IsComparisonSupported().
arm_compute::Status ClConcatWorkloadValidate(const std::vector<const TensorInfo*>& inputs, const TensorInfo& output, const OriginsDescriptor& descriptor)
Definition at line 27 of file ClConcatWorkload.cpp.
Referenced by ClLayerSupport::IsConcatSupported().
arm_compute::Status ClConstantWorkloadValidate(const TensorInfo& output)
Definition at line 18 of file ClConstantWorkload.cpp.
Referenced by ClLayerSupport::IsConstantSupported().
bool armnn::ClContextBufferHasIdentifier(const void* buf)
inline
Definition at line 152 of file ClContextSchema_generated.h.
References ClContextIdentifier().
const char* armnn::ClContextExtension()
inline
Definition at line 167 of file ClContextSchema_generated.h.
const char* armnn::ClContextIdentifier()
inline
Definition at line 148 of file ClContextSchema_generated.h.
Referenced by ClContextBufferHasIdentifier(), FinishClContextBuffer(), FinishSizePrefixedClContextBuffer(), VerifyClContextBuffer(), and VerifySizePrefixedClContextBuffer().
arm_compute::Status ClConvertFp16ToFp32WorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 37 of file ClConvertFp16ToFp32Workload.cpp.
References Float16, Float32, and TensorInfo::GetDataType().
Referenced by ClLayerSupport::IsConvertFp16ToFp32Supported().
arm_compute::Status ClConvertFp32ToFp16WorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 37 of file ClConvertFp32ToFp16Workload.cpp.
References Float16, Float32, and TensorInfo::GetDataType().
Referenced by ClLayerSupport::IsConvertFp32ToFp16Supported().
arm_compute::Status ClConvolution2dWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const Convolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, bool isFastMathEnabled, const ActivationDescriptor* activationDescriptor)
Definition at line 23 of file ClConvolution2dWorkload.cpp.
Referenced by ClLayerSupport::IsConvolution2dSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClDepthToSpaceWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const DepthToSpaceDescriptor& desc)
Definition at line 22 of file ClDepthToSpaceWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by ClLayerSupport::IsDepthToSpaceSupported().
arm_compute::Status ClDepthwiseConvolutionWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const DepthwiseConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, const ActivationDescriptor* activationDescriptor)
Definition at line 26 of file ClDepthwiseConvolutionWorkload.cpp.
Referenced by ClLayerSupport::IsDepthwiseConvolutionSupported(), ClLayerSupport::IsDilatedDepthwiseConvolutionSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClDequantizeWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 22 of file ClDequantizeWorkload.cpp.
Referenced by ClLayerSupport::IsDequantizeSupported().
arm_compute::Status ClDivisionWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 18 of file ClDivisionFloatWorkload.cpp.
Referenced by ClLayerSupport::IsDivisionSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClExpWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 18 of file ClExpWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClFloorWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 14 of file ClFloorFloatWorkload.cpp.
Referenced by ClLayerSupport::IsFloorSupported().
arm_compute::Status ClFullyConnectedWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const TensorInfo& weights, const TensorInfo& biases, const FullyConnectedDescriptor& descriptor, const ActivationDescriptor* activationDescriptor)
Definition at line 19 of file ClFullyConnectedWorkload.cpp.
Referenced by ClLayerSupport::IsFullyConnectedSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClGatherWorkloadValidate(const TensorInfo& input, const TensorInfo& indices, const TensorInfo& output, const GatherDescriptor& descriptor)
Definition at line 15 of file ClGatherWorkload.cpp.
Referenced by ClLayerSupport::IsGatherSupported().
arm_compute::Status ClInstanceNormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const InstanceNormalizationDescriptor& descriptor)
Definition at line 18 of file ClInstanceNormalizationWorkload.cpp.
Referenced by ClLayerSupport::IsInstanceNormalizationSupported().
arm_compute::Status ClL2NormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const L2NormalizationDescriptor& descriptor)
Definition at line 17 of file ClL2NormalizationFloatWorkload.cpp.
Referenced by ClLayerSupport::IsL2NormalizationSupported().
arm_compute::Status ClLogicalAndWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 20 of file ClLogicalAndWorkload.cpp.
Referenced by ClLayerSupport::IsLogicalBinarySupported().
arm_compute::Status ClLogicalNotWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 20 of file ClLogicalNotWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClLogicalOrWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 20 of file ClLogicalOrWorkload.cpp.
Referenced by ClLayerSupport::IsLogicalBinarySupported().
arm_compute::Status ClLogSoftmaxWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const LogSoftmaxDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClLogSoftmaxWorkload.cpp.
Referenced by ClLayerSupport::IsLogSoftmaxSupported().
arm_compute::Status ClLstmFloatWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | scratchBuffer, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const LstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 261 of file ClLstmFloatWorkload.cpp.
Referenced by ClLayerSupport::IsLstmSupported().
arm_compute::Status ClMaximumWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 24 of file ClMaximumWorkload.cpp.
Referenced by ClLayerSupport::IsMaximumSupported().
arm_compute::Status ClMeanValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const MeanDescriptor & | desc | ||
) |
Definition at line 17 of file ClMeanWorkload.cpp.
Referenced by ClLayerSupport::IsMeanSupported().
arm_compute::Status ClMinimumWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output | ||
) |
Definition at line 24 of file ClMinimumWorkload.cpp.
Referenced by ClLayerSupport::IsMinimumSupported().
arm_compute::Status ClMultiplicationWorkloadValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 18 of file ClMultiplicationWorkload.cpp.
Referenced by ClLayerSupport::IsMultiplicationSupported(), and ClBackend::OptimizeSubgraphView().
arm_compute::Status ClNegWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClNegWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClNormalizationWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const NormalizationDescriptor & | descriptor | ||
) |
Definition at line 19 of file ClNormalizationFloatWorkload.cpp.
Referenced by ClLayerSupport::IsNormalizationSupported().
arm_compute::Status ClPadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const PadDescriptor & | descriptor | ||
) |
Definition at line 47 of file ClPadWorkload.cpp.
Referenced by ClLayerSupport::IsPadSupported().
arm_compute::Status ClPermuteWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const PermuteDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClPermuteWorkload.cpp.
Referenced by ClLayerSupport::IsPermuteSupported().
arm_compute::Status ClPooling2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const Pooling2dDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClPooling2dWorkload.cpp.
Referenced by ClLayerSupport::IsPooling2dSupported().
arm_compute::Status ClPreluWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | alpha, | ||
const TensorInfo & | output | ||
) |
Definition at line 16 of file ClPreluWorkload.cpp.
Referenced by ClLayerSupport::IsPreluSupported().
arm_compute::Status ClQLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | cellStateIn, | ||
const TensorInfo & | outputStateIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | outputStateOut, | ||
const TensorInfo & | output, | ||
const QLstmDescriptor & | descriptor, | ||
const LstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 237 of file ClQLstmWorkload.cpp.
Referenced by ClLayerSupport::IsQLstmSupported().
arm_compute::Status ClQuantizedLstmWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | previousCellStateIn, | ||
const TensorInfo & | previousOutputIn, | ||
const TensorInfo & | cellStateOut, | ||
const TensorInfo & | output, | ||
const QuantizedLstmInputParamsInfo & | paramsInfo | ||
) |
Definition at line 18 of file ClQuantizedLstmWorkload.cpp.
Referenced by ClLayerSupport::IsQuantizedLstmSupported().
arm_compute::Status ClQuantizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 22 of file ClQuantizeWorkload.cpp.
Referenced by ClLayerSupport::IsQuantizeSupported().
arm_compute::Status ClReduceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ReduceDescriptor & | desc | ||
) |
Definition at line 18 of file ClReduceWorkload.cpp.
Referenced by ClLayerSupport::IsReduceSupported().
arm_compute::Status ClReshapeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 15 of file ClReshapeWorkload.cpp.
Referenced by ClLayerSupport::IsReshapeSupported().
arm_compute::Status ClResizeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const ResizeDescriptor & | descriptor | ||
) |
Definition at line 22 of file ClResizeWorkload.cpp.
Referenced by ClLayerSupport::IsResizeSupported().
arm_compute::Status ClRsqrtWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output | ||
) |
Definition at line 18 of file ClRsqrtWorkload.cpp.
Referenced by ClLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status ClSliceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SliceDescriptor & | descriptor | ||
) |
Definition at line 18 of file ClSliceWorkload.cpp.
Referenced by ClLayerSupport::IsSliceSupported().
arm_compute::Status ClSoftmaxWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SoftmaxDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClSoftmaxWorkload.cpp.
Referenced by ClLayerSupport::IsSoftmaxSupported().
arm_compute::Status ClSpaceToBatchNdWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SpaceToBatchNdDescriptor & | descriptor | ||
) |
Definition at line 23 of file ClSpaceToBatchNdWorkload.cpp.
Referenced by ClLayerSupport::IsSpaceToBatchNdSupported().
arm_compute::Status ClSpaceToDepthWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const SpaceToDepthDescriptor & | desc | ||
) |
Definition at line 46 of file ClSpaceToDepthWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by ClLayerSupport::IsSpaceToDepthSupported().
arm_compute::Status ClSplitterWorkloadValidate | ( | const TensorInfo & | input, |
const std::vector< std::reference_wrapper< TensorInfo >> & | outputs, | ||
unsigned int | splitAxis | ||
) |
Definition at line 31 of file ClSplitterWorkload.cpp.
Referenced by ClLayerSupport::IsSplitterSupported().
arm_compute::Status ClStackWorkloadValidate | ( | const std::vector< const TensorInfo *> & | inputs, |
const TensorInfo & | output, | ||
const StackDescriptor & | descriptor | ||
) |
Definition at line 29 of file ClStackWorkload.cpp.
Referenced by ClLayerSupport::IsStackSupported().
arm_compute::Status ClStridedSliceWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const StridedSliceDescriptor & | descriptor | ||
) |
Definition at line 27 of file ClStridedSliceWorkload.cpp.
Referenced by ClLayerSupport::IsStridedSliceSupported().
arm_compute::Status ClSubtractionValidate | ( | const TensorInfo & | input0, |
const TensorInfo & | input1, | ||
const TensorInfo & | output, | ||
const ActivationDescriptor * | activationDescriptor | ||
) |
Definition at line 43 of file ClSubtractionWorkload.cpp.
Referenced by ClLayerSupport::IsSubtractionSupported(), and ClBackend::OptimizeSubgraphView().
constexpr const char* armnn::ClTensorHandleFactoryId | ( | ) |
Definition at line 15 of file ClTensorHandleFactory.hpp.
Referenced by ClTensorHandleFactory::GetIdStatic().
arm_compute::Status ClTransposeConvolution2dWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TransposeConvolution2dDescriptor & | descriptor, | ||
const TensorInfo & | weights, | ||
const Optional< TensorInfo > & | biases | ||
) |
Definition at line 26 of file ClTransposeConvolution2dWorkload.cpp.
Referenced by ClLayerSupport::IsTransposeConvolution2dSupported().
arm_compute::Status ClTransposeWorkloadValidate | ( | const TensorInfo & | input, |
const TensorInfo & | output, | ||
const TransposeDescriptor & | descriptor | ||
) |
Definition at line 17 of file ClTransposeWorkload.cpp.
Referenced by ClLayerSupport::IsTransposeSupported().
MemorySourceFlags armnn::Combine | ( | Arg | sourceA, |
Arg | sourceB | ||
) |
MemorySourceFlags armnn::Combine | ( | Arg | source, |
Args... | rest | ||
) |
bool armnn::CompatibleTypes | ( | DataType | ) |
Definition at line 17 of file CompatibleTypes.hpp.
inline
Definition at line 35 of file CompatibleTypes.hpp.
References BFloat16.
inline
Definition at line 23 of file CompatibleTypes.hpp.
References Float32.
inline
Definition at line 29 of file CompatibleTypes.hpp.
References Float16.
inline
Definition at line 57 of file CompatibleTypes.hpp.
References QSymmS16.
inline
Definition at line 63 of file CompatibleTypes.hpp.
References Signed32.
inline
Definition at line 47 of file CompatibleTypes.hpp.
References ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, QAsymmS8, QSymmS8, and QuantizedSymm8PerAxis.
inline
Definition at line 41 of file CompatibleTypes.hpp.
void armnn::CompleteLeakyReluNetwork | ( | INetwork * | network, |
IConnectableLayer * | activation, | ||
IConnectableLayer * | layerUnderTest, | ||
const TensorInfo & | info | ||
) |
Definition at line 1304 of file QuantizerTest.cpp.
References INetwork::AddOutputLayer(), IOutputSlot::Connect(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), and IOutputSlot::SetTensorInfo().
Referenced by BOOST_AUTO_TEST_CASE().
inline
Function to convert an ArmNN axis (counted left to right) to an ACL axis (counted right to left), accepting axes in the range [-rank, rank).
Definition at line 230 of file ArmComputeUtils.hpp.
References ARMNN_ASSERT, and TensorInfo::GetNumDimensions().
Referenced by ClGatherWorkload::ClGatherWorkload(), ClLogSoftmaxWorkload::ClLogSoftmaxWorkload(), ClSoftmaxWorkload::ClSoftmaxWorkload(), NeonGatherWorkload::NeonGatherWorkload(), NeonLogSoftmaxWorkload::NeonLogSoftmaxWorkload(), and NeonSoftmaxWorkload::NeonSoftmaxWorkload().
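As a hedged illustration of this mapping, the sketch below converts an axis between the two conventions. The helper name and the plain-int signature are assumptions for illustration; the real ArmNN function takes a TensorInfo and lives in ArmComputeUtils.hpp:

```cpp
#include <cassert>

// Sketch (not the ArmNN implementation): map an ArmNN axis, which counts
// dimensions left to right and may be negative in [-rank, rank), onto an
// ACL axis, which counts dimensions right to left.
int ComputeAclAxisSketch(int armnnAxis, int rank)
{
    // First wrap a negative axis into [0, rank).
    int positiveAxis = (armnnAxis < 0) ? armnnAxis + rank : armnnAxis;
    // Then reverse direction: dimension 0 on the ArmNN side is
    // dimension rank-1 on the ACL side.
    return rank - 1 - positiveAxis;
}
```

For a rank-4 tensor, ArmNN axis 0 becomes ACL axis 3, and ArmNN axis -1 (the innermost dimension) becomes ACL axis 0.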
inline
Function to convert axis to its positive equivalent value.
[-rank, rank) -> [0, rank)
Definition at line 246 of file ArmComputeUtils.hpp.
References ARMNN_ASSERT, and TensorInfo::GetNumDimensions().
inline
Definition at line 191 of file ArmComputeUtils.hpp.
References ARMNN_ASSERT, TensorInfo::GetNumDimensions(), and SoftmaxDescriptor::m_Axis.
inline
Definition at line 210 of file ArmComputeUtils.hpp.
References ViewsDescriptor::GetNumDimensions(), ViewsDescriptor::GetNumViews(), and ViewsDescriptor::GetViewSizes().
Referenced by ClSplitterWorkload::ClSplitterWorkload(), SplitterLayer::CreateWorkload(), ClLayerSupport::IsSplitterSupported(), NeonLayerSupport::IsSplitterSupported(), and NeonSplitterWorkload::NeonSplitterWorkload().
void Concatenate | ( | const ConcatQueueDescriptor & | data | ) |
Definition at line 14 of file Concatenate.cpp.
References ARMNN_ASSERT, TensorInfo::GetNumDimensions(), TensorInfo::GetShape(), GetTensorInfo(), QueueDescriptor::m_Inputs, ConcatQueueDescriptor::ViewOrigin::m_Origin, QueueDescriptor::m_Outputs, ConcatQueueDescriptor::m_ViewOrigins, and MaxNumOfTensorDimensions.
Referenced by RefConcatWorkload::Execute().
void armnn::ConditionalThrow | ( | bool | condition, |
const std::string & | message | ||
) |
Definition at line 159 of file Exceptions.hpp.
void armnn::ConditionalThrow | ( | bool | condition | ) |
Definition at line 168 of file Exceptions.hpp.
void armnn::ConditionalThrowIfNotEqual | ( | const std::string & | message, |
const ComparedType & | leftHandSide, | ||
const ComparedType & | rightHandSide | ||
) |
ComparedType must support operator==(const ComparedType&) and operator<<(ostream&, const ComparedType&).
Definition at line 183 of file Exceptions.hpp.
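A hedged sketch of that contract follows: the compared type only needs equality comparison and stream insertion. The helper name is hypothetical, and std::runtime_error stands in for ArmNN's own exception hierarchy:

```cpp
#include <cassert>
#include <sstream>
#include <stdexcept>
#include <string>

// Sketch: throw with a formatted message when two values differ.
// ComparedType only needs operator== and operator<<.
template <typename ExceptionType = std::runtime_error, typename ComparedType>
void ConditionalThrowIfNotEqualSketch(const std::string& message,
                                      const ComparedType& lhs,
                                      const ComparedType& rhs)
{
    if (!(lhs == rhs))
    {
        std::ostringstream oss;
        oss << message << ": " << lhs << " != " << rhs;
        throw ExceptionType(oss.str());
    }
}
```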
void ConfigureLogging | ( | bool | printToStandardOutput, |
bool | printToDebugOutput, | ||
LogSeverity | severity | ||
) |
Configures the logging behaviour of the ARMNN library.
printToStandardOutput: Set to true if log messages should be printed to the standard output.
printToDebugOutput: Set to true if log messages should be printed to a platform-specific debug output (where supported).
severity: All log messages at this severity level or higher will be printed; others will be ignored.
Definition at line 18 of file Utils.cpp.
References SetAllLoggingSinks(), SetLogFilter(), and Trace.
Referenced by ConfigureLoggingTest(), armnn::test::InferenceTestMain(), LogLevelSwapper::LogLevelSwapper(), main(), and LogLevelSwapper::~LogLevelSwapper().
void armnn::ConfigureTuner | ( | arm_compute::CLTuner & | tuner, |
TuningLevel | level | ||
) |
Definition at line 115 of file ClBackendContext.cpp.
References ARMNN_LOG, Exhaustive, info, None, Normal, and Rapid.
Referenced by ClBackendContext::ClBackendContext().
inline
Definition at line 75 of file ArmComputeUtils.hpp.
References ConvertActivationFunctionToAclActivationFunction(), ActivationDescriptor::m_A, ActivationDescriptor::m_B, and ActivationDescriptor::m_Function.
Referenced by ClActivationWorkload::ClActivationWorkload(), ConvertActivationDescriptorToAclActivationLayerInfo(), ConvertAdditionalInfoToAclActivationLayerInfo(), ConvertFullyConnectedDescriptorToAclFullyConnectedLayerInfo(), and NeonActivationWorkload::NeonActivationWorkload().
inline
Definition at line 82 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo().
inline
Definition at line 51 of file ArmComputeUtils.hpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by ConvertActivationDescriptorToAclActivationLayerInfo().
inline
Definition at line 93 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo(), and QueueDescriptor::GetAdditionalInformation().
Referenced by ClAdditionWorkload::ClAdditionWorkload(), ClDivisionFloatWorkload::ClDivisionFloatWorkload(), ClMultiplicationWorkload::ClMultiplicationWorkload(), ClSubtractionWorkload::ClSubtractionWorkload(), NeonAdditionWorkload::NeonAdditionWorkload(), NeonDivisionWorkload::NeonDivisionWorkload(), NeonMultiplicationWorkload::NeonMultiplicationWorkload(), and NeonSubtractionWorkload::NeonSubtractionWorkload().
LayerT* armnn::ConvertBf16ToFp32Weight | ( | Layer * | l | ) |
Definition at line 638 of file Network.cpp.
References BFloat16, FloatingPointConverter::ConvertBFloat16ToFloat32(), Convolution2d, Float32, FullyConnected, TensorInfo::GetDataType(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), and info.
inline
Definition at line 105 of file ArmComputeUtils.hpp.
References Equal, Greater, GreaterOrEqual, Less, LessOrEqual, ComparisonDescriptor::m_Operation, and NotEqual.
Referenced by ClComparisonWorkload::ClComparisonWorkload(), and NeonComparisonWorkload::NeonComparisonWorkload().
inline
Definition at line 158 of file ArmComputeUtils.hpp.
References ConvertActivationDescriptorToAclActivationLayerInfo(), and FullyConnectedDescriptor::m_TransposeWeightMatrix.
inline
Definition at line 168 of file ArmComputeUtils.hpp.
References FullyConnectedDescriptor::m_TransposeWeightMatrix.
constexpr LogSeverity armnn::ConvertLogSeverity | ( | BoostLogSeverityMapping | severity | ) |
Definition at line 196 of file Logging.hpp.
int32_t ConvertMaskToACLFormat | ( | int32_t | mask, |
int32_t | numDim | ||
) |
Definition at line 193 of file WorkloadUtils.cpp.
Referenced by ClStridedSliceWorkload::ClStridedSliceWorkload(), GatherTensorHandlePairs(), and NeonStridedSliceWorkload::NeonStridedSliceWorkload().
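Since ACL enumerates dimensions in the reverse order to ArmNN, a strided-slice bit mask (begin/end/shrink-axis mask) has to have its bits mirrored within the first numDim positions. One plausible sketch of that mirroring, under a hypothetical helper name (the real implementation is in WorkloadUtils.cpp):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: mirror the low numDim bits of a strided-slice mask so that
// bit i (ArmNN dimension i) becomes bit numDim-1-i (ACL dimension).
int32_t ConvertMaskToAclFormatSketch(int32_t mask, int32_t numDim)
{
    int32_t reversed = 0;
    for (int32_t i = 0; i < numDim; ++i)
    {
        if (mask & (1 << i))
        {
            reversed |= 1 << (numDim - 1 - i);  // mirror bit i
        }
    }
    return reversed;
}
```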
inline
Definition at line 258 of file ArmComputeUtils.hpp.
References ReduceDescriptor::m_ReduceOperation, Max, Mean, Min, and Sum.
inline
armnn::ConstTensor ConvertWeightTensorFromArmnnToAcl | ( | const ConstCpuTensorHandle * | weightTensor, |
DataLayout | dataLayout, | ||
void * | permuteBuffer | ||
) |
Definition at line 133 of file WorkloadUtils.cpp.
References ARMNN_ASSERT_MSG, ARMNN_FALLTHROUGH, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, Float16, Float32, BaseTensor< MemoryType >::GetDataType(), BaseTensor< MemoryType >::GetInfo(), TensorInfo::GetShape(), ConstCpuTensorHandle::GetTensorInfo(), NCHW, NHWC, PermuteTensor(), QAsymmS8, QAsymmU8, QSymmS8, QuantizedSymm8PerAxis, and ReshapeWeightsForAcl().
Referenced by ClDepthwiseConvolutionWorkload::ClDepthwiseConvolutionWorkload(), GatherTensorHandlePairs(), and NeonDepthwiseConvolutionWorkload::NeonDepthwiseConvolutionWorkload().
TensorInfo ConvertWeightTensorInfoFromArmnnToAcl | ( | const TensorInfo & | weightInfo, |
DataLayout | dataLayout | ||
) |
Definition at line 110 of file WorkloadUtils.cpp.
References NHWC, armnnUtils::Permuted(), and ReshapeWeightsForAcl().
Referenced by GatherTensorHandlePairs().
void Convolve | ( | const TensorShape & | rInputShape, |
Decoder< float > & | rInputDecoder, | ||
const TensorShape & | rOutputShape, | ||
Encoder< float > & | rOutputEncoder, | ||
const TensorShape & | rFilterShape, | ||
Decoder< float > & | rFilterDecoder, | ||
bool | biasEnabled, | ||
Decoder< float > * | pBiasDecoder, | ||
DataLayout | dataLayout, | ||
unsigned int | paddingTop, | ||
unsigned int | paddingLeft, | ||
unsigned int | xStride, | ||
unsigned int | yStride, | ||
unsigned int | xDilation, | ||
unsigned int | yDilation, | ||
bool | depthwise | ||
) |
Definition at line 71 of file ConvImpl.cpp.
References Decoder< IType >::DecodeTensor(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), NHWC, and Encoder< IType >::Set().
Referenced by RefDepthwiseConvolution2dWorkload::Execute(), and RefConvolution2dWorkload::Execute().
void armnn::CopyArmComputeClTensorData | ( | arm_compute::CLTensor & | dstTensor, |
const T * | srcData | ||
) |
Definition at line 30 of file ClWorkloadUtils.hpp.
References ARMNN_SCOPED_PROFILING_EVENT_CL.
Referenced by ClConstantWorkload::Execute().
void armnn::CopyArmComputeTensorData | ( | arm_compute::Tensor & | dstTensor, |
const T * | srcData | ||
) |
Definition at line 29 of file NeonWorkloadUtils.hpp.
Referenced by InitializeArmComputeTensorData().
void armnn::CopyTensorContentsGeneric | ( | const ITensorHandle * | srcTensor, |
ITensorHandle * | dstTensor, | ||
CopyFunc | copy | ||
) |
Definition at line 47 of file WorkloadUtils.hpp.
References ARMNN_ASSERT, ARMNN_SCOPED_PROFILING_EVENT, TensorShape::GetNumDimensions(), ITensorHandle::GetShape(), ITensorHandle::GetStrides(), IgnoreUnused(), ITensorHandle::Map(), MaxNumOfTensorDimensions, Undefined, and ITensorHandle::Unmap().
Referenced by NeonConvertBf16ToFp32Workload::Execute(), NeonConvertFp32ToFp16Workload::Execute(), NeonConvertFp32ToBf16Workload::Execute(), NeonConvertFp16ToFp32Workload::Execute(), and CopyMemGenericWorkload::Execute().
inline
Definition at line 18 of file ArmComputeUtils.hpp.
References TensorInfo::GetShape(), and NCHW.
inline
Definition at line 57 of file ClContextSchema_generated.h.
References ClContextBuilder::add_programs(), and ClContextBuilder::Finish().
Referenced by CreateClContextDirect(), and ClContextSerializer::Serialize().
inline
Definition at line 65 of file ClContextSchema_generated.h.
References CreateClContext().
OriginsDescriptor armnn::CreateDescriptorForConcatenation | ( | TensorShapeIt | first, |
TensorShapeIt | last, | ||
unsigned int | concatenationDimension | ||
) |
Convenience template to create an OriginsDescriptor to use when creating a ConcatLayer for performing concatenation of a number of input tensors.
Definition at line 258 of file Descriptors.hpp.
References OriginsDescriptor::SetConcatAxis(), and OriginsDescriptor::SetViewOriginCoord().
Referenced by BOOST_AUTO_TEST_CASE(), ConcatDifferentInputOutputQParamTest(), CreateDescriptorForConcat(), and CreateMergerDescriptorForConcatenation().
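The descriptor's job is to record where each input view is placed inside the concatenated output: each view's origin is offset along the concatenation dimension by the sizes of the views before it, and zero elsewhere. A hedged, stand-alone sketch of that computation (the helper and its plain-vector shapes are assumptions; the real API sets coordinates via SetViewOriginCoord):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: compute per-view origins for concatenation along concatDim.
std::vector<std::vector<unsigned int>>
ConcatOriginsSketch(const std::vector<std::vector<unsigned int>>& shapes,
                    unsigned int concatDim)
{
    std::vector<std::vector<unsigned int>> origins;
    unsigned int runningOffset = 0;
    for (const auto& shape : shapes)
    {
        std::vector<unsigned int> origin(shape.size(), 0);  // zero origin...
        origin[concatDim] = runningOffset;  // ...except along concatDim
        origins.push_back(origin);
        runningOffset += shape[concatDim];
    }
    return origins;
}
```

Concatenating shapes {2,3} and {2,5} along dimension 1 places the second view at origin {0,3}.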
OriginsDescriptor armnn::CreateMergerDescriptorForConcatenation | ( | TensorShapeIt | first, |
TensorShapeIt | last, | ||
unsigned int | concatenationDimension | ||
) |
Definition at line 248 of file Descriptors.hpp.
References CreateDescriptorForConcatenation().
INetworkPtr armnn::CreateNetworkWithActivationLayer | ( | const ActivationDescriptor & | descriptor, |
const TensorShape & | shape | ||
) |
Definition at line 593 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, and IOutputSlot::SetTensorInfo().
Referenced by BOOST_AUTO_TEST_CASE().
INetworkPtr armnn::CreateNetworkWithArgMinMaxLayer | ( | const ArgMinMaxDescriptor & | descriptor, |
const TensorShape & | shape | ||
) |
Definition at line 614 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), IOutputSlot::SetTensorInfo(), and Signed32.
Referenced by BOOST_AUTO_TEST_CASE().
INetworkPtr armnn::CreateNetworkWithFullyConnectedLayer | ( | const bool | biasEnabled, |
const TensorShape & | inputShape, | ||
const TensorShape & | outputShape | ||
) |
Definition at line 994 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, FullyConnectedDescriptor::m_BiasEnabled, and IOutputSlot::SetTensorInfo().
Referenced by ValidateFullyConnectedLayer().
INetworkPtr armnn::CreateNetworkWithInputOutputLayers | ( | ) |
Definition at line 636 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, and IOutputSlot::SetTensorInfo().
Referenced by BOOST_AUTO_TEST_CASE().
INetworkPtr armnn::CreateNetworkWithSoftmaxLayer | ( | const SoftmaxDescriptor & | descriptor, |
const TensorShape & | shape | ||
) |
Definition at line 1210 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, and IOutputSlot::SetTensorInfo().
Referenced by BOOST_AUTO_TEST_CASE().
inline
Definition at line 118 of file ClContextSchema_generated.h.
References ProgramBuilder::add_binary(), ProgramBuilder::add_name(), and ProgramBuilder::Finish().
Referenced by CreateProgramDirect(), and ClContextSerializer::Serialize().
inline
Definition at line 128 of file ClContextSchema_generated.h.
References CreateProgram().
ConstTensor CreateQuantizedConst | ( | const ConstTensor & | tensor, |
std::vector< uint8_t > & | backing | ||
) |
Definition at line 15 of file NetworkQuantizerUtils.cpp.
References ARMNN_ASSERT_MSG, Float32, TensorInfo::GetDataType(), BaseTensor< MemoryType >::GetInfo(), BaseTensor< MemoryType >::GetMemoryArea(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), QAsymmU8, and QuantizeConstant().
Referenced by QuantizerStrategy::ExecuteStrategy(), and QuantizeConstant().
IConnectableLayer* armnn::CreateStartOfLeakyReluNetwork | ( | INetwork * | network, |
const TensorInfo & | info | ||
) |
Definition at line 1283 of file QuantizerTest.cpp.
References INetwork::AddActivationLayer(), INetwork::AddInputLayer(), IOutputSlot::Connect(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), LeakyReLu, ActivationDescriptor::m_A, ActivationDescriptor::m_B, ActivationDescriptor::m_Function, and IOutputSlot::SetTensorInfo().
Referenced by BOOST_AUTO_TEST_CASE().
BackendsMap CreateSupportedBackends | ( | TensorHandleFactoryRegistry & | handleFactoryRegistry, |
BackendSettings & | backendSettings | ||
) |
Definition at line 1009 of file Network.cpp.
References ARMNN_ASSERT, BackendRegistryInstance(), and BackendSettings::m_SupportedBackends.
Referenced by Optimize().
INetworkPtr armnn::CreatNetwork | ( | ActivationDescriptor | activationDescriptor, |
bool | preventFusing, | ||
float | scale, | ||
int32_t | offset | ||
) |
Definition at line 286 of file FuseActivationTests.cpp.
References IOutputSlot::Connect(), INetwork::Create(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), and IOutputSlot::SetTensorInfo().
void Debug | ( | const TensorInfo & | inputInfo, |
const T * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Definition at line 18 of file Debug.cpp.
References Debug< BFloat16 >(), Debug< float >(), Debug< Half >(), Debug< int16_t >(), Debug< int32_t >(), Debug< int8_t >(), Debug< uint8_t >(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), and TensorInfo::GetShape().
Referenced by RefDebugWorkload< DataType >::Execute().
template void armnn::Debug< BFloat16 > | ( | const TensorInfo & | inputInfo, |
const BFloat16 * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< float > | ( | const TensorInfo & | inputInfo, |
const float * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< Half > | ( | const TensorInfo & | inputInfo, |
const Half * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< int16_t > | ( | const TensorInfo & | inputInfo, |
const int16_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< int32_t > | ( | const TensorInfo & | inputInfo, |
const int32_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< int8_t > | ( | const TensorInfo & | inputInfo, |
const int8_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
template void armnn::Debug< uint8_t > | ( | const TensorInfo & | inputInfo, |
const uint8_t * | inputData, | ||
LayerGuid | guid, | ||
const std::string & | layerName, | ||
unsigned int | slotIndex | ||
) |
Referenced by Debug().
void DepthToSpace | ( | const TensorInfo & | inputInfo, |
const DepthToSpaceDescriptor & | descriptor, | ||
const void * | inputData, | ||
void * | outputData, | ||
unsigned int | dataTypeSize | ||
) |
Definition at line 18 of file DepthToSpace.cpp.
References ARMNN_ASSERT, DepthToSpace(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), TensorShape::GetNumElements(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), SpaceToDepthDescriptor::m_BlockSize, SpaceToDepthDescriptor::m_DataLayout, NCHW, and armnnUtils::Permute().
Referenced by DepthToSpace().
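As a hedged illustration of the rearrangement this operation performs, the sketch below handles a batch-1 NHWC tensor only: each group of blockSize*blockSize channels becomes a blockSize x blockSize spatial patch. The helper is hypothetical; per the references above, the ArmNN implementation also covers NCHW via a permute:

```cpp
#include <cassert>
#include <vector>

// Sketch: depth-to-space for a batch-1 NHWC tensor of H x W x C floats.
std::vector<float> DepthToSpaceNhwcSketch(const std::vector<float>& input,
                                          unsigned int H, unsigned int W,
                                          unsigned int C, unsigned int bs)
{
    const unsigned int outC = C / (bs * bs);  // channels shrink by bs^2
    std::vector<float> output(input.size());
    for (unsigned int h = 0; h < H; ++h)
        for (unsigned int w = 0; w < W; ++w)
            for (unsigned int i = 0; i < bs; ++i)
                for (unsigned int j = 0; j < bs; ++j)
                    for (unsigned int c = 0; c < outC; ++c)
                    {
                        // Channel group (i*bs + j) maps to spatial offset (i, j).
                        const unsigned int inIdx =
                            (h * W + w) * C + (i * bs + j) * outC + c;
                        const unsigned int outIdx =
                            ((h * bs + i) * (W * bs) + (w * bs + j)) * outC + c;
                        output[outIdx] = input[inIdx];
                    }
    return output;
}
```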
void Dequantize | ( | Decoder< float > & | inputDecoder, |
Encoder< float > & | outputEncoder, | ||
const TensorInfo & | inputInfo, | ||
const TensorInfo & | outputInfo | ||
) |
Definition at line 13 of file Dequantize.cpp.
References ARMNN_ASSERT, Decoder< IType >::Get(), TensorInfo::GetNumElements(), IgnoreUnused(), and Encoder< IType >::Set().
std::vector<float> armnn::Dequantize | ( | const T * | quant, |
const TensorInfo & | info | ||
) |
u8 helpers
Definition at line 89 of file RefWorkloadUtils.hpp.
References Dequantize(), TensorInfo::GetNumElements(), TensorInfo::GetQuantizationOffset(), and TensorInfo::GetQuantizationScale().
inline
Definition at line 100 of file RefWorkloadUtils.hpp.
References TensorInfo::GetNumElements(), TensorInfo::GetQuantizationOffset(), and TensorInfo::GetQuantizationScale().
float Dequantize | ( | QuantizedType | value, |
float | scale, | ||
int32_t | offset | ||
) |
Dequantize an 8-bit data type into a floating point data type.
value | - The value to dequantize. |
scale | - The scale (must be non-zero). |
offset | - The offset. |
Definition at line 46 of file TypesUtils.cpp.
References ARMNN_ASSERT.
Referenced by SelectiveQuantizer< T, DoQuantize >::Dequantize(), Dequantize(), and TensorPrinter::operator()().
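The affine dequantization formula behind this overload is real = scale * (quantized - offset), with a non-zero scale. A minimal stand-alone sketch (the helper name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of affine dequantization: real = scale * (quantized - offset).
// scale must be non-zero.
float DequantizeSketch(int32_t value, float scale, int32_t offset)
{
    return scale * static_cast<float>(value - offset);
}
```

With scale 0.5 and offset 128, the quantized value 130 maps back to 1.0 and 128 maps to 0.0.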
void DetectionPostProcess | ( | const TensorInfo & | boxEncodingsInfo, |
const TensorInfo & | scoresInfo, | ||
const TensorInfo & | anchorsInfo, | ||
const TensorInfo & | detectionBoxesInfo, | ||
const TensorInfo & | detectionClassesInfo, | ||
const TensorInfo & | detectionScoresInfo, | ||
const TensorInfo & | numDetectionsInfo, | ||
const DetectionPostProcessDescriptor & | desc, | ||
Decoder< float > & | boxEncodings, | ||
Decoder< float > & | scores, | ||
Decoder< float > & | anchors, | ||
float * | detectionBoxes, | ||
float * | detectionClasses, | ||
float * | detectionScores, | ||
float * | numDetections | ||
) |
Definition at line 140 of file DetectionPostProcess.cpp.
References AllocateOutputData(), anchors(), ARMNN_ASSERT, boxEncodings(), GenerateRangeK(), Decoder< IType >::Get(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), IgnoreUnused(), DetectionPostProcessDescriptor::m_DetectionsPerClass, DetectionPostProcessDescriptor::m_MaxClassesPerDetection, DetectionPostProcessDescriptor::m_MaxDetections, DetectionPostProcessDescriptor::m_NmsIouThreshold, DetectionPostProcessDescriptor::m_NmsScoreThreshold, DetectionPostProcessDescriptor::m_NumClasses, DetectionPostProcessDescriptor::m_ScaleH, DetectionPostProcessDescriptor::m_ScaleW, DetectionPostProcessDescriptor::m_ScaleX, DetectionPostProcessDescriptor::m_ScaleY, DetectionPostProcessDescriptor::m_UseRegularNms, NonMaxSuppression(), numeric_cast(), scores(), and TopKSort().
Referenced by DetectionPostProcessTestImpl().
void armnn::ExtractJsonObjects | ( | unsigned int | inferenceIndex, |
const Event * | parentEvent, | ||
JsonChildObject & | parentObject, | ||
std::map< const Event *, std::vector< const Event *>> | descendantsMap | ||
) |
Definition at line 285 of file Profiling.cpp.
References JsonChildObject::AddChild(), JsonChildObject::AddMeasurement(), ARMNN_ASSERT, Event, JsonChildObject::GetChild(), Event::GetMeasurements(), Measurement, JsonChildObject::NumChildren(), JsonChildObject::SetType(), and JsonChildObject::SetUnit().
Referenced by ProfilerImpl::Print().
void armnn::FakeQuantization | ( | const float * | inputData, |
float * | outputData, | ||
uint32_t | numElements, | ||
float | min, | ||
float | max | ||
) |
Definition at line 17 of file RefFakeQuantizationFloat32Workload.cpp.
References numeric_cast().
bool armnn::FalseFunc | ( | Optional< std::string &> | reasonIfUnsupported, |
Params &&... | params | ||
) |
bool armnn::FalseFuncF16(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 70 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseFuncF32(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 78 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseFuncI32(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 94 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseFuncU8(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 86 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseInputFuncF16(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 110 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseInputFuncF32(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 102 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseOutputFuncF16(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 126 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
bool armnn::FalseOutputFuncF32(Optional<std::string&> reasonIfUnsupported, Params&&... params)
Definition at line 118 of file LayerSupportCommon.hpp.
References IgnoreUnused(), and SetValueChecked().
void Fill(Encoder<float>& output, const TensorShape& desiredOutputShape, const float value)
Creates a tensor and fills it with a scalar value.
Definition at line 13 of file Fill.cpp.
References TensorShape::GetNumElements(), and Encoder< IType >::Set().
std::vector<Measurement> armnn::FindKernelMeasurements(const Event* event)
Measurement armnn::FindMeasurement(const std::string& name, const Event* event)
Definition at line 44 of file Profiling.cpp.
References ARMNN_ASSERT, and Event::GetMeasurements().
Referenced by ProfilerImpl::AnalyzeEventSequenceAndWriteResults(), and ProfilerImpl::CalculateProfilingEventStats().
inline
Definition at line 171 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline
Definition at line 177 of file ClContextSchema_generated.h.
References ClContextIdentifier().
void armnn::ForEachLayerInput(LayerSelectionInfo::LayerInfoContainer& layerInfos, LayerSelectionInfo& layerInfo, Delegate function)
Definition at line 263 of file SubgraphViewSelector.cpp.
References ARMNN_ASSERT_MSG, and Layer::GetInputSlots().
Referenced by AssignSplitId(), and IsReadyForSplitAssignment().
void armnn::ForEachLayerOutput(LayerSelectionInfo::LayerInfoContainer& layerInfos, LayerSelectionInfo& layerInfo, Delegate function)
Definition at line 284 of file SubgraphViewSelector.cpp.
References Layer::GetOutputSlots().
Referenced by SubgraphViewSelector::SelectSubgraphs().
void FullyConnected(const TensorShape& rInputShape, Decoder<float>& rInputDecoder, const TensorShape& rOutputShape, Encoder<float>& rOutputEncoder, const TensorShape& rWeightsShape, Decoder<float>& rWeightDecoder, Decoder<float>& rBiasDecoder, const bool biasEnabled, const unsigned int K, const bool transposeWeights)
Performs a matrix multiplication and optionally adds a bias.
Definition at line 13 of file FullyConnected.cpp.
References Decoder< IType >::DecodeTensor(), and Encoder< IType >::Set().
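The computation behind FullyConnected is an ordinary matrix multiply with an optional bias add. A plain-array sketch under assumed row-major layouts (Arm NN's version additionally reads through Decoder/Encoder and handles transposed weights; the helper name here is hypothetical):

```cpp
#include <vector>

// output[n][o] = sum_k input[n][k] * weights[o][k] + bias[o]
// Shapes: input N x K, weights O x K (one row per output neuron), bias O.
std::vector<float> FullyConnectedRef(const std::vector<float>& input,
                                     const std::vector<float>& weights,
                                     const std::vector<float>& bias,
                                     unsigned int N, unsigned int K,
                                     unsigned int O, bool biasEnabled)
{
    std::vector<float> output(N * O, 0.0f);
    for (unsigned int n = 0; n < N; ++n)
    {
        for (unsigned int o = 0; o < O; ++o)
        {
            float acc = biasEnabled ? bias[o] : 0.0f;
            for (unsigned int k = 0; k < K; ++k)
            {
                acc += input[n * K + k] * weights[o * K + k];
            }
            output[n * O + o] = acc;
        }
    }
    return output;
}
```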
void armnn::FuseActivationIntoPreviousLayerTest(ActivationDescriptor activationDescriptor, float tolerance, Compute backendId, float scale = 1.f, int32_t offset = 0)
Definition at line 335 of file FuseActivationTests.cpp.
bool armnn::FuseActivationSimpleTest(ActivationDescriptor activationDescriptor, Compute backendId, float scale = 1.f, int32_t offset = 0)
Definition at line 431 of file FuseActivationTests.cpp.
Referenced by BOOST_AUTO_TEST_CASE().
LayerType* armnn::FuseLayerWithoutParameters(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 91 of file ArmComputeSubgraphUtils.hpp.
References Graph::AddLayer(), OptimizationViews::AddSubstitution(), CreateInputsFrom(), CreateOutputsFrom(), and OptimizationViews::GetGraph().
LayerType* armnn::FuseLayerWithParameters(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 111 of file ArmComputeSubgraphUtils.hpp.
References Graph::AddLayer(), OptimizationViews::AddSubstitution(), CreateInputsFrom(), CreateOutputsFrom(), and OptimizationViews::GetGraph().
Referenced by FuseLayerWithWeightsAndBiases().
LayerType* armnn::FuseLayerWithWeightsAndBiases(OptimizationViews& optimizationViews, LayerType* baseLayer, ActivationLayer* activationLayer, ActivationDescriptor& activationDesc, std::string name)
Definition at line 132 of file ArmComputeSubgraphUtils.hpp.
References FuseLayerWithParameters().
void Gather(const TensorInfo& paramsInfo, const TensorInfo& indicesInfo, const TensorInfo& outputInfo, Decoder<float>& params, const int32_t* indices, Encoder<float>& output, const int32_t axis)
Definition at line 17 of file Gather.cpp.
References ARMNN_ASSERT, Decoder< IType >::Get(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), IgnoreUnused(), numeric_cast(), and Encoder< IType >::Set().
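A gather selects whole slices of the params tensor by index. A standalone sketch for the simplest case, axis 0 over a tensor flattened to `[P, innerSize]` (Arm NN's version handles arbitrary `axis` and reads through Decoder/Encoder; this helper is illustrative only):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Gather along the outermost axis: pick whole "rows" of innerSize
// contiguous elements from params according to indices.
std::vector<float> GatherAxis0(const std::vector<float>& params,
                               const std::vector<int32_t>& indices,
                               std::size_t innerSize)
{
    std::vector<float> output;
    output.reserve(indices.size() * innerSize);
    for (int32_t index : indices)
    {
        const float* row = params.data() + static_cast<std::size_t>(index) * innerSize;
        output.insert(output.end(), row, row + innerSize);
    }
    return output;
}
```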
void armnn::GatherTensorHandlePairs(const DescriptorType& descriptor, std::vector<std::pair<SrcTensorHandleType*, DstTensorHandleType*>>& tensorHandlePairs)
Definition at line 190 of file WorkloadUtils.hpp.
References ConvertMaskToACLFormat(), ConvertWeightTensorFromArmnnToAcl(), ConvertWeightTensorInfoFromArmnnToAcl(), PermuteTensor(), and ReshapeWeightsForAcl().
Referenced by CopyMemGenericWorkload::CopyMemGenericWorkload(), NeonConvertBf16ToFp32Workload::NeonConvertBf16ToFp32Workload(), NeonConvertFp16ToFp32Workload::NeonConvertFp16ToFp32Workload(), NeonConvertFp32ToBf16Workload::NeonConvertFp32ToBf16Workload(), and NeonConvertFp32ToFp16Workload::NeonConvertFp32ToFp16Workload().
std::vector<unsigned int> armnn::GenerateRangeK(unsigned int k)
Definition at line 17 of file DetectionPostProcess.cpp.
Referenced by DetectionPostProcess(), and NonMaxSuppression().
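Judging from its name and its callers, GenerateRangeK presumably builds the index vector `[0, 1, ..., k-1]` that TopKSort and NonMaxSuppression later reorder by score. The equivalent with `std::iota` (a sketch, not the actual body):

```cpp
#include <numeric>
#include <vector>

// Indices 0..k-1, typically sorted by score afterwards so the original
// positions survive the sort.
std::vector<unsigned int> MakeRange(unsigned int k)
{
    std::vector<unsigned int> range(k);
    std::iota(range.begin(), range.end(), 0u);
    return range;
}
```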
constexpr char const* armnn::GetActivationFunctionAsCString(ActivationFunction activation)
Definition at line 27 of file TypesUtils.hpp.
References Abs, BoundedReLu, Elu, HardSwish, LeakyReLu, Linear, ReLu, Sigmoid, SoftReLu, Sqrt, Square, and TanH.
Referenced by StringifyLayerParameters< ActivationDescriptor >::Serialize().
constexpr char const* armnn::GetArgMinMaxFunctionAsCString(ArgMinMaxFunction function)
Definition at line 47 of file TypesUtils.hpp.
Definition at line 25 of file WorkloadData.cpp.
References ARMNN_ASSERT_MSG, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, BFloat16, CHECK_LOCATION, TensorInfo::GetDataType(), GetDataTypeName(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetQuantizationDim(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::GetQuantizationScales(), TensorInfo::GetShape(), OptionalBase::has_value(), TensorInfo::HasMultipleQuantizationScales(), TensorInfo::HasPerAxisQuantization(), info, TensorInfo::IsQuantized(), IsQuantized8BitType(), TensorInfo::IsTypeSpaceMatch(), WorkloadInfo::m_InputTensorInfos, WorkloadInfo::m_OutputTensorInfos, OptionalReferenceSwitch< IsReference, T >::value(), and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by BOOST_AUTO_TEST_CASE(), CompareDepthwiseConvolution2dTestImpl(), FullyConnectedQueueDescriptor::Validate(), Convolution2dQueueDescriptor::Validate(), DepthwiseConvolution2dQueueDescriptor::Validate(), and TransposeConvolution2dQueueDescriptor::Validate().
inline
Definition at line 14 of file LayerSupportRules.hpp.
References ARMNN_ASSERT_MSG, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, Signed32, and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by BiasAndWeightsTypesCompatible::BiasAndWeightsTypesCompatible(), BiasAndWeightsTypesMatch::BiasAndWeightsTypesMatch(), and FullyConnectedTest().
inline
Definition at line 140 of file ClContextSchema_generated.h.
Referenced by ClContextDeserializer::DeserializeFromBinary().
constexpr char const* armnn::GetComparisonOperationAsCString(ComparisonOperation operation)
Definition at line 57 of file TypesUtils.hpp.
References Equal, Greater, GreaterOrEqual, Less, LessOrEqual, and NotEqual.
Referenced by RefComparisonWorkload::Execute().
constexpr char const* armnn::GetComputeDeviceAsCString(Compute compute)
Deprecated function that will be removed together with the Compute enum.
Definition at line 34 of file BackendId.hpp.
References CpuAcc, CpuRef, and GpuAcc.
Referenced by BOOST_AUTO_TEST_CASE(), GetSuitableBackendRegistered(), and operator<<().
constexpr const char* armnn::GetDataLayoutName(DataLayout dataLayout)
Definition at line 203 of file TypesUtils.hpp.
Referenced by MakeTensorShape(), StringifyLayerParameters< Convolution2dDescriptor >::Serialize(), StringifyLayerParameters< BatchNormalizationDescriptor >::Serialize(), StringifyLayerParameters< DepthwiseConvolution2dDescriptor >::Serialize(), StringifyLayerParameters< Pooling2dDescriptor >::Serialize(), StringifyLayerParameters< NormalizationDescriptor >::Serialize(), StringifyLayerParameters< L2NormalizationDescriptor >::Serialize(), StringifyLayerParameters< BatchToSpaceNdDescriptor >::Serialize(), StringifyLayerParameters< ResizeBilinearDescriptor >::Serialize(), StringifyLayerParameters< ResizeDescriptor >::Serialize(), StringifyLayerParameters< SpaceToBatchNdDescriptor >::Serialize(), StringifyLayerParameters< SpaceToDepthDescriptor >::Serialize(), StringifyLayerParameters< StridedSliceDescriptor >::Serialize(), and StringifyLayerParameters< TransposeConvolution2dDescriptor >::Serialize().
constexpr const char* armnn::GetDataTypeName(DataType dataType)
Definition at line 180 of file TypesUtils.hpp.
References ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, BFloat16, Boolean, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, QuantizedSymm8PerAxis, Signed32, and Signed64.
Referenced by AttemptBackendAssignment(), BOOST_AUTO_TEST_CASE(), BOOST_AUTO_TEST_CASE(), CompareConstTensor(), GetBiasDataType(), TfLiteParserImpl::GetBuffer(), RefTransposeWorkload< DataType >::GetName(), RefPermuteWorkload< DataType >::GetName(), RefDebugWorkload< DataType >::GetName(), armnnUtils::GetPerAxisParams(), LayerVerifierBase::VerifyConstTensors(), LayerVerifierBase::VerifyNameAndConnections(), and VerifyTensorInfoDataType().
constexpr unsigned int armnn::GetDataTypeSize(DataType dataType)
Definition at line 126 of file TypesUtils.hpp.
References ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, BFloat16, Boolean, Float16, Float32, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, QuantizedSymm8PerAxis, Signed32, and Signed64.
Referenced by BOOST_AUTO_TEST_CASE(), RefStridedSliceWorkload::Execute(), RefDepthToSpaceWorkload::Execute(), RefSliceWorkload::Execute(), TensorInfo::GetNumBytes(), GetUnpaddedTensorStrides(), ITfParser::TfParserImpl::ParseConst(), and PermuteTensor().
Definition at line 110 of file Profiling.cpp.
Referenced by ProfilerImpl::AnalyzeEventSequenceAndWriteResults().
Definition at line 111 of file Profiling.cpp.
Graph& GetGraphForTesting(IOptimizedNetwork* optNet)
Definition at line 25 of file TestUtils.cpp.
References IOptimizedNetwork::pOptimizedNetworkImpl.
Referenced by BOOST_AUTO_TEST_CASE(), BOOST_FIXTURE_TEST_CASE(), and CheckRelatedLayers().
LayerSupportHandle GetILayerSupportByBackendId(const armnn::BackendId& backend)
Convenience function to retrieve the ILayerSupportHandle for a backend.
Definition at line 15 of file BackendHelper.cpp.
References BackendRegistryInstance(), BackendRegistry::GetFactory(), and BackendRegistry::IsBackendRegistered().
Referenced by BOOST_AUTO_TEST_CASE(), and LayerSupportHandle::LayerSupportHandle().
const DataType* armnn::GetInputTensorData(unsigned int idx, const PayloadType& data)
Definition at line 35 of file RefWorkloadUtils.hpp.
References GetOutputTensorData(), and ITensorHandle::Map().
const BFloat16* armnn::GetInputTensorDataBFloat16(unsigned int idx, const PayloadType& data)
Definition at line 73 of file RefWorkloadUtils.hpp.
Referenced by RefConvertBf16ToFp32Workload::Execute().
const float* armnn::GetInputTensorDataFloat(unsigned int idx, const PayloadType& data)
Definition at line 49 of file RefWorkloadUtils.hpp.
Referenced by RefConvertFp32ToBf16Workload::Execute(), RefFakeQuantizationFloat32Workload::Execute(), and RefConvertFp32ToFp16Workload::Execute().
const Half* armnn::GetInputTensorDataHalf(unsigned int idx, const PayloadType& data)
Definition at line 61 of file RefWorkloadUtils.hpp.
Referenced by RefConvertFp16ToFp32Workload::Execute().
TensorInfo armnn::GetInputTensorInfo(const INetwork* network)
Definition at line 80 of file QuantizerTest.cpp.
References ARMNN_ASSERT_MSG, and INetwork::pNetworkImpl.
Referenced by BOOST_AUTO_TEST_CASE(), BoundedReLuUint8UpperAndLowerBoundTest(), and LoadedNetwork::~LoadedNetwork().
TensorInfo armnn::GetInputTensorInfo(const NetworkImpl* network)
Definition at line 90 of file QuantizerTest.cpp.
References Activation, Addition, ArgMinMax, ARMNN_ASSERT_MSG, BatchNormalization, BatchToSpaceNd, BOOST_AUTO_TEST_SUITE(), BoundedReLu, Comparison, Constant, Convolution2d, DepthToSpace, DepthwiseConvolution2d, Elu, Fill, FullyConnected, g_AsymmS8QuantizationBase, g_AsymmU8QuantizationBase, g_SymmS16QuantizationBase, g_SymmS8QuantizationBase, g_TestTolerance, IInputSlot::GetConnection(), TensorInfo::GetDataType(), NetworkImpl::GetGraph(), BaseTensor< MemoryType >::GetInfo(), Graph::GetInputLayers(), IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::GetShape(), IOutputSlot::GetTensorInfo(), IConnectableLayer::GetType(), HardSwish, OptionalBase::has_value(), IgnoreUnused(), info, Input, InstanceNormalization, LeakyReLu, LogSoftmax, ActivationDescriptor::m_Function, ArgMinMaxDescriptor::m_Function, Max, Output, Permute, Pooling2d, QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, Reshape, Resize, Signed32, Slice, Softmax, SpaceToBatchNd, SpaceToDepth, Splitter, Stack, StridedSlice, TanH, TransposeConvolution2d, and OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
char const* GetLayerTypeAsCString(LayerType type)
Definition at line 13 of file InternalTypes.cpp.
References ARMNN_ASSERT_MSG, and LIST_OF_LAYER_TYPE.
Referenced by AttemptBackendAssignment(), CheckScaleSetOnQuantizedType(), Layer::InferOutputShapes(), Graph::InferTensorInfos(), Graph::Print(), ReturnWithError(), Layer::SerializeLayerParameters(), Graph::SerializeToDot(), ElementwiseBaseLayer::ValidateTensorShapesFromInputs(), ElementwiseUnaryLayer::ValidateTensorShapesFromInputs(), and Layer::VerifyLayerConnections().
constexpr char const* armnn::GetLogicalBinaryOperationAsCString(LogicalBinaryOperation operation)
Definition at line 85 of file TypesUtils.hpp.
References LogicalAnd, and LogicalOr.
Referenced by RefLogicalBinaryWorkload::Execute().
ModelOptions& GetModelOptionsForTesting(IOptimizedNetwork* optNet)
Definition at line 30 of file TestUtils.cpp.
References IOptimizedNetwork::pOptimizedNetworkImpl.
Referenced by BOOST_AUTO_TEST_CASE(), and CheckRelatedLayers().
constexpr const char* armnn::GetNormalizationAlgorithmChannelAsCString(NormalizationAlgorithmChannel channel)
Definition at line 213 of file TypesUtils.hpp.
References Across, and Within.
Referenced by StringifyLayerParameters< NormalizationDescriptor >::Serialize().
constexpr const char* armnn::GetNormalizationAlgorithmMethodAsCString(NormalizationAlgorithmMethod method)
Definition at line 223 of file TypesUtils.hpp.
References LocalBrightness, and LocalContrast.
Referenced by StringifyLayerParameters< NormalizationDescriptor >::Serialize().
unsigned int armnn::GetOffset(const TensorShape& shape, unsigned int b, unsigned int h, unsigned int w, unsigned int c, const DataLayoutIndexed& dataLayout)
Definition at line 15 of file SpaceToBatchNd.cpp.
References DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), and NHWC.
Referenced by SpaceToBatchNd(), and SpaceToDepth().
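The offset of element `(b, h, w, c)` in a flat 4-D buffer follows the usual row-major rule: the innermost dimension of the layout is contiguous. A standalone sketch of the two common layouts (Arm NN's version takes the extents from TensorShape and DataLayoutIndexed; this signature is illustrative):

```cpp
#include <cstddef>

// Flat index of element (b, h, w, c) in a densely packed 4-D tensor.
// NHWC keeps channels innermost; NCHW keeps width innermost.
inline std::size_t Offset(std::size_t b, std::size_t h, std::size_t w,
                          std::size_t c, std::size_t H, std::size_t W,
                          std::size_t C, bool nhwc)
{
    return nhwc ? ((b * H + h) * W + w) * C + c   // NHWC
                : ((b * C + c) * H + h) * W + w;  // NCHW
}
```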
constexpr char const* armnn::GetOutputShapeRoundingAsCString(OutputShapeRounding rounding)
Definition at line 106 of file TypesUtils.hpp.
References Ceiling, and Floor.
Referenced by StringifyLayerParameters< Pooling2dDescriptor >::Serialize().
DataType* GetOutputTensorData(unsigned int idx, const PayloadType& data)
Definition at line 147 of file ClWorkloadUtils.hpp.
References ITensorHandle::Map().
Referenced by GetInputTensorData(), and SetNeonSliceData().
BFloat16* armnn::GetOutputTensorDataBFloat16(unsigned int idx, const PayloadType& data)
Definition at line 79 of file RefWorkloadUtils.hpp.
Referenced by RefConvertFp32ToBf16Workload::Execute().
float* armnn::GetOutputTensorDataFloat(unsigned int idx, const PayloadType& data)
Definition at line 55 of file RefWorkloadUtils.hpp.
Referenced by RefConvertBf16ToFp32Workload::Execute(), RefFakeQuantizationFloat32Workload::Execute(), and RefConvertFp16ToFp32Workload::Execute().
Half* armnn::GetOutputTensorDataHalf(unsigned int idx, const PayloadType& data)
Definition at line 67 of file RefWorkloadUtils.hpp.
Referenced by RefConvertFp32ToFp16Workload::Execute().
constexpr char const* armnn::GetPaddingMethodAsCString(PaddingMethod method)
Definition at line 116 of file TypesUtils.hpp.
References Exclude, and IgnoreValue.
Referenced by StringifyLayerParameters< Pooling2dDescriptor >::Serialize().
constexpr char const* armnn::GetPoolingAlgorithmAsCString(PoolingAlgorithm pooling)
Definition at line 95 of file TypesUtils.hpp.
References Average, L2, and Max.
Referenced by StringifyLayerParameters< Pooling2dDescriptor >::Serialize().
size_t armnn::GetProfilerEventSequenceSize(armnn::IProfiler* profiler)
Definition at line 22 of file ProfilerTests.cpp.
References BOOST_AUTO_TEST_SUITE(), ProfilerManager::GetInstance(), ProfilerManager::GetProfiler(), and ProfilerManager::RegisterProfiler().
Referenced by BOOST_AUTO_TEST_CASE().
profiling::ProfilingService& GetProfilingService(armnn::RuntimeImpl* runtime)
Definition at line 35 of file TestUtils.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), CheckRelatedLayers(), and VerifyPostOptimisationStructureTestImpl().
constexpr const char* armnn::GetResizeMethodAsCString(ResizeMethod method)
Definition at line 233 of file TypesUtils.hpp.
References Bilinear, and NearestNeighbor.
Referenced by StringifyLayerParameters< ResizeDescriptor >::Serialize().
inline
Definition at line 144 of file ClContextSchema_generated.h.
constexpr char const* armnn::GetStatusAsCString(Status status)
Definition at line 17 of file TypesUtils.hpp.
References Failure, and Success.
Referenced by operator<<().
inline
float32 helpers
Definition at line 26 of file RefWorkloadUtils.hpp.
References RefTensorHandle::GetTensorInfo().
Referenced by BatchNormImpl(), Concatenate(), RefStridedSliceWorkload::Execute(), RefDepthToSpaceWorkload::Execute(), RefConvertBf16ToFp32Workload::Execute(), RefFakeQuantizationFloat32Workload::Execute(), RefSpaceToBatchNdWorkload::Execute(), RefSpaceToDepthWorkload::Execute(), RefFillWorkload::Execute(), RefFloorWorkload::Execute(), RefConvertFp16ToFp32Workload::Execute(), RefLogSoftmaxWorkload::Execute(), RefConvertFp32ToBf16Workload::Execute(), RefConvertFp32ToFp16Workload::Execute(), RefPadWorkload::Execute(), RefActivationWorkload::Execute(), RefReshapeWorkload::Execute(), RefResizeBilinearWorkload::Execute(), RefResizeWorkload::Execute(), RefSoftmaxWorkload::Execute(), RefDequantizeWorkload::Execute(), RefStackWorkload::Execute(), RefBatchNormalizationWorkload::Execute(), RefBatchToSpaceNdWorkload::Execute(), RefSliceWorkload::Execute(), RefInstanceNormalizationWorkload::Execute(), RefDetectionPostProcessWorkload::Execute(), RefArgMinMaxWorkload::Execute(), RefPreluWorkload::Execute(), RefL2NormalizationWorkload::Execute(), RefNormalizationWorkload::Execute(), RefRankWorkload::Execute(), RefReduceWorkload::Execute(), RefLstmWorkload::Execute(), RefMeanWorkload::Execute(), RefPooling2dWorkload::Execute(), RefQLstmWorkload::Execute(), RefElementwiseUnaryWorkload::Execute(), RefGatherWorkload::Execute(), RefComparisonWorkload::Execute(), RefLogicalBinaryWorkload::Execute(), RefLogicalUnaryWorkload::Execute(), RefTransposeWorkload< DataType >::Execute(), RefPermuteWorkload< DataType >::Execute(), RefElementwiseWorkload< Functor, ParentDescriptor, DebugString >::Execute(), RefDebugWorkload< DataType >::Execute(), OutputSlot::GetNumConnections(), InstanceNorm(), OutputSlot::MoveAllConnections(), RefQuantizeWorkload::PostAllocationConfigure(), RefDepthwiseConvolution2dWorkload::PostAllocationConfigure(), RefLogicalBinaryWorkload::PostAllocationConfigure(), RefComparisonWorkload::PostAllocationConfigure(), RefLogicalUnaryWorkload::PostAllocationConfigure(), 
RefElementwiseUnaryWorkload::PostAllocationConfigure(), RefConvolution2dWorkload::PostAllocationConfigure(), RefConstantWorkload::PostAllocationConfigure(), RefTransposeConvolution2dWorkload::PostAllocationConfigure(), RefFullyConnectedWorkload::PostAllocationConfigure(), RefElementwiseWorkload< Functor, ParentDescriptor, DebugString >::PostAllocationConfigure(), PreluImpl(), Split(), Splitter(), Stack(), SwitchLayer::ValidateTensorShapesFromInputs(), DetectionPostProcessLayer::ValidateTensorShapesFromInputs(), ConcatLayer::ValidateTensorShapesFromInputs(), SplitterLayer::ValidateTensorShapesFromInputs(), QuantizedLstmLayer::ValidateTensorShapesFromInputs(), LstmLayer::ValidateTensorShapesFromInputs(), and QLstmLayer::ValidateTensorShapesFromInputs().
inline
Definition at line 19 of file Timer.hpp.
References GetTimeNow().
Referenced by InferenceModel< IParser, TDataType >::InferenceModel(), InferenceModel< IParser, TDataType >::Run(), RuntimeImpl::RuntimeImpl(), and RuntimeImpl::~RuntimeImpl().
inline
Definition at line 14 of file Timer.hpp.
Referenced by GetTimeDuration(), InferenceModel< IParser, TDataType >::InferenceModel(), InferenceModel< IParser, TDataType >::Run(), RuntimeImpl::RuntimeImpl(), and RuntimeImpl::~RuntimeImpl().
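The GetTimeNow/GetTimeDuration pair in Timer.hpp wraps `std::chrono` timing: capture a start point, do work, then convert the difference to a duration. A sketch of that pattern (the exact clock type used by Timer.hpp is an assumption):

```cpp
#include <chrono>

// Capture a start point now; later, convert the elapsed interval to
// fractional milliseconds.
inline std::chrono::high_resolution_clock::time_point Now()
{
    return std::chrono::high_resolution_clock::now();
}

inline double ElapsedMs(std::chrono::high_resolution_clock::time_point start)
{
    using FpMs = std::chrono::duration<double, std::milli>;
    return FpMs(Now() - start).count();
}
```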
constexpr char const* armnn::GetUnaryOperationAsCString(UnaryOperation operation)
Definition at line 71 of file TypesUtils.hpp.
References Abs, Exp, LogicalNot, Neg, Rsqrt, and Sqrt.
Referenced by RefElementwiseUnaryWorkload::Execute(), and RefLogicalUnaryWorkload::Execute().
TensorShape GetUnpaddedTensorStrides(const TensorInfo& tensorInfo)
Definition at line 15 of file CpuTensorHandle.cpp.
References TensorInfo::GetDataType(), GetDataTypeSize(), and TensorInfo::GetShape().
Referenced by RefTensorHandle::GetStrides(), SampleTensorHandle::GetStrides(), and ConstCpuTensorHandle::GetStrides().
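For a densely packed tensor, the stride of each dimension is the product of all inner extents times the element size. A standalone sketch of that computation (Arm NN derives the element size from the TensorInfo's data type via GetDataTypeSize; this helper takes it as a parameter):

```cpp
#include <cstddef>
#include <vector>

// Byte strides of an unpadded tensor: the innermost dimension is
// contiguous, and each outer stride multiplies in the extents inside it.
std::vector<std::size_t> UnpaddedStrides(const std::vector<std::size_t>& shape,
                                         std::size_t elementSize)
{
    std::vector<std::size_t> strides(shape.size());
    std::size_t running = elementSize;
    for (std::size_t i = shape.size(); i-- > 0;)
    {
        strides[i] = running;
        running *= shape[i];
    }
    return strides;
}
```

For example, a float32 tensor of shape `{2, 3, 4}` gets byte strides `{48, 16, 4}`.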
std::vector<T> armnn::GetVector(unsigned int size, float initial, float increment)
Definition at line 26 of file FuseActivationTests.cpp.
References INetwork::AddAdditionLayer(), INetwork::AddBatchNormalizationLayer(), INetwork::AddConvolution2dLayer(), INetwork::AddDepthwiseConvolution2dLayer(), INetwork::AddDivisionLayer(), INetwork::AddFullyConnectedLayer(), AdditionTest(), INetwork::AddMultiplicationLayer(), INetwork::AddSubtractionLayer(), DivisionTest(), FullyConnectedTest(), IgnoreUnused(), FullyConnectedDescriptor::m_BiasEnabled, DepthwiseConvolution2dDescriptor::m_BiasEnabled, Convolution2dDescriptor::m_DataLayout, DepthwiseConvolution2dDescriptor::m_DataLayout, BatchNormalizationDescriptor::m_DataLayout, Convolution2dDescriptor::m_StrideX, DepthwiseConvolution2dDescriptor::m_StrideX, Convolution2dDescriptor::m_StrideY, DepthwiseConvolution2dDescriptor::m_StrideY, MultiplicationTest(), NHWC, and SubtractionTest().
const std::string GetVersion()
Definition at line 77 of file Utils.cpp.
References ARMNN_VERSION.
inline
Definition at line 14 of file IgnoreUnused.hpp.
Referenced by ConvertFp32ToFp16Layer::Accept(), DebugLayer::Accept(), FakeQuantizationLayer::Accept(), MapLayer::Accept(), UnmapLayer::Accept(), MemCopyLayer::Accept(), ConvertBf16ToFp32Layer::Accept(), MemImportLayer::Accept(), ConvertFp16ToFp32Layer::Accept(), ConvertFp32ToBf16Layer::Accept(), PreCompiledLayer::Accept(), IInferenceTestCaseProvider::AddCommandLineOptions(), AdditionAfterMaxPoolTest(), AdditionBroadcast1ElementTestImpl(), AdditionBroadcastTestImpl(), ArgMinMax(), BOOST_AUTO_TEST_CASE(), BOOST_AUTO_TEST_CASE(), BOOST_FIXTURE_TEST_CASE(), BoundedReLuTestCommon(), BoundedReLuUint8UpperAndLowerBoundTest(), CalculateSlotOptionForOutput(), ParserFlatbuffersSerializeFixture::CheckTensors(), ClassifierTestCase< TTestCaseDatabase, TModel >::ClassifierTestCase(), ClContextControl::ClContextControl(), SpaceToBatchNdLayer::Clone(), SpaceToDepthLayer::Clone(), CompareActivationTestImpl(), CompareAdditionTest(), CompareBatchNormTest(), CompareMultiplicationTest(), ConcatDifferentInputOutputQParamTest(), ConcatTest(), ConcatUint16Test(), ConcatUint8DifferentQParamsTest(), ConcatUint8Test(), ConstantLinearActivationTestCommon(), ConstantsVector2QuantizedLstmInputParams(), ConstantVector2LstmInputParams(), ConvertBf16ToFp32Test(), ConvertFp32ToBf16Test(), Convolution2d3x3Stride2x2BFloat16SmallValueTest(), Convolution2d3x3Stride2x2BFloat16Test(), CopyTensorContentsGeneric(), NeonWorkloadFactory::CreateAbs(), ClWorkloadFactory::CreateAbs(), RefWorkloadFactory::CreateAbs(), MockBackend::CreateBackendProfilingContext(), ClWorkloadFactory::CreateEqual(), NeonWorkloadFactory::CreateEqual(), RefWorkloadFactory::CreateEqual(), ClWorkloadFactory::CreateGreater(), NeonWorkloadFactory::CreateGreater(), RefWorkloadFactory::CreateGreater(), ClWorkloadFactory::CreateRsqrt(), NeonWorkloadFactory::CreateRsqrt(), RefWorkloadFactory::CreateRsqrt(), SampleDynamicTensorHandleFactory::CreateSubTensorHandle(), RefTensorHandleFactory::CreateSubTensorHandle(), 
SampleDynamicWorkloadFactory::CreateSubTensorHandle(), RefWorkloadFactory::CreateSubTensorHandle(), SampleDynamicTensorHandleFactory::CreateTensorHandle(), RefTensorHandleFactory::CreateTensorHandle(), ClWorkloadFactory::CreateTensorHandle(), RefWorkloadFactory::CreateTensorHandle(), ITensorHandleFactory::CreateTensorHandle(), OutputLayer::CreateTensorHandles(), InputLayer::CreateWorkload(), UnmapLayer::CreateWorkload(), MapLayer::CreateWorkload(), MemCopyLayer::CreateWorkload(), MemImportLayer::CreateWorkload(), MergeLayer::CreateWorkload(), OutputLayer::CreateWorkload(), StandInLayer::CreateWorkload(), QASymm8Decoder::DecodeTensor(), QASymmS8Decoder::DecodeTensor(), QSymmS8Decoder::DecodeTensor(), QSymm16Decoder::DecodeTensor(), BFloat16Decoder::DecodeTensor(), Float16Decoder::DecodeTensor(), Float32Decoder::DecodeTensor(), ScaledInt32Decoder::DecodeTensor(), Int32Decoder::DecodeTensor(), Int32ToInt32tDecoder::DecodeTensor(), BooleanDecoder::DecodeTensor(), BooleanDecoderBool::DecodeTensor(), Dequantize(), SelectiveQuantizer< T, false >::Dequantize(), SelectiveQuantizer< armnn::Half, false >::Dequantize(), SelectiveQuantizer< armnn::BFloat16, false >::Dequantize(), Graph::DetachObservable(), DetectionPostProcess(), DivisionByZeroTest(), ProfilerImpl::EndEvent(), LoadedNetwork::EnqueueWorkload(), RefStridedSliceWorkload::Execute(), QuantizerStrategy::ExecuteStrategy(), StaticRangeStrategy::ExecuteStrategy(), SerializerStrategy::ExecuteStrategy(), DynamicQuantizationStrategy::ExecuteStrategy(), OverrideInputRangeStrategy::ExecuteStrategy(), LayerVerifierBase::ExecuteStrategy(), FakeQuantizationLayer::ExecuteStrategy(), MemCopyLayer::ExecuteStrategy(), MemImportLayer::ExecuteStrategy(), PreCompiledLayer::ExecuteStrategy(), InputLayerStrategy::ExecuteStrategy(), LayerVerifierBaseWithDescriptor< armnn::OriginsDescriptor >::ExecuteStrategy(), LayerVerifierBaseWithDescriptorAndConstants< Descriptor >::ExecuteStrategy(), ExecutionFrame::ExecuteWorkloads(), 
exit_capture(), FakeQuantizationTest(), FalseFunc(), FalseFuncF16(), FalseFuncF32(), FalseFuncI32(), FalseFuncU8(), FalseInputFuncF16(), FalseInputFuncF32(), FalseOutputFuncF16(), FalseOutputFuncF32(), Gather(), NeonTensorHandleFactory::GetCapabilities(), ITensorHandleFactory::GetCapabilities(), MockCounterDirectory::GetCounter(), MockCounterDirectory::GetCounterSet(), MockCounterDirectory::GetDevice(), armnnSerializer::GetFlatBufferArgMinMaxFunction(), GetImageDataInArmNnLayoutAsNormalizedFloats(), GetInputTensorInfo(), IDeserializer::DeserializerImpl::GetNetworkInputBindingInfo(), IDeserializer::DeserializerImpl::GetNetworkOutputBindingInfo(), IDeserializer::DeserializerImpl::GetNormalizationDescriptor(), LoadedNetwork::GetOutputTensorInfo(), IDeserializer::DeserializerImpl::GetPoolingDescriptor(), MockProfilingConnectionFactory::GetProfilingConnection(), GetVector(), ITensorHandle::Import(), SliceLayer::InferOutputShapes(), StackLayer::InferOutputShapes(), StandInLayer::InferOutputShapes(), ReshapeLayer::InferOutputShapes(), SplitterLayer::InferOutputShapes(), NeonLayerSupport::IsActivationSupported(), MockImportLayerSupport::IsAdditionSupported(), RefLayerSupport::IsArgMinMaxSupported(), RefLayerSupport::IsBatchNormalizationSupported(), RefLayerSupport::IsBatchToSpaceNdSupported(), RefLayerSupport::IsComparisonSupported(), RefLayerSupport::IsConcatSupported(), NeonLayerSupport::IsConvertBf16ToFp32Supported(), NeonLayerSupport::IsConvertFp16ToFp32Supported(), NeonLayerSupport::IsConvertFp32ToBf16Supported(), NeonLayerSupport::IsConvertFp32ToFp16Supported(), RefLayerSupport::IsConvolution2dSupported(), RefLayerSupport::IsDepthToSpaceSupported(), RefLayerSupport::IsDepthwiseConvolutionSupported(), RefLayerSupport::IsDetectionPostProcessSupported(), RefLayerSupport::IsElementwiseUnarySupported(), RefLayerSupport::IsFakeQuantizationSupported(), ClLayerSupport::IsFillSupported(), NeonLayerSupport::IsFillSupported(), RefLayerSupport::IsFillSupported(), 
NeonLayerSupport::IsFloorSupported(), RefLayerSupport::IsFloorSupported(), MockImportLayerSupport::IsInputSupported(), RefLayerSupport::IsInstanceNormalizationSupported(), RefLayerSupport::IsL2NormalizationSupported(), ClLayerSupport::IsLogicalBinarySupported(), RefLayerSupport::IsLogicalBinarySupported(), RefLayerSupport::IsLogSoftmaxSupported(), RefLayerSupport::IsLstmSupported(), RefLayerSupport::IsNormalizationSupported(), ProfilingStateMachine::IsOneOfStates(), MockImportLayerSupport::IsOutputSupported(), RefLayerSupport::IsPadSupported(), RefLayerSupport::IsPermuteSupported(), RefLayerSupport::IsPooling2dSupported(), RefLayerSupport::IsQLstmSupported(), RefLayerSupport::IsRankSupported(), RefLayerSupport::IsReduceSupported(), ClLayerSupport::IsReshapeSupported(), NeonLayerSupport::IsReshapeSupported(), RefLayerSupport::IsReshapeSupported(), RefLayerSupport::IsResizeSupported(), RefLayerSupport::IsSliceSupported(), RefLayerSupport::IsSoftmaxSupported(), RefLayerSupport::IsSpaceToBatchNdSupported(), RefLayerSupport::IsSpaceToDepthSupported(), ClLayerSupport::IsSplitterSupported(), NeonLayerSupport::IsSplitterSupported(), RefLayerSupport::IsSplitterSupported(), RefLayerSupport::IsStackSupported(), RefLayerSupport::IsStridedSliceSupported(), RefLayerSupport::IsTransposeConvolution2dSupported(), RefLayerSupport::IsTransposeSupported(), Layer::Layer(), LogSoftmax(), MaximumSimpleTest(), MinimumBroadcast1ElementTest1(), StubCommandHandler::operator()(), TestFunctorA::operator()(), TfLiteParserImpl::OutputShapeOfSqueeze(), Pad2dTestCommon(), Pad3dTestCommon(), Pad4dTestCommon(), PadQAsymmTestCommon(), ITfParser::TfParserImpl::ParseAdd(), ITfParser::TfParserImpl::ParseAddN(), ITfParser::TfParserImpl::ParseBiasAdd(), ITfParser::TfParserImpl::ParseConcat(), ITfParser::TfParserImpl::ParseConst(), ITfParser::TfParserImpl::ParseConv2D(), ITfParser::TfParserImpl::ParseDepthwiseConv2D(), ITfParser::TfParserImpl::ParseEqual(), ITfParser::TfParserImpl::ParseExpandDims(), 
ITfParser::TfParserImpl::ParseFusedBatchNorm(), ITfParser::TfParserImpl::ParseGather(), ITfParser::TfParserImpl::ParseGreater(), ITfParser::TfParserImpl::ParseIdentity(), ITfParser::TfParserImpl::ParseLrn(), ITfParser::TfParserImpl::ParseMatMul(), ITfParser::TfParserImpl::ParseMaximum(), ITfParser::TfParserImpl::ParseMean(), ITfParser::TfParserImpl::ParseMinimum(), ITfParser::TfParserImpl::ParseMul(), ITfParser::TfParserImpl::ParsePad(), ITfParser::TfParserImpl::ParsePlaceholder(), ITfParser::TfParserImpl::ParsePooling2d(), ITfParser::TfParserImpl::ParseRealDiv(), ITfParser::TfParserImpl::ParseRelu(), ITfParser::TfParserImpl::ParseRelu6(), ITfParser::TfParserImpl::ParseReshape(), ITfParser::TfParserImpl::ParseResizeBilinear(), ITfParser::TfParserImpl::ParseRsqrt(), ITfParser::TfParserImpl::ParseShape(), ITfParser::TfParserImpl::ParseSigmoid(), ITfParser::TfParserImpl::ParseSoftmax(), ITfParser::TfParserImpl::ParseSoftplus(), ITfParser::TfParserImpl::ParseSplit(), ITfParser::TfParserImpl::ParseSqueeze(), ITfParser::TfParserImpl::ParseStack(), ITfParser::TfParserImpl::ParseStridedSlice(), ITfParser::TfParserImpl::ParseSub(), ITfParser::TfParserImpl::ParseTanh(), ITfParser::TfParserImpl::ParseTranspose(), PermuteInputsForConcat(), PermuteTensorData(), PreluTest(), IInferenceTestCaseProvider::ProcessCommandLineOptions(), YoloTestCase< Model >::ProcessResult(), SelectiveQuantizer< T, false >::Quantize(), SelectiveQuantizer< armnn::Half, false >::Quantize(), SelectiveQuantizer< armnn::BFloat16, false >::Quantize(), RankTest(), MockProfilingConnection::ReadPacket(), TestProfilingConnectionArmnnError::ReadPacket(), TestProfilingConnectionBadAckPacket::ReadPacket(), CounterDirectory::RegisterCounter(), MockCounterDirectory::RegisterCounter(), OptimizeInverseConversionsImpl::Run(), OptimizeInversePermutesImpl< PermuteType >::Run(), SquashEqualSiblingsImpl< Comparable >::Run(), FuseBatchNorm< ConvLayer, ArmnnType, T >::Run(), ConvertConstants< Converter, Predicate >::Run(), 
MockSendCounterPacket::SendCounterDirectoryPacket(), MockSendCounterPacket::SendPeriodicCounterCapturePacket(), MockSendCounterPacket::SendPeriodicCounterSelectionPacket(), ILocalPacketHandler::SetConnection(), TypedIterator< const float, Decoder< float > >::SetIndex(), SetLogFilter(), SimpleActivationTest(), SimpleConvertFp16ToFp32Test(), SimpleConvertFp32ToFp16Test(), SimpleConvolution2d3x3NhwcTestCommon(), SimpleConvolution2d3x3Stride2x2TestCommon(), SimpleConvolution2dNhwcTestImpl(), SimpleConvolution2dTestImpl(), SimpleFillTest(), SimpleFloorTest(), SimplePermuteTestImpl(), SimpleTransposeTestImpl(), Slice(), SqrtNNTest(), OpenClTimer::Start(), Graph::SubstituteSubgraph(), TestDynamicBackendId(), TrueFunc(), InputLayerVisitor::VisitInputLayer(), OverrideInputRangeVisitor::VisitInputLayer(), MockProfilingServiceStatus::WaitForProfilingServiceActivation(), TestProfilingConnectionBase::WritePacket(), Graph::LayerInGraph< InputLayer >::~LayerInGraph(), Graph::LayerInGraph< OutputLayer >::~LayerInGraph(), and ScopedProfilingEvent::~ScopedProfilingEvent().
inline
Definition at line 90 of file ClWorkloadUtils.hpp.
References ARMNN_ASSERT.
inline
Definition at line 35 of file NeonWorkloadUtils.hpp.
References ARMNN_ASSERT, ARMNN_ASSERT_MSG, ARMNN_FALLTHROUGH, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, CopyArmComputeTensorData(), Float16, Float32, ConstCpuTensorHandle::GetConstTensor(), TensorInfo::GetDataType(), ConstCpuTensorHandle::GetTensorInfo(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, QuantizedSymm8PerAxis, and Signed32.
std::vector<ConvertBf16ToFp32Layer*> InsertConvertBf16ToFp32LayersBefore(Graph& graph, Layer& layer, bool expectCorrectInputType)
Definition at line 51 of file NetworkUtils.cpp.
References Layer::BeginInputSlots(), BFloat16, Layer::EndInputSlots(), Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment().
std::vector<ConvertFp16ToFp32Layer*> InsertConvertFp16ToFp32LayersBefore(Graph& graph, Layer& layer, bool expectCorrectInputType)
Definition at line 129 of file NetworkUtils.cpp.
References Layer::BeginInputSlots(), Layer::EndInputSlots(), Float16, Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment(), BOOST_AUTO_TEST_CASE(), and ConvertFp32NetworkToFp16Impl::Run().
std::vector<ConvertFp32ToBf16Layer*> InsertConvertFp32ToBf16LayersAfter(Graph& graph, Layer& layer)
Definition at line 168 of file NetworkUtils.cpp.
References BFloat16, Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment().
std::vector<ConvertFp32ToBf16Layer*> InsertConvertFp32ToBf16LayersBefore(Graph& graph, Layer& layer, bool expectCorrectInputType)
Definition at line 90 of file NetworkUtils.cpp.
References Layer::BeginInputSlots(), BFloat16, Layer::EndInputSlots(), Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumInputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by ConvertFp32NetworkToBf16Impl::Run().
std::vector<ConvertFp32ToFp16Layer*> InsertConvertFp32ToFp16LayersAfter(Graph& graph, Layer& layer)
Definition at line 201 of file NetworkUtils.cpp.
References Float16, Float32, InputSlot::GetConnectedOutputSlot(), TensorInfo::GetDataType(), Layer::GetInputSlot(), Layer::GetName(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), TensorInfo::SetDataType(), and OutputSlot::SetTensorInfo().
Referenced by AttemptBackendAssignment(), BOOST_AUTO_TEST_CASE(), and ConvertFp32NetworkToFp16Impl::Run().
std::vector<DebugLayer*> InsertDebugLayerAfter(Graph& graph, Layer& layer)
Definition at line 234 of file NetworkUtils.cpp.
References ARMNN_ASSERT, Layer::BeginOutputSlots(), CpuRef, Layer::EndOutputSlots(), InputSlot::GetConnectedOutputSlot(), Layer::GetInputSlot(), Layer::GetNameStr(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), OutputSlot::GetTensorInfo(), Graph::InsertNewLayer(), Layer::SetBackendId(), and OutputSlot::SetTensorInfo().
Referenced by DynamicQuantizationStrategy::FinishStrategy(), and AddDebugImpl::Run().
void InstanceNorm(const InstanceNormalizationQueueDescriptor& data, Decoder<float>& inputDecoder, Encoder<float>& outputEncoder)
Definition at line 18 of file InstanceNorm.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorInfo::GetShape(), GetTensorInfo(), DataLayoutIndexed::GetWidthIndex(), InstanceNormalizationDescriptor::m_Beta, InstanceNormalizationDescriptor::m_DataLayout, InstanceNormalizationDescriptor::m_Eps, InstanceNormalizationDescriptor::m_Gamma, QueueDescriptor::m_Inputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
Referenced by RefInstanceNormalizationWorkload::Execute().
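InstanceNorm normalizes each (batch, channel) slice of the input independently over its spatial extent, using the m_Gamma, m_Beta, and m_Eps descriptor fields referenced above. The following is a minimal NCHW sketch of that computation, written without ArmNN's Decoder/Encoder machinery; the function name and flat-vector interface are illustrative, not ArmNN's API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of instance normalisation for an NCHW tensor: each (batch, channel)
// slice is normalised over its H*W elements, then scaled by gamma and shifted
// by beta. eps guards against division by zero for constant slices.
std::vector<float> InstanceNormSketch(const std::vector<float>& input,
                                      unsigned int batches, unsigned int channels,
                                      unsigned int height, unsigned int width,
                                      float gamma, float beta, float eps)
{
    std::vector<float> output(input.size());
    const unsigned int spatial = height * width;
    for (unsigned int n = 0; n < batches; ++n)
    {
        for (unsigned int c = 0; c < channels; ++c)
        {
            const unsigned int offset = (n * channels + c) * spatial;
            // Mean over the spatial slice.
            float mean = 0.0f;
            for (unsigned int i = 0; i < spatial; ++i) { mean += input[offset + i]; }
            mean /= static_cast<float>(spatial);
            // Population variance over the spatial slice.
            float var = 0.0f;
            for (unsigned int i = 0; i < spatial; ++i)
            {
                const float d = input[offset + i] - mean;
                var += d * d;
            }
            var /= static_cast<float>(spatial);
            // Normalise, scale, shift.
            for (unsigned int i = 0; i < spatial; ++i)
            {
                output[offset + i] =
                    gamma * (input[offset + i] - mean) / std::sqrt(var + eps) + beta;
            }
        }
    }
    return output;
}
```

With gamma = 1 and beta = 0 each slice comes out zero-mean and unit-variance; ArmnN's real implementation additionally handles NHWC via DataLayoutIndexed, as the references above indicate.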
float IntersectionOverUnion(const float* boxI, const float* boxJ)
Definition at line 30 of file DetectionPostProcess.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), and NonMaxSuppression().
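IntersectionOverUnion is the overlap metric used by NonMaxSuppression in the detection post-processing path. A sketch of the standard computation follows; the [yMin, xMin, yMax, xMax] box layout is an assumption (TFLite-style), not confirmed by this page:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Intersection-over-union of two axis-aligned boxes.
// Assumed box layout: { yMin, xMin, yMax, xMax } (hypothetical; see lead-in).
float IouSketch(const float* boxI, const float* boxJ)
{
    const float areaI = (boxI[2] - boxI[0]) * (boxI[3] - boxI[1]);
    const float areaJ = (boxJ[2] - boxJ[0]) * (boxJ[3] - boxJ[1]);
    // Intersection rectangle; clamp to zero when the boxes are disjoint.
    const float yMin = std::max(boxI[0], boxJ[0]);
    const float xMin = std::max(boxI[1], boxJ[1]);
    const float yMax = std::min(boxI[2], boxJ[2]);
    const float xMax = std::min(boxI[3], boxJ[3]);
    const float intersection =
        std::max(yMax - yMin, 0.0f) * std::max(xMax - xMin, 0.0f);
    const float unionArea = areaI + areaJ - intersection;
    return unionArea > 0.0f ? intersection / unionArea : 0.0f;
}
```

Identical boxes yield 1.0 and disjoint boxes yield 0.0; NMS suppresses a candidate when its IoU with an already-kept box exceeds a threshold.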
bool IsActivationSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const ActivationDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 69 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
Referenced by BOOST_AUTO_TEST_CASE().
bool IsAdditionSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 79 of file LayerSupport.cpp.
References CheckTensorDataTypesEqual(), and FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsArgMinMaxSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const ArgMinMaxDescriptor& descriptor, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 94 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsBatchNormalizationSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const TensorInfo& mean, const TensorInfo& var, const TensorInfo& beta, const TensorInfo& gamma, const BatchNormalizationDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 104 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsBatchToSpaceNdSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const BatchToSpaceNdDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 126 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsBFloat16(const WorkloadInfo& info)
Definition at line 54 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug(), RefWorkloadFactory::CreatePermute(), and RefWorkloadFactory::CreateTranspose().
bool armnn::IsConcatSupported(const BackendId& backend, const std::vector<const TensorInfo*> inputs, const TensorInfo& output, const OriginsDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by IsConcatSupported().
bool armnn::IsConcatSupported(const BackendId& backend, std::vector<const TensorInfo*> inputs, const TensorInfo& output, const OriginsDescriptor& descriptor, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 140 of file LayerSupport.cpp.
References ARMNN_ASSERT, FORWARD_LAYER_SUPPORT_FUNC, and IsConcatSupported().
bool IsConstantSupported(const BackendId& backend, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 152 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsConvertFp16ToFp32Supported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 160 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsConvertFp32ToFp16Supported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 169 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsConvolution2dSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const Convolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 178 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsDataType(const WorkloadInfo& info)
Definition at line 33 of file RefWorkloadFactory.cpp.
References WorkloadInfo::m_InputTensorInfos, and WorkloadInfo::m_OutputTensorInfos.
bool IsDebugSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 190 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsDepthwiseConvolutionSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const DepthwiseConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 199 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, DepthwiseConvolution2dDescriptor::m_DilationX, and DepthwiseConvolution2dDescriptor::m_DilationY.
bool IsDequantizeSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 232 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, and IsDetectionPostProcessSupported().
bool armnn::IsDetectionPostProcessSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const DetectionPostProcessDescriptor& descriptor, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Referenced by IsDequantizeSupported().
bool IsDivisionSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 248 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsEqualSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 258 of file LayerSupport.cpp.
References Equal, and FORWARD_LAYER_SUPPORT_FUNC.
bool IsFakeQuantizationSupported(const BackendId& backend, const TensorInfo& input, const FakeQuantizationDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 273 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsFloat16(const WorkloadInfo& info)
Definition at line 59 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug().
bool IsFloorSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 282 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, TensorInfo::GetDataType(), and TensorInfo::GetShape().
bool IsFullyConnectedSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const TensorInfo& weights, const TensorInfo& biases, const FullyConnectedDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 296 of file LayerSupport.cpp.
References ARMNN_DEPRECATED_MSG, and FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsGatherSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 309 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
Referenced by IsGatherSupported().
bool armnn::IsGatherSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const GatherDescriptor& descriptor, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 320 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, and IsGatherSupported().
bool IsGreaterSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 331 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, and Greater.
bool IsInputSupported(const BackendId& backend, const TensorInfo& input, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 346 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
Referenced by BOOST_AUTO_TEST_CASE().
bool IsL2NormalizationSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const L2NormalizationDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 355 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsLstmSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& outputStateIn, const TensorInfo& cellStateIn, const TensorInfo& scratchBuffer, const TensorInfo& outputStateOut, const TensorInfo& cellStateOut, const TensorInfo& output, const LstmDescriptor& descriptor, const LstmInputParamsInfo& paramsInfo, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 365 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsMaximumSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnSupported = nullptr, size_t reasonIfUnSupportedMaxLength = 0)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 378 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsMeanSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const MeanDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 388 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsMemCopySupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 398 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsMemImportSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 407 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsMergerSupported(const BackendId& backend, const std::vector<const TensorInfo*> inputs, const TensorInfo& output, const OriginsDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by IsMergerSupported().
bool armnn::IsMergerSupported(const BackendId& backend, std::vector<const TensorInfo*> inputs, const TensorInfo& output, const OriginsDescriptor& descriptor, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 427 of file LayerSupport.cpp.
References ARMNN_ASSERT, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, FORWARD_LAYER_SUPPORT_FUNC, and IsMergerSupported().
bool IsMergeSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 416 of file LayerSupport.cpp.
References ARMNN_DEPRECATED_MSG, and FORWARD_LAYER_SUPPORT_FUNC.
bool IsMinimumSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 441 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsMultiplicationSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 451 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsNormalizationSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const NormalizationDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 461 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
constexpr bool armnn::IsOperationQueueDescriptor(const QueueDescriptorType&)
Definition at line 18 of file RefWorkloadFactory.hpp.
constexpr bool armnn::IsOperationQueueDescriptor(const MemCopyQueueDescriptor&)
Definition at line 21 of file RefWorkloadFactory.hpp.
constexpr bool armnn::IsOperationQueueDescriptor(const ConstantQueueDescriptor&)
Definition at line 24 of file RefWorkloadFactory.hpp.
constexpr bool armnn::IsOperationQueueDescriptor(const PermuteQueueDescriptor&)
Definition at line 27 of file RefWorkloadFactory.hpp.
bool IsOutputSupported(const BackendId& backend, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 471 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
Referenced by BOOST_AUTO_TEST_CASE().
bool IsPadSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const PadDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 479 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsPermuteSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const PermuteDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 532 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsPooling2dSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const Pooling2dDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 542 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsPreCompiledSupported(const BackendId& backend, const TensorInfo& input, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
bool IsPreluSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& alpha, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 552 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsQAsymmS8(const WorkloadInfo& info)
Definition at line 74 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug(), RefWorkloadFactory::CreatePermute(), and RefWorkloadFactory::CreateTranspose().
bool armnn::IsQAsymmU8(const WorkloadInfo& info)
Definition at line 79 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug().
bool armnn::IsQLstmSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& previousOutputIn, const TensorInfo& previousCellStateIn, const TensorInfo& outputStateOut, const TensorInfo& cellStateOut, const TensorInfo& output, const QLstmDescriptor& descriptor, const LstmInputParamsInfo& paramsInfo, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 499 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsQSymmS16(const WorkloadInfo& info)
Definition at line 64 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug(), RefWorkloadFactory::CreatePermute(), and RefWorkloadFactory::CreateTranspose().
bool armnn::IsQSymmS8(const WorkloadInfo& info)
Definition at line 69 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug().
constexpr bool armnn::IsQuantized8BitType(DataType dataType)
Definition at line 254 of file TypesUtils.hpp.
References ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, QAsymmS8, QAsymmU8, QSymmS8, and QuantizedSymm8PerAxis.
Referenced by GetBiasDataType(), RefLayerSupport::IsConvolution2dSupported(), RefLayerSupport::IsDepthwiseConvolutionSupported(), IsQuantizedType(), and RefLayerSupport::IsTransposeConvolution2dSupported().
bool IsQuantizedLstmSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& previousCellStateIn, const TensorInfo& previousOutputIn, const TensorInfo& cellStateOut, const TensorInfo& output, const QuantizedLstmInputParamsInfo& paramsInfo, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 516 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
constexpr bool armnn::IsQuantizedType()
Definition at line 249 of file TypesUtils.hpp.
Referenced by ClMultiplicationWorkload::ClMultiplicationWorkload(), RefWorkloadFactory::CreateFloor(), TensorInfo::IsQuantized(), NeonMultiplicationWorkload::NeonMultiplicationWorkload(), QuantizeQueueDescriptor::Validate(), and DequantizeQueueDescriptor::Validate().
constexpr bool armnn::IsQuantizedType(DataType dataType)
Definition at line 264 of file TypesUtils.hpp.
References IsQuantized8BitType(), and QSymmS16.
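As the references above show, IsQuantizedType(dataType) is IsQuantized8BitType() extended with the 16-bit symmetric type QSymmS16. An illustrative sketch of that relationship, using a stand-in enum rather than ArmNN's actual DataType header (the set of 8-bit values shown is inferred from the references and is an assumption):

```cpp
// Illustrative stand-in for armnn::DataType; values trimmed for the example.
enum class DataType { Float32, Float16, QAsymmU8, QAsymmS8, QSymmS8, QSymmS16, Signed32 };

// 8-bit quantized types (assumed set; see lead-in).
constexpr bool IsQuantized8BitTypeSketch(DataType t)
{
    return t == DataType::QAsymmU8 || t == DataType::QAsymmS8 || t == DataType::QSymmS8;
}

// The general predicate is the 8-bit predicate plus QSymmS16,
// mirroring the documented References line.
constexpr bool IsQuantizedTypeSketch(DataType t)
{
    return IsQuantized8BitTypeSketch(t) || t == DataType::QSymmS16;
}

static_assert(IsQuantizedTypeSketch(DataType::QSymmS16), "16-bit symmetric is quantized");
static_assert(!IsQuantized8BitTypeSketch(DataType::QSymmS16), "but it is not 8-bit");
static_assert(!IsQuantizedTypeSketch(DataType::Float32), "float is not quantized");
```

Being constexpr, both predicates can gate template instantiation or static_asserts at compile time, which is how the referenced workload validation code uses them.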
bool armnn::IsQuantizeSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 490 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsReadyForSplitAssignment(LayerSelectionInfo::LayerInfoContainer& layerInfos, LayerSelectionInfo& layerInfo)
Definition at line 370 of file SubgraphViewSelector.cpp.
References ForEachLayerInput().
Referenced by SubgraphViewSelector::SelectSubgraphs().
bool IsReduceSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const ReduceDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 562 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsReshapeSupported(const BackendId& backend, const TensorInfo& input, const ReshapeDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Referenced by IsReshapeSupported().
bool armnn::IsReshapeSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const ReshapeDescriptor& descriptor, char* reasonIfUnsupported, size_t reasonIfUnsupportedMaxLength)
Definition at line 572 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, and IsReshapeSupported().
bool IsResizeBilinearSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 593 of file LayerSupport.cpp.
References Bilinear, FORWARD_LAYER_SUPPORT_FUNC, IsResizeSupported(), ResizeDescriptor::m_Method, ResizeDescriptor::m_TargetHeight, and ResizeDescriptor::m_TargetWidth.
bool IsResizeSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const ResizeDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 582 of file LayerSupport.cpp.
References ARMNN_DEPRECATED_MSG, and FORWARD_LAYER_SUPPORT_FUNC.
Referenced by IsResizeBilinearSupported().
bool IsRsqrtSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 609 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, and Rsqrt.
bool armnn::IsSigned32(const WorkloadInfo& info)
Definition at line 49 of file RefWorkloadFactory.cpp.
References info.
Referenced by RefWorkloadFactory::CreateDebug().
bool IsSoftmaxSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const SoftmaxDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 622 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsSpaceToBatchNdSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const SpaceToBatchNdDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 632 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsSpaceToDepthSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const SpaceToDepthDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 642 of file LayerSupport.cpp.
References ARMNN_DEPRECATED_MSG, and FORWARD_LAYER_SUPPORT_FUNC.
bool IsSplitterSupported(const BackendId& backend, const TensorInfo& input, const ViewsDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Definition at line 653 of file LayerSupport.cpp.
References ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, and FORWARD_LAYER_SUPPORT_FUNC.
Referenced by IsSplitterSupported().
bool IsSplitterSupported(const BackendId& backend, const TensorInfo& input, const std::vector<std::reference_wrapper<TensorInfo>>& outputs, const ViewsDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 664 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC, and IsSplitterSupported().
bool armnn::IsStackSupported(const BackendId& backend, const std::vector<const TensorInfo*> inputs, const TensorInfo& output, const StackDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
bool IsStridedSliceSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const StridedSliceDescriptor& descriptor, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 674 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool IsSubtractionSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 684 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsSupportedForDataTypeGeneric(Optional<std::string&> reasonIfUnsupported, DataType dataType, Float16Func float16FuncPtr, Float32Func float32FuncPtr, Uint8Func uint8FuncPtr, Int32Func int32FuncPtr, BooleanFunc booleanFuncPtr, Params&&... params)
Definition at line 27 of file LayerSupportCommon.hpp.
References Boolean, Float16, Float32, QAsymmU8, and Signed32.
Referenced by RefLayerSupport::IsConvertFp16ToFp32Supported(), RefLayerSupport::IsConvertFp32ToFp16Supported(), and NeonLayerSupport::IsFloorSupported().
bool IsSwitchSupported(const BackendId& backend, const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output0, const TensorInfo& output1, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
Definition at line 694 of file LayerSupport.cpp.
References FORWARD_LAYER_SUPPORT_FUNC.
bool armnn::IsTransposeConvolution2dSupported(const BackendId& backend, const TensorInfo& input, const TensorInfo& output, const TransposeConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, char* reasonIfUnsupported = nullptr, size_t reasonIfUnsupportedMaxLength = 1024)
Deprecated in favor of IBackend and ILayerSupport interfaces.
constexpr LayerType armnn::LayerEnumOf(const T* = nullptr)
constexpr LayerType armnn::LayerEnumOf(const ActivationLayer*)
Definition at line 103 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const AdditionLayer*)
Definition at line 104 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ArgMinMaxLayer*)
Definition at line 105 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const BatchNormalizationLayer*)
Definition at line 106 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const BatchToSpaceNdLayer*)
Definition at line 107 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ComparisonLayer*)
Definition at line 108 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConcatLayer*)
Definition at line 109 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConstantLayer*)
Definition at line 110 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertBf16ToFp32Layer*)
Definition at line 111 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertFp16ToFp32Layer*)
Definition at line 112 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertFp32ToBf16Layer*)
Definition at line 113 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ConvertFp32ToFp16Layer*)
Definition at line 114 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const Convolution2dLayer*)
Definition at line 115 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DebugLayer*)
Definition at line 116 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DepthToSpaceLayer*)
Definition at line 117 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DepthwiseConvolution2dLayer*)
Definition at line 118 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DequantizeLayer*)
Definition at line 119 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DetectionPostProcessLayer*)
Definition at line 120 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const DivisionLayer*)
Definition at line 121 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ElementwiseUnaryLayer*)
Definition at line 122 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FakeQuantizationLayer*)
Definition at line 123 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FillLayer*)
Definition at line 124 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FloorLayer*)
Definition at line 125 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const FullyConnectedLayer*)
Definition at line 126 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const GatherLayer*)
Definition at line 127 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const InputLayer*)
Definition at line 128 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const InstanceNormalizationLayer*)
Definition at line 129 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const L2NormalizationLayer*)
Definition at line 130 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const LogicalBinaryLayer*)
Definition at line 131 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const LogSoftmaxLayer*)
Definition at line 132 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const LstmLayer*)
Definition at line 133 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MapLayer*)
Definition at line 134 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MaximumLayer*)
Definition at line 135 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MeanLayer*)
Definition at line 136 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MemCopyLayer*)
Definition at line 137 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MemImportLayer*)
Definition at line 138 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MergeLayer*)
Definition at line 139 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MinimumLayer*)
Definition at line 140 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const MultiplicationLayer*)
Definition at line 141 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const NormalizationLayer*)
Definition at line 142 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const OutputLayer*)
Definition at line 143 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PadLayer*)
Definition at line 144 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PermuteLayer*)
Definition at line 145 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const Pooling2dLayer*)
Definition at line 146 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PreCompiledLayer*)
Definition at line 147 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const PreluLayer*)
Definition at line 148 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const QuantizeLayer*)
Definition at line 149 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const QLstmLayer*)
Definition at line 150 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const QuantizedLstmLayer*)
Definition at line 151 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const RankLayer*)
Definition at line 152 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ReduceLayer*)
Definition at line 153 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ReshapeLayer*)
Definition at line 154 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const ResizeLayer*)
Definition at line 155 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SliceLayer*)
Definition at line 156 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SoftmaxLayer*)
Definition at line 157 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SpaceToBatchNdLayer*)
Definition at line 158 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SpaceToDepthLayer*)
Definition at line 159 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SplitterLayer*)
Definition at line 160 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const StackLayer*)
Definition at line 161 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const StandInLayer*)
Definition at line 162 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const StridedSliceLayer*)
Definition at line 163 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SubtractionLayer*)
Definition at line 164 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const SwitchLayer*)
Definition at line 165 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const TransposeLayer*)
Definition at line 166 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const TransposeConvolution2dLayer*)
Definition at line 167 of file LayersFwd.hpp.
constexpr LayerType armnn::LayerEnumOf(const UnmapLayer*)
Definition at line 168 of file LayersFwd.hpp.
std::string armnn::LevelToString(LogSeverity level)
Definition at line 15 of file Logging.hpp.
References Debug, Error, Fatal, Info, Trace, and Warning.
Referenced by ScopedRecord::ScopedRecord().
void LogSoftmax(Decoder<float>& input, Encoder<float>& output, const TensorInfo& inputInfo, const LogSoftmaxDescriptor& descriptor)
Definition at line 29 of file LogSoftmax.cpp.
References ARMNN_ASSERT_MSG, Decoder< IType >::Get(), TensorShape::GetNumDimensions(), TensorInfo::GetNumDimensions(), armnnUtils::GetNumElementsBetween(), TensorInfo::GetShape(), IgnoreUnused(), SoftmaxDescriptor::m_Axis, SoftmaxDescriptor::m_Beta, numeric_cast(), and Encoder< IType >::Set().
std::string armnn::LowerString(std::string value)
Definition at line 62 of file ClBackendContext.cpp.
std::unique_ptr<Decoder<float>> armnn::MakeDecoder(const TensorInfo& info, const void* data)
Definition at line 70 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, BFloat16, Boolean, Float16, Float32, TensorInfo::GetDataType(), armnnUtils::GetPerAxisParams(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::HasPerAxisQuantization(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, QuantizedSymm8PerAxis, and Signed32.
std::unique_ptr<Decoder<bool>> armnn::MakeDecoder(const TensorInfo& info, const void* data)
Definition at line 153 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, Boolean, and TensorInfo::GetDataType().
std::unique_ptr<Decoder<int32_t>> armnn::MakeDecoder(const TensorInfo& info, const void* data)
Definition at line 171 of file Decoders.hpp.
References ARMNN_ASSERT_MSG, TensorInfo::GetDataType(), and Signed32.
std::unique_ptr<Encoder<float>> armnn::MakeEncoder(const TensorInfo& info, void* data)
Definition at line 21 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, BFloat16, Boolean, Float16, Float32, TensorInfo::GetDataType(), armnnUtils::GetPerAxisParams(), TensorInfo::GetQuantizationOffset(), TensorInfo::GetQuantizationScale(), TensorInfo::HasPerAxisQuantization(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, QuantizedSymm8PerAxis, and Signed32.
std::unique_ptr<Encoder<bool>> armnn::MakeEncoder(const TensorInfo& info, void* data)
Definition at line 100 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, Boolean, and TensorInfo::GetDataType().
std::unique_ptr<Encoder<int32_t>> armnn::MakeEncoder(const TensorInfo& info, void* data)
Definition at line 118 of file Encoders.hpp.
References ARMNN_ASSERT_MSG, TensorInfo::GetDataType(), and Signed32.
arm_compute::DetectionPostProcessLayerInfo armnn::MakeInfo(const DetectionPostProcessDescriptor& desc)
Definition at line 17 of file NeonDetectionPostProcessWorkload.cpp.
References DetectionPostProcessDescriptor::m_DetectionsPerClass, DetectionPostProcessDescriptor::m_MaxClassesPerDetection, DetectionPostProcessDescriptor::m_MaxDetections, DetectionPostProcessDescriptor::m_NmsIouThreshold, DetectionPostProcessDescriptor::m_NmsScoreThreshold, DetectionPostProcessDescriptor::m_NumClasses, and DetectionPostProcessDescriptor::m_UseRegularNms.
Referenced by NeonDetectionPostProcessValidate().
Optimizer::Optimizations armnn::MakeOptimizations(Args&&... args)
Definition at line 43 of file Optimizer.hpp.
References Append().
Referenced by AddBroadcastReshapeLayerOptimizerTest(), BOOST_AUTO_TEST_CASE(), and Optimize().
Optional<T> armnn::MakeOptional(Args&&... args)
Utility template that constructs an object of type T in-place and wraps it inside an Optional<T> object.
Definition at line 305 of file Optional.hpp.
References CONSTRUCT_IN_PLACE.
constexpr TransformIterator<Function, Iterator> armnn::MakeTransformIterator(Iterator i, Function f)
Definition at line 77 of file TransformIterator.hpp.
constexpr const char* armnn::MockBackendId()
Definition at line 11 of file MockBackendId.hpp.
Referenced by BOOST_AUTO_TEST_CASE(), MockBackend::GetIdStatic(), and MockBackend::OptimizeSubgraphView().
constexpr const char* armnn::MockImportBackendId()
Definition at line 12 of file MockImportBackend.hpp.
Referenced by BOOST_AUTO_TEST_CASE(), and MockImportBackend::GetIdStatic().
arm_compute::Status NeonAbsWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonAbsWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonActivationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ActivationDescriptor& descriptor)
Definition at line 17 of file NeonActivationWorkload.cpp.
Referenced by NeonLayerSupport::IsActivationSupported().
arm_compute::Status NeonAdditionWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 20 of file NeonAdditionWorkload.cpp.
Referenced by NeonLayerSupport::IsAdditionSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonArgMinMaxWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ArgMinMaxDescriptor& descriptor)
Definition at line 31 of file NeonArgMinMaxWorkload.cpp.
Referenced by NeonLayerSupport::IsArgMinMaxSupported().
constexpr const char* armnn::NeonBackendId()
Definition at line 10 of file NeonBackendId.hpp.
Referenced by NeonBackend::GetIdStatic().
arm_compute::Status NeonBatchNormalizationValidate(const TensorInfo& input, const TensorInfo& output, const TensorInfo& mean, const TensorInfo& var, const TensorInfo& beta, const TensorInfo& gamma, const BatchNormalizationDescriptor& descriptor, const ActivationDescriptor* activationDescriptor)
Definition at line 24 of file NeonBatchNormalizationWorkload.cpp.
Referenced by NeonLayerSupport::IsBatchNormalizationSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonBatchToSpaceNdWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const BatchToSpaceNdDescriptor& desc)
Definition at line 20 of file NeonBatchToSpaceNdWorkload.cpp.
Referenced by NeonLayerSupport::IsBatchToSpaceNdSupported().
arm_compute::Status NeonComparisonWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ComparisonDescriptor& descriptor)
Definition at line 16 of file NeonComparisonWorkload.cpp.
Referenced by NeonLayerSupport::IsComparisonSupported().
arm_compute::Status NeonConcatWorkloadValidate(const std::vector<const TensorInfo*>& inputs, const TensorInfo& output, const OriginsDescriptor& descriptor)
Definition at line 27 of file NeonConcatWorkload.cpp.
Referenced by NeonLayerSupport::IsConcatSupported().
arm_compute::Status NeonConstantWorkloadValidate(const TensorInfo& output)
Definition at line 20 of file NeonConstantWorkload.cpp.
Referenced by NeonLayerSupport::IsConstantSupported().
arm_compute::Status NeonConvolution2dWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const Convolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, bool isFastMathEnabled, const ActivationDescriptor* activationDescriptor)
Definition at line 24 of file NeonConvolution2dWorkload.cpp.
Referenced by NeonLayerSupport::IsConvolution2dSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonDepthToSpaceWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const DepthToSpaceDescriptor& descriptor)
Definition at line 19 of file NeonDepthToSpaceWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by NeonLayerSupport::IsDepthToSpaceSupported().
arm_compute::Status NeonDepthwiseConvolutionWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const DepthwiseConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases, const ActivationDescriptor* activationDescriptor)
Definition at line 29 of file NeonDepthwiseConvolutionWorkload.cpp.
Referenced by NeonLayerSupport::IsDepthwiseConvolutionSupported(), NeonLayerSupport::IsDilatedDepthwiseConvolutionSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonDequantizeWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 22 of file NeonDequantizeWorkload.cpp.
Referenced by NeonLayerSupport::IsDequantizeSupported().
bool NeonDetected()
arm_compute::Status NeonDetectionPostProcessValidate(const TensorInfo& boxEncodings, const TensorInfo& scores, const TensorInfo& anchors, const TensorInfo& detectionBoxes, const TensorInfo& detectionClasses, const TensorInfo& detectionScores, const TensorInfo& numDetections, const DetectionPostProcessDescriptor& desc)
Definition at line 32 of file NeonDetectionPostProcessWorkload.cpp.
References info, and MakeInfo().
arm_compute::Status NeonDivisionWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 18 of file NeonDivisionWorkload.cpp.
Referenced by NeonLayerSupport::IsDivisionSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonExpWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonExpWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonFullyConnectedWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const TensorInfo& weights, const TensorInfo& biases, const FullyConnectedDescriptor& descriptor, const ActivationDescriptor* activationDescriptor)
Definition at line 23 of file NeonFullyConnectedWorkload.cpp.
Referenced by NeonLayerSupport::IsFullyConnectedSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonGatherWorkloadValidate(const TensorInfo& input, const TensorInfo& indices, const TensorInfo& output, const GatherDescriptor& descriptor)
Definition at line 13 of file NeonGatherWorkload.cpp.
Referenced by NeonLayerSupport::IsGatherSupported().
arm_compute::Status NeonInstanceNormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const InstanceNormalizationDescriptor& descriptor)
Definition at line 19 of file NeonInstanceNormalizationWorkload.cpp.
Referenced by NeonLayerSupport::IsInstanceNormalizationSupported().
arm_compute::Status NeonL2NormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const L2NormalizationDescriptor& descriptor)
Definition at line 19 of file NeonL2NormalizationFloatWorkload.cpp.
Referenced by NeonLayerSupport::IsL2NormalizationSupported().
arm_compute::Status NeonLogicalAndWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 18 of file NeonLogicalAndWorkload.cpp.
Referenced by NeonLayerSupport::IsLogicalBinarySupported().
arm_compute::Status NeonLogicalNotWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 19 of file NeonLogicalNotWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonLogicalOrWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 18 of file NeonLogicalOrWorkload.cpp.
Referenced by NeonLayerSupport::IsLogicalBinarySupported().
arm_compute::Status NeonLogSoftmaxWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const LogSoftmaxDescriptor& descriptor)
Definition at line 19 of file NeonLogSoftmaxWorkload.cpp.
Referenced by NeonLayerSupport::IsLogSoftmaxSupported().
arm_compute::Status NeonLstmFloatWorkloadValidate(const TensorInfo& input, const TensorInfo& outputStateIn, const TensorInfo& cellStateIn, const TensorInfo& scratchBuffer, const TensorInfo& outputStateOut, const TensorInfo& cellStateOut, const TensorInfo& output, const LstmDescriptor& descriptor, const LstmInputParamsInfo& paramsInfo)
Definition at line 273 of file NeonLstmFloatWorkload.cpp.
Referenced by NeonLayerSupport::IsLstmSupported().
arm_compute::Status NeonMaximumWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Definition at line 14 of file NeonMaximumWorkload.cpp.
Referenced by NeonLayerSupport::IsMaximumSupported().
arm_compute::Status NeonMeanWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const MeanDescriptor& desc)
Definition at line 18 of file NeonMeanWorkload.cpp.
Referenced by NeonLayerSupport::IsMeanSupported().
arm_compute::Status NeonMinimumWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output)
Validate function for validating the inputs and output.
Parameters:
[in] input0 The input0 value to be validated.
[in] input1 The input1 value to be validated.
[in] output The output value to be validated.
Definition at line 15 of file NeonMinimumWorkload.cpp.
Referenced by NeonLayerSupport::IsMinimumSupported().
arm_compute::Status NeonMultiplicationWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 19 of file NeonMultiplicationWorkload.cpp.
Referenced by NeonLayerSupport::IsMultiplicationSupported(), and NeonBackend::OptimizeSubgraphView().
arm_compute::Status NeonNegWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonNegWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonNormalizationWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const NormalizationDescriptor& descriptor)
Definition at line 48 of file NeonNormalizationFloatWorkload.cpp.
Referenced by NeonLayerSupport::IsNormalizationSupported().
arm_compute::Status NeonPadWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const PadDescriptor& descriptor)
Definition at line 48 of file NeonPadWorkload.cpp.
Referenced by NeonLayerSupport::IsPadSupported().
arm_compute::Status NeonPermuteWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const PermuteDescriptor& descriptor)
Definition at line 15 of file NeonPermuteWorkload.cpp.
Referenced by NeonLayerSupport::IsPermuteSupported().
arm_compute::Status NeonPooling2dWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const Pooling2dDescriptor& descriptor)
Definition at line 22 of file NeonPooling2dWorkload.cpp.
Referenced by NeonLayerSupport::IsPooling2dSupported().
arm_compute::Status NeonPreluWorkloadValidate(const TensorInfo& input, const TensorInfo& alpha, const TensorInfo& output)
Definition at line 17 of file NeonPreluWorkload.cpp.
Referenced by NeonLayerSupport::IsPreluSupported().
arm_compute::Status NeonQLstmWorkloadValidate(const TensorInfo& input, const TensorInfo& cellStateIn, const TensorInfo& outputStateIn, const TensorInfo& cellStateOut, const TensorInfo& outputStateOut, const TensorInfo& output, const QLstmDescriptor& descriptor, const LstmInputParamsInfo& paramsInfo)
Definition at line 236 of file NeonQLstmWorkload.cpp.
Referenced by NeonLayerSupport::IsQLstmSupported().
arm_compute::Status NeonQuantizedLstmWorkloadValidate(const TensorInfo& input, const TensorInfo& cellStateIn, const TensorInfo& outputStateIn, const TensorInfo& cellStateOut, const TensorInfo& outputStateOut, const QuantizedLstmInputParamsInfo& paramsInfo)
Definition at line 130 of file NeonQuantizedLstmWorkload.cpp.
Referenced by NeonLayerSupport::IsQuantizedLstmSupported().
arm_compute::Status NeonQuantizeWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 18 of file NeonQuantizeWorkload.cpp.
Referenced by NeonLayerSupport::IsQuantizeSupported().
arm_compute::Status NeonReduceWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ReduceDescriptor& desc)
Definition at line 19 of file NeonReduceWorkload.cpp.
Referenced by NeonLayerSupport::IsReduceSupported().
arm_compute::Status NeonReshapeWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 17 of file NeonReshapeWorkload.cpp.
Referenced by NeonLayerSupport::IsReshapeSupported().
arm_compute::Status NeonResizeWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const ResizeDescriptor& descriptor)
Definition at line 22 of file NeonResizeWorkload.cpp.
Referenced by NeonLayerSupport::IsResizeSupported().
arm_compute::Status NeonRsqrtWorkloadValidate(const TensorInfo& input, const TensorInfo& output)
Definition at line 18 of file NeonRsqrtWorkload.cpp.
Referenced by NeonLayerSupport::IsElementwiseUnarySupported().
arm_compute::Status NeonSliceWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const SliceDescriptor& descriptor)
Definition at line 21 of file NeonSliceWorkload.cpp.
Referenced by NeonLayerSupport::IsSliceSupported().
arm_compute::Status NeonSoftmaxWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const SoftmaxDescriptor& descriptor)
Definition at line 19 of file NeonSoftmaxWorkload.cpp.
Referenced by NeonLayerSupport::IsSoftmaxSupported().
arm_compute::Status NeonSpaceToBatchNdWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const SpaceToBatchNdDescriptor& descriptor)
Definition at line 20 of file NeonSpaceToBatchNdWorkload.cpp.
Referenced by NeonLayerSupport::IsSpaceToBatchNdSupported().
arm_compute::Status NeonSpaceToDepthWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const SpaceToDepthDescriptor& descriptor)
Definition at line 19 of file NeonSpaceToDepthWorkload.cpp.
References SpaceToDepthDescriptor::m_DataLayout.
Referenced by NeonLayerSupport::IsSpaceToDepthSupported().
arm_compute::Status NeonSplitterWorkloadValidate(const TensorInfo& input, const std::vector<std::reference_wrapper<TensorInfo>>& outputs, unsigned int splitAxis)
Definition at line 32 of file NeonSplitterWorkload.cpp.
Referenced by NeonLayerSupport::IsSplitterSupported().
arm_compute::Status NeonStackWorkloadValidate(const std::vector<const TensorInfo*>& inputs, const TensorInfo& output, const StackDescriptor& descriptor)
Definition at line 27 of file NeonStackWorkload.cpp.
Referenced by NeonLayerSupport::IsStackSupported().
arm_compute::Status NeonStridedSliceWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const StridedSliceDescriptor& descriptor)
Definition at line 19 of file NeonStridedSliceWorkload.cpp.
Referenced by NeonLayerSupport::IsStridedSliceSupported().
arm_compute::Status NeonSubtractionWorkloadValidate(const TensorInfo& input0, const TensorInfo& input1, const TensorInfo& output, const ActivationDescriptor* activationDescriptor)
Definition at line 22 of file NeonSubtractionWorkload.cpp.
Referenced by NeonLayerSupport::IsSubtractionSupported(), and NeonBackend::OptimizeSubgraphView().
constexpr const char* armnn::NeonTensorHandleFactoryId()
Definition at line 14 of file NeonTensorHandleFactory.hpp.
Referenced by NeonTensorHandleFactory::GetIdStatic().
arm_compute::Status NeonTransposeConvolution2dWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const TransposeConvolution2dDescriptor& descriptor, const TensorInfo& weights, const Optional<TensorInfo>& biases)
Definition at line 25 of file NeonTransposeConvolution2dWorkload.cpp.
Referenced by NeonLayerSupport::IsTransposeConvolution2dSupported().
arm_compute::Status NeonTransposeWorkloadValidate(const TensorInfo& input, const TensorInfo& output, const TransposeDescriptor& descriptor)
Definition at line 15 of file NeonTransposeWorkload.cpp.
Referenced by NeonLayerSupport::IsTransposeSupported().
bool armnn::NextIndex(const unsigned int numDims, const armnn::TensorShape& dims, std::vector<unsigned int>& current)
std::vector<unsigned int> NonMaxSuppression(unsigned int numBoxes, const std::vector<float>& boxCorners, const std::vector<float>& scores, float nmsScoreThreshold, unsigned int maxDetection, float nmsIouThreshold)
Definition at line 49 of file DetectionPostProcess.cpp.
References GenerateRangeK(), IntersectionOverUnion(), numeric_cast(), and TopKSort().
Referenced by BOOST_AUTO_TEST_CASE(), and DetectionPostProcess().
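The greedy, score-ordered suppression this function performs can be sketched without any Arm NN types. This is not the Arm NN implementation: `Iou` and `NonMaxSuppressionSketch` are hypothetical names, and the per-box corner layout `{yMin, xMin, yMax, xMax}` is an assumption.

```cpp
#include <algorithm>
#include <vector>

// Intersection-over-union of two axis-aligned boxes, each given as
// four floats {yMin, xMin, yMax, xMax} (assumed layout).
float Iou(const float* a, const float* b)
{
    const float yMin = std::max(a[0], b[0]);
    const float xMin = std::max(a[1], b[1]);
    const float yMax = std::min(a[2], b[2]);
    const float xMax = std::min(a[3], b[3]);
    const float inter = std::max(0.0f, yMax - yMin) * std::max(0.0f, xMax - xMin);
    const float areaA = (a[2] - a[0]) * (a[3] - a[1]);
    const float areaB = (b[2] - b[0]) * (b[3] - b[1]);
    const float uni = areaA + areaB - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

// Greedy non-max suppression: keep the highest-scoring box, then drop any
// later box whose IoU with an already-kept box exceeds the threshold.
std::vector<unsigned int> NonMaxSuppressionSketch(unsigned int numBoxes,
                                                  const std::vector<float>& boxCorners,
                                                  const std::vector<float>& scores,
                                                  float scoreThreshold,
                                                  unsigned int maxDetections,
                                                  float iouThreshold)
{
    std::vector<unsigned int> candidates;
    for (unsigned int i = 0; i < numBoxes; ++i)
    {
        if (scores[i] >= scoreThreshold) { candidates.push_back(i); }
    }
    std::sort(candidates.begin(), candidates.end(),
              [&](unsigned int a, unsigned int b) { return scores[a] > scores[b]; });

    std::vector<unsigned int> kept;
    for (unsigned int idx : candidates)
    {
        if (kept.size() >= maxDetections) { break; }
        bool suppressed = false;
        for (unsigned int k : kept)
        {
            if (Iou(&boxCorners[4 * idx], &boxCorners[4 * k]) > iouThreshold)
            {
                suppressed = true;
                break;
            }
        }
        if (!suppressed) { kept.push_back(idx); }
    }
    return kept;
}
```

With two heavily overlapping boxes and one disjoint box, the second of the overlapping pair is suppressed and the other two indices are returned in score order.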
std::enable_if_t<std::is_unsigned<Source>::value && std::is_unsigned<Dest>::value, Dest> armnn::numeric_cast(Source source)
Definition at line 35 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
Referenced by ICaffeParser::CaffeParserImpl::AddConvLayerWithDepthwiseConv(), ICaffeParser::CaffeParserImpl::AddConvLayerWithSplits(), ICaffeParser::CaffeParserImpl::AddDeconvLayerWithSplits(), AllocateOutputData(), ArgMinMax(), BOOST_AUTO_TEST_CASE(), ClArgMinMaxWorkload::ClArgMinMaxWorkload(), ClSpaceToBatchNdWorkload::ClSpaceToBatchNdWorkload(), ClStridedSliceWorkload::ClStridedSliceWorkload(), CompareActivationTestImpl(), armnnTfLiteParser::ComputeWrappedIndex(), OutputSlot::Connect(), CreateNetworkImpl< IParser >::Create(), SendCounterPacket::CreateCategoryRecord(), SendCounterPacket::CreateEventRecord(), TfLiteParserImpl::CreateNetworkFromBinary(), RecordByRecordCaffeParser::CreateNetworkFromBinaryFile(), DepthwiseConvolution2dAsymmetricTestImpl(), DepthwiseConvolution2dTestImpl(), DetectionPostProcess(), RefL2NormalizationWorkload::Execute(), armnnUtils::ExpandDims(), FakeQuantization(), Gather(), CounterDirectory::GetCategoryCount(), MockCounterDirectory::GetCategoryCount(), CounterDirectory::GetCounterCount(), MockCounterDirectory::GetCounterCount(), CounterDirectory::GetCounterSetCount(), MockCounterDirectory::GetCounterSetCount(), CounterDirectory::GetDeviceCount(), MockCounterDirectory::GetDeviceCount(), IDeserializer::DeserializerImpl::GetNetworkOutputBindingInfo(), ICaffeParser::GetNetworkOutputBindingInfo(), ITfParser::GetNetworkOutputBindingInfo(), OutputSlot::GetNumConnections(), SubgraphView::GetNumInputSlots(), SubgraphView::GetNumOutputSlots(), StridedSliceDescriptor::GetStartForAxis(), StridedSliceDescriptor::GetStopForAxis(), GetStreamMetaDataPacketSize(), Cifar10Database::GetTestCaseData(), CaffePreprocessor::GetTestCaseData(), YoloDatabase::GetTestCaseData(), armnnUtils::GetUnsignedAxis(), RequestCountersPacketHandler::HandlePacket(), InferenceTestImage::InferenceTestImage(), PreluLayer::InferOutputShapes(), RefLayerSupport::IsMeanSupported(), ICaffeParser::CaffeParserImpl::LoadNetParam(), ITfParser::TfParserImpl::LoadNodeDef(), 
LogSoftmax(), NeonArgMinMaxWorkload::NeonArgMinMaxWorkload(), NeonSpaceToBatchNdWorkload::NeonSpaceToBatchNdWorkload(), NeonStridedSliceWorkload::NeonStridedSliceWorkload(), NonMaxSuppression(), ClassifierTestCaseProvider< TDatabase, InferenceModel >::OnInferenceTestFinished(), armnnTfParser::OutputShapeOfExpandDims(), IDeserializer::DeserializerImpl::OutputShapeOfReshape(), TfLiteParserImpl::OutputShapeOfReshape(), ParseArray(), ParseDataArray< armnn::DataType::QAsymmU8 >(), ICaffeParser::CaffeParserImpl::ParseInputLayer(), ICaffeParser::CaffeParserImpl::ParseLRNLayer(), ITfParser::TfParserImpl::ParsePlaceholder(), Pooling2d(), ClassifierTestCase< TTestCaseDatabase, TModel >::ProcessResult(), QuantizerStrategy::QuantizerStrategy(), Reduce(), InferenceModel< IParser, TDataType >::Run(), ClContextSerializer::SaveSerializedToStream(), ISerializer::SerializerImpl::SaveSerializedToStream(), SendCounterPacket::SendPeriodicCounterCapturePacket(), SendCounterPacket::SendPeriodicCounterSelectionPacket(), SendCounterPacket::SendStreamMetaDataPacket(), SimpleConvolution2dNhwcTestImpl(), SimpleConvolution2dTestImpl(), InferenceTestImage::StbResize(), StridedSlice(), Graph::SubstituteSubgraph(), MeanQueueDescriptor::Validate(), ReduceLayer::ValidateTensorShapesFromInputs(), MeanLayer::ValidateTensorShapesFromInputs(), VerifyTimelineLabelBinaryPacketData(), armnn::profiling::WriteTimelineLabelBinaryPacket(), and armnn::profiling::WriteTimelineMessageDirectoryPackage().
std::enable_if_t<std::is_signed<Source>::value && std::is_integral<Source>::value && std::is_signed<Dest>::value && std::is_integral<Dest>::value, Dest> armnn::numeric_cast(Source source)
Definition at line 58 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_floating_point<Source>::value && std::is_floating_point<Dest>::value, Dest> armnn::numeric_cast(Source source)
Definition at line 83 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_floating_point<Source>::value && std::is_signed<Dest>::value && std::is_integral<Dest>::value, Dest> armnn::numeric_cast(Source source)
Definition at line 109 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_signed<Source>::value && std::is_integral<Source>::value && std::is_floating_point<Dest>::value, Dest> armnn::numeric_cast(Source source)
Definition at line 135 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_signed<Dest>::value && std::is_integral<Dest>::value && std::is_unsigned<Source>::value, Dest> armnn::numeric_cast(Source sValue)
Definition at line 165 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_floating_point<Dest>::value && std::is_unsigned<Source>::value, Dest> armnn::numeric_cast(Source sValue)
Definition at line 184 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_unsigned<Dest>::value && std::is_signed<Source>::value && std::is_integral<Source>::value, Dest> armnn::numeric_cast(Source sValue)
Definition at line 206 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
std::enable_if_t<std::is_unsigned<Dest>::value && std::is_floating_point<Source>::value, Dest> armnn::numeric_cast(Source sValue)
Definition at line 230 of file NumericCast.hpp.
References ARMNN_NUMERIC_CAST_CHECK.
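Each overload above is selected via `std::enable_if` on the signedness and floating-point properties of `Source` and `Dest`, and ARMNN_NUMERIC_CAST_CHECK rejects out-of-range values rather than silently wrapping. A standalone sketch of the idea for the integral-to-integral case (`checked_cast` is a hypothetical name; the throwing behaviour is an assumption, not how the macro reports failure):

```cpp
#include <limits>
#include <stdexcept>
#include <type_traits>

// Range-checked cast in the spirit of armnn::numeric_cast: refuse the
// conversion when the value does not fit the destination type.
// Integral types only; the real header dispatches on signedness and
// floating-point-ness across several std::enable_if overloads.
template <typename Dest, typename Source>
Dest checked_cast(Source value)
{
    static_assert(std::is_integral<Source>::value && std::is_integral<Dest>::value,
                  "this sketch covers integral types only");
    if (value < Source{0})
    {
        // Covers negative-to-unsigned (min is 0) and signed underflow alike.
        if (static_cast<long long>(value) <
            static_cast<long long>(std::numeric_limits<Dest>::min()))
        {
            throw std::out_of_range("numeric_cast failed: value below Dest range");
        }
    }
    else if (static_cast<unsigned long long>(value) >
             static_cast<unsigned long long>(std::numeric_limits<Dest>::max()))
    {
        throw std::out_of_range("numeric_cast failed: value above Dest range");
    }
    return static_cast<Dest>(value);
}
```

In-range values pass through unchanged; `checked_cast<unsigned int>(-1)` throws instead of producing 4294967295.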
inline
Definition at line 19 of file BatchToSpaceNd.cpp.
References DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetWidthIndex(), and NHWC.
Referenced by BatchToSpaceNd().
inline
Deprecated function that will be removed together with the Compute enum.
Definition at line 47 of file BackendId.hpp.
References GetComputeDeviceAsCString().
inline
Deprecated function that will be removed together with the Compute enum.
Definition at line 58 of file BackendId.hpp.
References GetComputeDeviceAsCString().
inline
Definition at line 61 of file IBackendInternal.hpp.
References BackendVersion::m_Major, and BackendVersion::m_Minor.
inline
Deprecated function that will be removed together with the Compute enum.
Definition at line 69 of file BackendId.hpp.
References GetComputeDeviceAsCString().
inline
Definition at line 119 of file BFloat16.hpp.
References BFloat16::ToFloat32(), and BFloat16::Val().
inline
Definition at line 174 of file BackendId.hpp.
std::ostream& armnn::operator<<(std::ostream& os, const TContainer<BackendId, TContainerTemplateArgs...>& ids)
Definition at line 181 of file BackendId.hpp.
inline
Definition at line 269 of file TypesUtils.hpp.
References GetStatusAsCString().
inline
Definition at line 276 of file TypesUtils.hpp.
References Dequantize, TensorShape::GetNumDimensions(), and Quantize.
inline
Definition at line 21 of file InferenceTest.hpp.
References ParseComputeDevice(), and Undefined.
inline
Definition at line 34 of file InferenceTest.hpp.
References ParseComputeDevice(), and Undefined.
IOptimizedNetworkPtr Optimize(const INetwork& network, const std::vector<BackendId>& backendPreferences, const IDeviceSpec& deviceSpec, const OptimizerOptions& options = OptimizerOptions(), Optional<std::vector<std::string>&> messages = EmptyOptional())
Create an optimized version of the network.
network: INetwork description of the network to be optimized.
backendPreferences: The choice of backends, ordered by user preference.
deviceSpec: DeviceSpec object as queried from the runtime. See IRuntime::GetDeviceSpec().
messages: If there are failures or warnings, a string describing them will be added to the vector.
options: OptimizerOptions object with optimizer configuration options.
Definition at line 1502 of file Network.cpp.
References Graph::AddCompatibilityLayers(), ApplyBackendOptimizations(), ARMNN_ASSERT, ARMNN_NO_DEPRECATE_WARN_BEGIN, ARMNN_NO_DEPRECATE_WARN_END, AssignBackends(), BackendRegistryInstance(), Graph::begin(), CreateSupportedBackends(), IOptimizedNetwork::Destroy(), Graph::end(), BackendSettings::GetAvailablePreferredBackends(), BackendRegistry::GetFactory(), Graph::InferTensorInfos(), IOptimizedNetwork::IOptimizedNetwork(), OptimizerOptions::m_Debug, OptimizationResult::m_Error, OptimizerOptions::m_ImportEnabled, OptimizerOptions::m_ModelOptions, OptimizerOptions::m_ReduceFp32ToBf16, OptimizerOptions::m_ReduceFp32ToFp16, BackendSettings::m_SelectedBackends, BackendSettings::m_SupportedBackends, MakeOptimizations(), Optimizer::Pass(), INetwork::pNetworkImpl, IOptimizedNetwork::pOptimizedNetworkImpl, ReportError(), and SelectTensorHandleStrategy().
Referenced by BOOST_AUTO_TEST_CASE(), BOOST_FIXTURE_TEST_CASE(), GetSoftmaxProfilerJson(), InferenceModel< IParser, TDataType >::InferenceModel(), main(), QLstmEndToEnd(), QuantizedLstmEndToEnd(), NetworkQuantizer::Refine(), ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::Setup(), ParserFlatbuffersSerializeFixture::Setup(), ParserFlatbuffersFixture::Setup(), ParserPrototxtFixture< armnnOnnxParser::IOnnxParser >::SetupOptimizedNetwork(), and VerifyPostOptimisationStructureTestImpl().
void Pad(const TensorInfo& inputInfo, const TensorInfo& outputInfo, const PadQueueDescriptor& data)
Definition at line 39 of file Pad.cpp.
References Decoder< IType >::Get(), TensorShape::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, PadDescriptor::m_PadList, PadDescriptor::m_PadValue, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
bool armnn::ParseBoolean(const BackendOptions::Var& value, bool defaultValue)
Definition at line 97 of file ClBackendContext.cpp.
References BackendOptions::Var::AsBool(), and BackendOptions::Var::IsBool().
constexpr armnn::Compute armnn::ParseComputeDevice(const char* str)
Deprecated function that will be removed together with the Compute enum.
Definition at line 160 of file TypesUtils.hpp.
References CpuAcc, CpuRef, GpuAcc, StrEqual(), and Undefined.
Referenced by operator>>().
std::string armnn::ParseFile(const BackendOptions::Var& value, std::string defaultValue)
Definition at line 106 of file ClBackendContext.cpp.
References BackendOptions::Var::AsString(), and BackendOptions::Var::IsString().
Referenced by ClBackendContext::ClBackendContext(), and ClBackendModelContext::ClBackendModelContext().
void armnn::ParseOptions(const std::vector<BackendOptions>& options, BackendId backend, F f)
Definition at line 283 of file BackendOptions.hpp.
References BackendOptions::BackendOption::GetName(), and BackendOptions::BackendOption::GetValue().
Referenced by ClBackendContext::ClBackendContext(), ClBackendModelContext::ClBackendModelContext(), and NeonBackendModelContext::NeonBackendModelContext().
TuningLevel armnn::ParseTuningLevel(const BackendOptions::Var& value, TuningLevel defaultValue)
Definition at line 79 of file ClBackendContext.cpp.
References ARMNN_LOG, BackendOptions::Var::AsInt(), Exhaustive, BackendOptions::Var::IsInt(), None, and warning.
Referenced by ClBackendContext::ClBackendContext().
armnn::ConstTensor PermuteTensor(const ConstCpuTensorHandle* tensor, const PermutationVector& permutationVector, void* permuteBuffer)
Definition at line 14 of file WorkloadUtils.cpp.
References ARMNN_ASSERT_MSG, ConstCpuTensorHandle::GetConstTensor(), TensorInfo::GetDataType(), GetDataTypeSize(), TensorInfo::GetNumBytes(), TensorInfo::GetShape(), PermutationVector::GetSize(), ConstCpuTensorHandle::GetTensorInfo(), Permute, and armnnUtils::Permuted().
Referenced by ConvertWeightTensorFromArmnnToAcl(), and GatherTensorHandlePairs().
DestType armnn::PolymorphicDowncast(SourceType value)
Polymorphic downcast for built-in pointers only.
Usage: Child* pChild = PolymorphicDowncast<Child*>(pBase);
DestType: Pointer type to the target object (Child pointer type)
SourceType: Pointer type to the source object (Base pointer type)
value: Pointer to the source object
Definition at line 74 of file PolymorphicDowncast.hpp.
References ARMNN_POLYMORPHIC_CAST_CHECK.
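The documented usage pattern can be illustrated standalone. `polymorphic_downcast` here is a hypothetical re-sketch, and using `assert`/`dynamic_cast` as the debug-build check is an assumption about what ARMNN_POLYMORPHIC_CAST_CHECK amounts to:

```cpp
#include <cassert>
#include <type_traits>

struct Base { virtual ~Base() = default; };
struct Child : Base { int value = 42; };

// Sketch of the PolymorphicDowncast contract: a cheap static_cast whose
// correctness is verified with dynamic_cast in debug builds.
template <typename DestType, typename SourceType>
DestType polymorphic_downcast(SourceType value)
{
    static_assert(std::is_pointer<SourceType>::value, "source must be a built-in pointer");
    assert(dynamic_cast<DestType>(value) == static_cast<DestType>(value));
    return static_cast<DestType>(value);
}
```

Usage mirrors the documented form: given `Base* pBase` actually pointing at a `Child`, `polymorphic_downcast<Child*>(pBase)` yields a `Child*` with no runtime cost in release builds.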
auto armnn::PolymorphicPointerDowncast(const SourceType& value)
Polymorphic downcast for shared pointers and built-in pointers.
Usage: auto pChild = PolymorphicPointerDowncast<Child>(pBase)
DestType: Type of the target object (Child type)
SourceType: Pointer type to the source object (Base (shared) pointer type)
value: Pointer to the source object
Definition at line 94 of file PolymorphicDowncast.hpp.
References ARMNN_POLYMORPHIC_CAST_CHECK.
void Pooling2d(Decoder<float>& rInputDecoder, Encoder<float>& rOutputEncoder, const TensorInfo& inputInfo, const TensorInfo& outputInfo, const Pooling2dDescriptor& params)
Computes the Pooling2d operation.
Definition at line 142 of file Pooling2d.cpp.
References Decoder< IType >::DecodeTensor(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetDataLayout(), DataLayoutIndexed::GetHeightIndex(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), Pooling2dDescriptor::m_DataLayout, Pooling2dDescriptor::m_PadBottom, Pooling2dDescriptor::m_PaddingMethod, Pooling2dDescriptor::m_PadLeft, Pooling2dDescriptor::m_PadRight, Pooling2dDescriptor::m_PadTop, Pooling2dDescriptor::m_PoolHeight, Pooling2dDescriptor::m_PoolType, Pooling2dDescriptor::m_PoolWidth, Pooling2dDescriptor::m_StrideX, Pooling2dDescriptor::m_StrideY, NHWC, numeric_cast(), Pooling2d(), and Encoder< IType >::Set().
Referenced by Pooling2d(), and Pooling2dLayer::Pooling2dLayer().
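The core loop structure of a 2D pooling operation can be sketched for the simplest case. `MaxPool2d` is a hypothetical helper covering only max pooling over a single HxW channel with valid padding; the real workload also supports average and L2 pooling, explicit padding, and NCHW/NHWC layouts via the descriptor fields listed above.

```cpp
#include <algorithm>
#include <vector>

// Max pooling over a single-channel height x width tensor in row-major
// order, valid padding only: each output element is the maximum of a
// poolH x poolW window advanced by (strideY, strideX).
std::vector<float> MaxPool2d(const std::vector<float>& input,
                             unsigned int height, unsigned int width,
                             unsigned int poolH, unsigned int poolW,
                             unsigned int strideY, unsigned int strideX)
{
    const unsigned int outH = (height - poolH) / strideY + 1;
    const unsigned int outW = (width - poolW) / strideX + 1;
    std::vector<float> output(outH * outW);
    for (unsigned int oy = 0; oy < outH; ++oy)
    {
        for (unsigned int ox = 0; ox < outW; ++ox)
        {
            float best = input[oy * strideY * width + ox * strideX];
            for (unsigned int ky = 0; ky < poolH; ++ky)
            {
                for (unsigned int kx = 0; kx < poolW; ++kx)
                {
                    best = std::max(best,
                                    input[(oy * strideY + ky) * width + (ox * strideX + kx)]);
                }
            }
            output[oy * outW + ox] = best;
        }
    }
    return output;
}
```

A 4x4 input with a 2x2 window and stride 2 yields a 2x2 output holding the maximum of each quadrant.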
void PreluImpl(const PreluQueueDescriptor& data, Decoder<float>& inputData, Decoder<float>& alphaData, Encoder<float>& outputData)
Definition at line 13 of file PreluImpl.cpp.
References TensorInfo::GetShape(), GetTensorInfo(), QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, and BroadcastLoop::Unroll().
Referenced by RefPreluWorkload::Execute().
void armnn::PreserveTypeTestImpl(const DataType& dataType)
Definition at line 2065 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetworkQuantizer::Create(), INetwork::Create(), Float16, Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, QAsymmU8, IOutputSlot::SetTensorInfo(), and VisitLayersTopologically().
Referenced by BOOST_AUTO_TEST_CASE().
inline
Definition at line 108 of file RefWorkloadUtils.hpp.
References TensorInfo::GetNumElements(), TensorInfo::GetQuantizationOffset(), and TensorInfo::GetQuantizationScale().
template int32_t Quantize<int32_t>(float value, float scale, int32_t offset)
Quantize a floating point data type into an 8-bit data type.
Explicit specialization of Quantize for int32_t.
Explicit specialization of Quantize for int16_t.
Explicit specialization of Quantize for uint8_t.
Explicit specialization of Quantize for int8_t.
value: The value to quantize.
scale: The scale (must be non-zero).
offset: The offset.
Definition at line 30 of file TypesUtils.cpp.
References ARMNN_ASSERT.
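All of the listed specializations apply the same affine quantization rule: divide by the scale, round, add the offset, and clamp to the target type's range. A standalone sketch (`QuantizeSketch` is a hypothetical name, and round-half-away-from-zero rounding is an assumption here):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>

// Affine quantization: q = clamp(round(value / scale) + offset) to the
// representable range of the quantized target type.
template <typename QuantizedType>
QuantizedType QuantizeSketch(float value, float scale, std::int32_t offset)
{
    const std::int64_t q =
        static_cast<std::int64_t>(std::lround(value / scale)) + offset;
    const std::int64_t lo = std::numeric_limits<QuantizedType>::lowest();
    const std::int64_t hi = std::numeric_limits<QuantizedType>::max();
    return static_cast<QuantizedType>(std::min(hi, std::max(lo, q)));
}
```

Out-of-range results saturate rather than wrap, which is why each specialization needs the limits of its concrete integer type.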
void armnn::QuantizeConstant(const srcType* src, uint8_t* dst, size_t numElements, float& scale, int& offset)
Definition at line 23 of file NetworkQuantizerUtils.hpp.
References ARMNN_ASSERT, QAsymmU8QuantizationScheme::ComputeScheme(), and CreateQuantizedConst().
Referenced by CreateQuantizedConst().
void Reduce(const TensorInfo& inputInfo, const TensorInfo& outputInfo, Decoder<float>& input, Encoder<float>& output, const std::vector<uint32_t> axis, const ReduceOperation reduceOperation)
Definition at line 71 of file Reduce.cpp.
References ARMNN_ASSERT, Decoder< IType >::Get(), TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), Max, Mean, Min, NextIndex(), numeric_cast(), ReducedOutputOffset(), Encoder< IType >::Set(), and Sum.
unsigned int armnn::ReducedOutputOffset(const unsigned int numDims, const armnn::TensorShape& dims, std::vector<unsigned int>& index, const unsigned int numAxis, const std::vector<unsigned int>& axis)
constexpr const char* armnn::RefBackendId()
Definition at line 10 of file RefBackendId.hpp.
Referenced by RefBackend::GetIdStatic().
constexpr const char* armnn::RefTensorHandleFactoryId()
Definition at line 15 of file RefTensorHandleFactory.hpp.
Referenced by RefTensorHandleFactory::GetIdStatic().
ConstTensor armnn::ReorderWeightChannelsForAcl(const ConstTensor& weightHandle, DataLayout dataLayout, void* permuteBuffer)
Definition at line 63 of file WorkloadUtils.cpp.
References BaseTensor< MemoryType >::GetInfo(), TensorInfo::GetNumBytes(), BaseTensor< MemoryType >::GetShape(), NCHW, and NHWC.
void armnn::ReportError(const std::string& errorMessage, Optional<std::vector<std::string>&> errorMessages)
Definition at line 563 of file Network.cpp.
References ARMNN_LOG, and warning.
Referenced by AssignBackends(), CheckScaleSetOnQuantizedType(), Optimize(), and ReturnWithError().
inline
Definition at line 77 of file ArmComputeSubgraphUtils.hpp.
References OptimizationViews::AddUntouchedSubgraph(), CreateInputsFrom(), and CreateOutputsFrom().
Referenced by NeonBackend::OptimizeSubgraphView(), and ClBackend::OptimizeSubgraphView().
void armnn::ReportWarning(const std::string& warningMessage, Optional<std::vector<std::string>&> warningMessages)
Definition at line 575 of file Network.cpp.
References ARMNN_LOG, and warning.
Referenced by ApplyBackendOptimizations(), and AttemptBackendAssignment().
bool armnn::RequiresCopy(ITensorHandleFactory::FactoryId src, ITensorHandleFactory::FactoryId dst, TensorHandleFactoryRegistry& registry)
Definition at line 1127 of file Network.cpp.
References ITensorHandleFactory::GetExportFlags(), TensorHandleFactoryRegistry::GetFactory(), and ITensorHandleFactory::GetImportFlags().
Referenced by CalculateSlotOption().
void ReshapeWeightsForAcl(TensorInfo& weightInfo, DataLayout dataLayout)
Definition at line 37 of file WorkloadUtils.cpp.
References TensorInfo::GetShape(), NCHW, NHWC, and TensorInfo::SetShape().
Referenced by ConvertWeightTensorFromArmnnToAcl(), ConvertWeightTensorInfoFromArmnnToAcl(), and GatherTensorHandlePairs().
void Resize(Decoder<float>& in, const TensorInfo& inputInfo, Encoder<float>& out, const TensorInfo& outputInfo, DataLayoutIndexed dataLayout, armnn::ResizeMethod resizeMethod, bool alignCorners, bool halfPixelCenters)
Definition at line 65 of file Resize.cpp.
References ARMNN_ASSERT, Bilinear, Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), NearestNeighbor, Resize(), and Encoder< IType >::Set().
Referenced by InferenceTestImage::GetSizeInBytes(), Resize(), and ResizeLayer::ResizeLayer().
OptimizationResult armnn::ReturnWithError(OptimizationResult res, const Layer* layer, const BackendSettings& backendSettings, Optional<std::vector<std::string>&> errMessages)
Definition at line 587 of file Network.cpp.
References GetLayerTypeAsCString(), Layer::GetType(), OptimizationResult::m_Error, BackendSettings::m_PreferredBackends, and ReportError().
Referenced by AssignBackends(), and AttemptBackendAssignment().
inline
Definition at line 134 of file ClWorkloadUtils.hpp.
References Error, error, and WrapClError().
Referenced by ClFillWorkload::Execute(), ClPadWorkload::Execute(), ClConvertFp16ToFp32Workload::Execute(), ClConvertFp32ToFp16Workload::Execute(), ClSubtractionWorkload::Execute(), ClAdditionWorkload::Execute(), ClQuantizeWorkload::Execute(), ClActivationWorkload::Execute(), ClRsqrtWorkload::Execute(), ClLstmFloatWorkload::Execute(), ClNegWorkload::Execute(), ClAbsWorkload::Execute(), ClExpWorkload::Execute(), ClPreluWorkload::Execute(), ClFloorFloatWorkload::Execute(), ClReshapeWorkload::Execute(), ClResizeWorkload::Execute(), ClGatherWorkload::Execute(), ClInstanceNormalizationWorkload::Execute(), ClSpaceToDepthWorkload::Execute(), ClMaximumWorkload::Execute(), ClMinimumWorkload::Execute(), ClBatchToSpaceNdWorkload::Execute(), ClNormalizationFloatWorkload::Execute(), ClArgMinMaxWorkload::Execute(), ClSliceWorkload::Execute(), ClL2NormalizationFloatWorkload::Execute(), ClComparisonWorkload::Execute(), ClDepthToSpaceWorkload::Execute(), ClStridedSliceWorkload::Execute(), ClSpaceToBatchNdWorkload::Execute(), ClDivisionFloatWorkload::Execute(), ClQuantizedLstmWorkload::Execute(), ClMultiplicationWorkload::Execute(), ClPooling2dWorkload::Execute(), ClLogSoftmaxWorkload::Execute(), ClSoftmaxWorkload::Execute(), ClBatchNormalizationFloatWorkload::Execute(), ClDepthwiseConvolutionWorkload::Execute(), ClFullyConnectedWorkload::Execute(), ClTransposeWorkload::Execute(), ClTransposeConvolution2dWorkload::Execute(), ClPermuteWorkload::Execute(), and ClConvolution2dWorkload::Execute().
void RuntimeLoadedNetworksReserve(armnn::RuntimeImpl* runtime)
Definition at line 30 of file RuntimeTests.cpp.
References BOOST_AUTO_TEST_SUITE().
Referenced by BOOST_AUTO_TEST_CASE().
OptimizationResult SelectTensorHandleStrategy(Graph& optGraph, BackendsMap& backends, TensorHandleFactoryRegistry& registry, bool importEnabled, Optional<std::vector<std::string>&> errMessages)
Definition at line 1434 of file Network.cpp.
References ARMNN_ASSERT, CalculateEdgeStrategy(), CalculateSlotOption(), CalculateSlotOptionForInput(), CalculateSlotOptionForOutput(), Graph::ForEachLayer(), Layer::GetBackendId(), OutputSlot::GetConnections(), Layer::GetNumOutputSlots(), Layer::GetOutputSlot(), Layer::GetType(), Input, ITensorHandleFactory::LegacyFactoryId, OptimizationResult::m_Error, Output, OutputSlot::SetEdgeStrategy(), OutputSlot::SetTensorHandleFactory(), and Undefined.
Referenced by BOOST_AUTO_TEST_CASE(), and Optimize().
void SetAllLoggingSinks(bool standardOut, bool debugOut, bool coloured)
Definition at line 142 of file Logging.cpp.
Referenced by SimpleLogger< Level >::AddSink(), BOOST_AUTO_TEST_CASE(), ConfigureLogging(), and main().
inline
Definition at line 66 of file ClWorkloadUtils.hpp.
Referenced by ClSliceWorkload::ClSliceWorkload().
inline
Definition at line 45 of file ClWorkloadUtils.hpp.
Referenced by ClStridedSliceWorkload::ClStridedSliceWorkload().
void SetLogFilter(LogSeverity level)
Definition at line 24 of file Logging.cpp.
References ARMNN_ASSERT, ARMNN_FALLTHROUGH, Debug, SimpleLogger< Level >::Enable(), Error, Fatal, SimpleLogger< Level >::Get(), IgnoreUnused(), Info, Trace, and Warning.
Referenced by SimpleLogger< Level >::AddSink(), BOOST_AUTO_TEST_CASE(), ConfigureLogging(), and main().
inline
Definition at line 118 of file Logging.cpp.
References SimpleLogger< Level >::AddSink(), SimpleLogger< Level >::Get(), and SimpleLogger< Level >::RemoveAllSinks().
inline
Definition at line 92 of file NeonWorkloadUtils.hpp.
References GetOutputTensorData(), and ITensorHandle::Map().
Referenced by NeonSliceWorkload::NeonSliceWorkload().
inline
Definition at line 70 of file NeonWorkloadUtils.hpp.
Referenced by NeonStridedSliceWorkload::NeonStridedSliceWorkload().
std::vector<uint8_t> armnn::SetupQuantize(float value)
Definition at line 1970 of file QuantizerTest.cpp.
References Float32, and TensorInfo::SetQuantizationScale().
Referenced by BOOST_AUTO_TEST_CASE().
void armnn::SetValueChecked(Optional<T&> optionalRef, V&& val)
Definition at line 17 of file LayerSupportCommon.hpp.
References OptionalReferenceSwitch< std::is_reference< T >::value, T >::value().
Referenced by FalseFuncF16(), FalseFuncF32(), FalseFuncI32(), FalseFuncU8(), FalseInputFuncF16(), FalseInputFuncF32(), FalseOutputFuncF16(), FalseOutputFuncF32(), ClLayerSupport::IsConcatSupported(), NeonLayerSupport::IsConcatSupported(), ClLayerSupport::IsSplitterSupported(), and NeonLayerSupport::IsSplitterSupported().
void Slice(const TensorInfo& inputInfo, const SliceDescriptor& descriptor, const void* inputData, void* outputData, unsigned int dataTypeSize)
Definition at line 14 of file Slice.cpp.
References ARMNN_ASSERT, TensorShape::GetNumDimensions(), TensorInfo::GetShape(), IgnoreUnused(), SliceDescriptor::m_Begin, and SliceDescriptor::m_Size.
void Softmax(Decoder<float>& in, Encoder<float>& out, const TensorInfo& inputTensorInfo, float beta, int axis)
Computes the softmax function on some inputs, into outputs, with a shape given by tensorInfo.
Definition at line 17 of file Softmax.cpp.
References ARMNN_ASSERT_MSG, Decoder< IType >::Get(), TensorShape::GetNumDimensions(), TensorInfo::GetNumDimensions(), armnnUtils::GetNumElementsBetween(), TensorInfo::GetShape(), and Encoder< IType >::Set().
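The computation can be sketched for a flat vector. `SoftmaxSketch` is a hypothetical helper: it applies the scaling factor `beta` and the usual max-subtraction trick for numerical stability, while the axis handling and Decoder/Encoder plumbing of the real function are omitted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Numerically stable softmax over one vector:
// out[i] = exp(beta * (in[i] - max)) / sum_j exp(beta * (in[j] - max)).
std::vector<float> SoftmaxSketch(const std::vector<float>& in, float beta)
{
    float maxVal = in[0];
    for (float v : in) { maxVal = std::max(maxVal, v); }

    std::vector<float> out(in.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < in.size(); ++i)
    {
        out[i] = std::exp((in[i] - maxVal) * beta);  // shifted for stability
        sum += out[i];
    }
    for (float& v : out) { v /= sum; }
    return out;
}
```

Equal inputs produce a uniform distribution, and the outputs always sum to one regardless of beta.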
void SpaceToBatchNd(const TensorInfo& inputInfo, const TensorInfo& outputInfo, const SpaceToBatchNdDescriptor& params, Decoder<float>& inputData, Encoder<float>& outputData)
Definition at line 34 of file SpaceToBatchNd.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), GetOffset(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), SpaceToBatchNdDescriptor::m_BlockShape, SpaceToBatchNdDescriptor::m_DataLayout, SpaceToBatchNdDescriptor::m_PadList, Encoder< IType >::Set(), and SpaceToBatchNd().
Referenced by SpaceToBatchNd(), and SpaceToBatchNdLayer::SpaceToBatchNdLayer().
void SpaceToDepth(const TensorInfo& inputInfo, const TensorInfo& outputInfo, const SpaceToDepthDescriptor& params, Decoder<float>& inputData, Encoder<float>& outputData)
Definition at line 36 of file SpaceToDepth.cpp.
References Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), GetOffset(), TensorInfo::GetShape(), DataLayoutIndexed::GetWidthIndex(), SpaceToDepthDescriptor::m_BlockSize, SpaceToDepthDescriptor::m_DataLayout, Encoder< IType >::Set(), and SpaceToDepth().
Referenced by SpaceToDepth(), and SpaceToDepthLayer::SpaceToDepthLayer().
void Split(const SplitterQueueDescriptor& data)
Definition at line 21 of file Splitter.cpp.
References ARMNN_ASSERT, Encoder< IType >::Get(), TensorInfo::GetNumDimensions(), TensorInfo::GetShape(), GetTensorInfo(), QueueDescriptor::m_Inputs, SplitterQueueDescriptor::ViewOrigin::m_Origin, QueueDescriptor::m_Outputs, SplitterQueueDescriptor::m_ViewOrigins, and MaxNumOfTensorDimensions.
Referenced by RefSplitterWorkload::Execute(), and Splitter().
void armnn::Splitter(const SplitterQueueDescriptor& data)
Definition at line 17 of file Splitter.hpp.
References ARMNN_ASSERT, TensorInfo::GetNumDimensions(), TensorInfo::GetNumElements(), TensorInfo::GetShape(), GetTensorInfo(), QueueDescriptor::m_Inputs, SplitterQueueDescriptor::ViewOrigin::m_Origin, QueueDescriptor::m_Outputs, SplitterQueueDescriptor::m_ViewOrigins, MaxNumOfTensorDimensions, and Split().
void Stack(const StackQueueDescriptor& data, std::vector<std::unique_ptr<Decoder<float>>>& inputs, Encoder<float>& output)
Definition at line 12 of file Stack.cpp.
References TensorInfo::GetNumDimensions(), TensorInfo::GetShape(), GetTensorInfo(), StackDescriptor::m_Axis, QueueDescriptor::m_Inputs, QueueDescriptor::m_Outputs, QueueDescriptorWithParameters< LayerDescriptor >::m_Parameters, and Encoder< IType >::Set().
constexpr bool armnn::StrEqual(const char* strA, const char (&strB)[N])
Definition at line 148 of file TypesUtils.hpp.
Referenced by ParseComputeDevice().
void StridedSlice(const TensorInfo& inputInfo, const StridedSliceDescriptor& params, const void* inputData, void* outputData, unsigned int dataTypeSize)
Definition at line 90 of file StridedSlice.cpp.
References TensorInfo::GetShape(), and numeric_cast().
inline |
Definition at line 36 of file Logging.hpp.
References Debug, Error, Fatal, Info, Trace, and Warning.
Referenced by DelegateOptions::SetLoggingSeverity().
void armnn::swap(OriginsDescriptor& first, OriginsDescriptor& second)
Definition at line 350 of file Descriptors.cpp.
References ViewsDescriptor::swap, and swap().
Referenced by FullyConnectedFloat32Test(), FullyConnectedLargeTestCommon(), BackendId::operator=(), BufferManager::Reset(), SquashEqualSiblingsImpl< Comparable >::Run(), and BackendRegistry::Swap().
void armnn::swap(ViewsDescriptor& first, ViewsDescriptor& second)
Definition at line 359 of file Descriptors.cpp.
References ViewsDescriptor::swap.
Referenced by swap().
void armnn::TestNetwork(INetwork* network, const TensorShape inShape, const TensorShape outShape)
Definition at line 540 of file QuantizerTest.cpp.
References INetworkQuantizer::Create(), QAsymmS8, QAsymmU8, QSymmS16, QSymmS8, and VisitLayersTopologically().
Referenced by BOOST_AUTO_TEST_CASE(), TestNetwork(), TestQuantizeConvolution2d(), TestQuantizeDepthwiseConvolution2d(), TestQuantizeTransposeConvolution2d(), and ValidateFullyConnectedLayer().
void armnn::TestNetwork(INetwork* network, const TensorShape shape)
Definition at line 563 of file QuantizerTest.cpp.
References TestNetwork().
void armnn::TestQuantizeConvolution2d(bool useBiases)
Definition at line 1073 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, Convolution2dDescriptor::m_BiasEnabled, IOutputSlot::SetTensorInfo(), and TestNetwork().
Referenced by BOOST_AUTO_TEST_CASE().
void armnn::TestQuantizeDepthwiseConvolution2d(bool useBiases)
Definition at line 1120 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, DepthwiseConvolution2dDescriptor::m_BiasEnabled, IOutputSlot::SetTensorInfo(), and TestNetwork().
Referenced by BOOST_AUTO_TEST_CASE().
void armnn::TestQuantizeTransposeConvolution2d(bool useBiases)
Definition at line 1819 of file QuantizerTest.cpp.
References IOutputSlot::Connect(), INetwork::Create(), Float32, IConnectableLayer::GetInputSlot(), IConnectableLayer::GetOutputSlot(), info, TransposeConvolution2dDescriptor::m_BiasEnabled, IOutputSlot::SetTensorInfo(), and TestNetwork().
Referenced by BOOST_AUTO_TEST_CASE().
void TopKSort(unsigned int k, unsigned int* indices, const float* values, unsigned int numElement)
Definition at line 24 of file DetectionPostProcess.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), DetectionPostProcess(), and NonMaxSuppression().
void TransposeConvolution2dImpl(const TransposeConvolution2dDescriptor& descriptor, const TensorShape& inputShape, Decoder<float>& inputDecoder, const TensorShape& outputShape, Encoder<float>& outputEncoder, const TensorShape& weightsShape, Decoder<float>& weightsDecoder, Decoder<float>* biasesDecoder)
Definition at line 15 of file TransposeConvolution2d.cpp.
References Decoder< IType >::DecodeTensor(), Decoder< IType >::Get(), DataLayoutIndexed::GetChannelsIndex(), DataLayoutIndexed::GetHeightIndex(), DataLayoutIndexed::GetIndex(), TensorShape::GetNumElements(), DataLayoutIndexed::GetWidthIndex(), TransposeConvolution2dDescriptor::m_BiasEnabled, TransposeConvolution2dDescriptor::m_DataLayout, TransposeConvolution2dDescriptor::m_PadLeft, TransposeConvolution2dDescriptor::m_PadTop, TransposeConvolution2dDescriptor::m_StrideX, TransposeConvolution2dDescriptor::m_StrideY, NHWC, Encoder< IType >::Set(), and BaseIterator::SetIndex().
Referenced by RefTransposeConvolution2dWorkload::Execute().
bool armnn::TrueFunc(Optional<std::string&> reasonIfUnsupported, Params&&... params)
void armnn::ValidateFullyConnectedLayer(const bool biasEnabled)
Definition at line 1032 of file QuantizerTest.cpp.
References CreateNetworkWithFullyConnectedLayer(), and TestNetwork().
Referenced by BOOST_AUTO_TEST_CASE().
inline |
Definition at line 157 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline |
Definition at line 162 of file ClContextSchema_generated.h.
References ClContextIdentifier().
inline |
Definition at line 309 of file TypesUtils.hpp.
References TensorInfo::GetDataType(), GetDataTypeName(), and TensorInfo::GetShape().
Referenced by ParserFlatbuffersFixture::CheckTensors(), ParserFlatbuffersSerializeFixture::RunTest(), and ParserFlatbuffersFixture::RunTest().
void armnn::VisitLayers(const LayerContainer& layerContainer, ILayerVisitor& visitor)
Definition at line 50 of file NetworkQuantizerUtils.hpp.
References ILayerVisitor::FinishVisit(), and ILayerVisitor::StartVisit().
Referenced by NetworkQuantizer::OverrideInputRange().
Definition at line 73 of file QuantizerTest.cpp.
References ApplyStrategyToLayers(), and INetwork::pNetworkImpl.
Referenced by BOOST_AUTO_TEST_CASE(), PreserveTypeTestImpl(), and TestNetwork().
inline |
Definition at line 126 of file ClWorkloadUtils.hpp.
References Exception::what().
Referenced by ClWorkloadFactory::AfterWorkloadsCreated(), and RunClFunction().
constexpr bool g_AggregateProfilingEventsByInference = true
Definition at line 38 of file Profiling.cpp.
const float g_AsymmS8QuantizationBase = 255.0f
Definition at line 31 of file QuantizerTest.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), and GetInputTensorInfo().
const float g_AsymmU8QuantizationBase = 255.0f
Definition at line 29 of file QuantizerTest.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), and GetInputTensorInfo().
constexpr std::size_t g_ProfilingEventCountHint = 1024
Definition at line 30 of file Profiling.cpp.
const float g_SymmS16QuantizationBase = 32767.0f
Definition at line 33 of file QuantizerTest.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), and GetInputTensorInfo().
const float g_SymmS8QuantizationBase = 127.0f
Definition at line 32 of file QuantizerTest.cpp.
Referenced by BOOST_AUTO_TEST_CASE(), and GetInputTensorInfo().
const float g_TestTolerance = 0.000001f
Definition at line 34 of file QuantizerTest.cpp.
Referenced by GetInputTensorInfo().
constexpr bool g_WriteProfilingEventSequence = true
Definition at line 33 of file Profiling.cpp.
constexpr bool g_WriteReportToStdOutOnProfilerDestruction = false
Definition at line 42 of file Profiling.cpp.
constexpr unsigned int LOWEST_CAPTURE_PERIOD = 10000u
The lowest performance data capture interval we support is 10 milliseconds.
Definition at line 21 of file Types.hpp.
Referenced by BOOST_AUTO_TEST_CASE(), and PeriodicCounterSelectionCommandHandler::operator()().
constexpr unsigned int MaxNumOfTensorDimensions = 5U
Definition at line 18 of file Types.hpp.
Referenced by BOOST_FIXTURE_TEST_CASE(), armnnTfLiteParser::ComputeWrappedIndex(), Concatenate(), CopyTensorContentsGeneric(), TensorShape::IsAtLeastOneDimensionSpecified(), TfLiteParserImpl::OutputShapeOfReshape(), PermutationVector::PermutationVector(), armnnUtils::Permuted(), Split(), Splitter(), armnnDeserializer::ToTensorInfo(), and armnnUtils::TransposeTensorShape().
const std::set<armnn::LayerType> paddingRequiredLayers
Definition at line 16 of file NeonTensorHandleFactory.hpp.
Referenced by NeonTensorHandleFactory::GetCapabilities().
thread_local IProfiler* tl_Profiler = nullptr
Definition at line 485 of file Profiling.cpp.
Referenced by ProfilerManager::GetProfiler().