|
All existing constraints have now been refactored using the new
framework.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: Ic9ba0d7040cb9f114b959a949bfdf777f86752c7
|
|
Added a supported_operators check for Relu activation functions. If the
scaling value overflows to infinity, the operator will be placed on the CPU.
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: I66b7bec062599609aadcbb7531caebbc45a7451f
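A minimal sketch of the kind of check described above (the helper name and signature are hypothetical, not Vela's actual API): the rescale factor between input and output quantisation is computed and rejected if it is not finite.

```python
import math

def relu_scale_is_valid(ifm_scale: float, ofm_scale: float) -> bool:
    # Hypothetical helper: the rescale factor applied to a Relu output.
    # If the quotient overflows to infinity (or the OFM scale is zero),
    # the operator cannot run on the NPU and falls back to the CPU.
    if ofm_scale == 0.0:
        return False
    return math.isfinite(ifm_scale / ofm_scale)
```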
|
|
- Fixed and documented both tensor and quant params scaling checks
- Added quant params validity check and tensor quantisation check
- Added valid tensor checks to some graph optimisation functions
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I8d6e8f03a603d28886dde511672c8399c85b794c
|
|
Using a new system to report constraints, replaced existing
functionality for checking conv-like ops.
This new system will allow reporting of all constraints regardless of
any input network.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: If81177deca2a3b57c9dd9a3a08868cbc9cef0c23
|
|
Suppress info print that Const/Placeholder/SubgraphInput are not supported
on the NPU.
Change-Id: I6f323b64185b01b619b584c1473ae61d010ab3a4
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
|
|
This reverts commit 04986c0016e59993563490fe67052371fc0e1ad2.
Reason for revert: Merged by mistake
Change-Id: I150ad9ba7074ad1e80f21180aeba56a454d9f748
|
|
Suppress info print that Const/Placeholder/SubgraphInput are not supported
on the NPU.
Change-Id: I689d25481df0cd10487484c9f639e4253df081ee
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
|
|
Keeping the constraint functions consistent with each other.
Added specific tensor names in the extra info.
Added the operator name to the warning generated.
This should help easily identify specific problematic nodes in a graph
and give a good enough explanation as to why they are placed on the CPU.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: Ie5bbdd31e5e75fe37e3d8bb8fee1d260080bce83
|
|
Added info print for unsupported operator
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I1002d1c2249661bff17ef86d9500d1aeb2a1e38e
|
|
Vela supports batching of FC, so the restriction has been removed.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Ica56738f1b2676628644fc44f2039a24807f5ccb
|
|
This commit amends some parts of the
restriction functions in order to make sure
operators are correctly placed.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I336cf33a874c9078a5bbf81ce129ff917dbc5e9a
|
|
- op.type is now an enum instead of a string
- Removed unused operator codes
- Refactored some attributes like npu_block_type, fused_activation_function
- Refactored operator index calculation
- Refactored a number of operator sets
Change-Id: I641f65ee375794b7aec42abc0664251ae37d78e8
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
|
|
Fixed issue when checking the axis in concat; axis 0 is now allowed.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I85a5fc3dacdfc66dc01b0e05048dd100254fddff
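A minimal sketch of an axis check that permits 0 (the function name is hypothetical, not Vela's actual API): negative, Python-style axes are normalised, and only out-of-range values are rejected.

```python
def concat_axis_is_valid(axis: int, rank: int) -> bool:
    # Hypothetical check: normalise a negative (Python-style) axis,
    # then accept any in-range value, including 0, which was
    # previously rejected.
    if axis < 0:
        axis += rank
    return 0 <= axis < rank
```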
|
|
A new mechanism to report generic restrictions/constraints for
operators has been implemented.
Each check is its own defined function, and has a general reason for
the constraint defined as its docstring.
This allows us to query all reasons up front and report this without
having to run through real data to trigger the checks.
This is part of a larger refactoring and the specific restrictions will
be replaced by a similar mechanism.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: Id3fb2639f91cfac5fc5b8c14f7620de1a85972b2
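The mechanism described above can be sketched roughly as follows (class and function names are illustrative, not Vela's actual code): each constraint is a function whose docstring states the reason, so the full list of reasons can be produced without running any network data through the checks.

```python
class Tensor:
    def __init__(self, shape):
        self.shape = shape

class Op:
    def __init__(self, inputs, outputs):
        self.inputs = inputs
        self.outputs = outputs

def constraint_tens_defined_shape(op):
    "Input and output tensors must have a defined shape"
    # The check itself runs on real data...
    return all(t.shape for t in op.inputs + op.outputs)

GENERIC_CONSTRAINTS = [constraint_tens_defined_shape]

def all_constraint_reasons():
    # ...but the reasons can be queried up front, straight from
    # the docstrings, without any network at all.
    return [c.__doc__ for c in GENERIC_CONSTRAINTS]
```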
|
|
Part of larger refactoring. The sets of operators do not need to be
instance attributes and are not expected to be modified at runtime.
This in turn allows almost all functions to become class methods.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: I7dc24d65cdd6c4bda641b3d6133b3134302a552f
|
|
Min and Max operations were not passed through
the supported-operator check for elementwise ops.
Changed so that they are passed through this check as well.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I358a121de33882802415d97d9ed5dbee53233f77
|
|
Fixed crash in networks with 5D tensors.
Fixed crash for (int32) tensors without quantization.
Added validity checks for concatenation.
Moved unfusing of activation function from tflite_reader to graph_optimiser.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: Ib9ba8891dc95ef5491e15d0feedef44331a26393
|
|
Added check for inf numbers for all scales.
Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>
Change-Id: I84fcae429be4869d8489f66bef26863c254104cd
|
|
Updated supported operator checks for StridedSlice:
- allow negative indices in begin/end values
- added more checks on shapes
Change-Id: I3ac76bfa6b313f0e2250f0749f152fb0e3aa033c
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
|
|
Change-Id: I750ec63a0e37b38feaf4cbdcc883fdbef92bccdf
Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>
|
|
- Added check for non-constant weights in supported operators
- Added check for ifm & ifm2 shapes
- Handle None tensors for CPU operators
- Handle missing attributes for Cast operator
Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>
Change-Id: I2f16d3d44d0c6da5237550b39273cdb9cc3c7607
|
|
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: I3c3ed73a6db39615ddf5987dc5696b6b09682be0
|
|
Added batching to softmax by reshaping the input.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I0b516f9bf2410fb86372b229beba4a7280c498cc
|
|
For SplitV, size_splits can contain one -1, indicating that the
dimension is to be inferred.
Added support to handle this.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Ib9fc8dd2ee1749e81a978d85f2d4a016698bb441
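A minimal sketch of inferring the -1 entry (the helper name is hypothetical, not Vela's actual code): the placeholder is replaced by whatever remains of the total size along the split axis.

```python
def resolve_size_splits(size_splits, total):
    # Hypothetical helper: at most one entry may be -1; it is replaced
    # by the remainder of the total size along the split axis.
    sizes = list(size_splits)
    if sizes.count(-1) > 1:
        raise ValueError("only one dimension may be inferred")
    if -1 in sizes:
        i = sizes.index(-1)
        # sum(sizes) includes the -1 placeholder; +1 cancels it out
        sizes[i] = total - (sum(sizes) + 1)
    return sizes
```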
|
|
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: I04618fd0d29075e7d3f8f27a320129603f045163
|
|
- Fixed bias check to use quantised values.
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I6d87439938b9b5aeec87814e5a30d59fd06d5748
|
|
Allows the int64 data type to be used as long as all values can be packed
into an int40 value.
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: I0e25ec482e3ea765a5fd00bcf7e212a9e65a1461
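The packing condition amounts to a range check against the two's-complement int40 range; a minimal sketch (hypothetical helper name):

```python
def fits_in_int40(values) -> bool:
    # Two's-complement int40 range: [-2**39, 2**39 - 1].
    # int64 data is acceptable only if every value lies inside it.
    lo, hi = -(1 << 39), (1 << 39) - 1
    return all(lo <= v <= hi for v in values)
```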
|
|
Added checks for not using NHCWB16 for reduce_sum int32 which makes
int8/uint8 softmax work.
Also enabled softmax graph rewrite by default and fixed a saturation
problem.
Change-Id: Ic01bd9ece7e5c3edb2900b7915cc747efe9e5760
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
Updated the kernel size check, in which width and height were swapped,
and added a weight sum check.
Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>
Change-Id: Idb18cf258ac19b3a0d71134dab5a117bcd778b59
|
|
This commit fixes a bug wherein Split operators
were being erroneously placed on the CPU due to
a 0-dimensional input that disqualified them from
NPU placement, a restriction introduced in a
recent commit.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I83c047ddf071d662343087c69bdb2a014dd209c3
|
|
Signed-off-by: Charles Xu <charles.xu@arm.com>
Change-Id: Ida307afc33cd7963bdeb505df400732a3efcc846
|
|
- Fixed bug with the supported operator check rejecting operators based
upon an incorrect comparison of the tensor quantisations
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Ibd0eb50077465d2c515c6ee10394d9b43cdf730c
|
|
Includes a number of changes:
* Handle non-existing optional inputs
* Handle disabled optional inputs (-1 indexed)
* Added unit tests for parsing operators
* Add bias tensor to the different Convolutions + FullyConnected if
it's missing.
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: Ib88d2b610314b1c886fc0aef4f9da87430ce6ae5
|
|
Implemented LUT generation for softmax uint8/int8 to match the
reference.
Change-Id: Ib9acaa295ee1066591e800023d75f364520b44c1
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
Vela often fails when encountering operators that have
inputs or outputs with shape == []. This is only supported
for elementwise ops, where the shape is broadcast from
IFM2 to IFM1.
This commit adds a restriction which places ops with
shape [] tensors on the CPU except in the special case
of broadcasting for elemwise ops.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I5b0855233e3b83870209f4da00fb2dbd0184fee0
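A minimal sketch of the restriction (function name and the elementwise-op set are illustrative, not Vela's actual code): shape-[] tensors are rejected unless the op is elementwise and only its second input (IFM2) is the scalar being broadcast.

```python
def scalar_shapes_are_supported(op_type, input_shapes, output_shapes):
    # Illustrative subset of elementwise ops that support broadcast
    ELEMWISE = {"Add", "Sub", "Mul"}
    shapes = list(input_shapes) + list(output_shapes)
    if all(s != [] for s in shapes):
        return True  # no scalar tensors at all: nothing to restrict
    if op_type in ELEMWISE and len(input_shapes) == 2:
        ifm1, ifm2 = input_shapes
        # Only the special case of a scalar IFM2 broadcast onto IFM1
        return ifm2 == [] and ifm1 != [] and all(s != [] for s in output_shapes)
    return False  # everything else goes to the CPU
```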
|
|
Added graph rewrite of Softmax for uint8/int8.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: Iecdd5d2cd3156a601b3313debba4a3562e6be5d7
|
|
A valid strided slice should have positive, non-zero elements
in "end - begin".
When encountering an invalid strided slice, Vela asserted.
This now checks that the slice is valid and won't claim support if it isn't.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: I33ef118bd6a31ac78c680acb5229ff31b0809d6a
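The validity condition above can be sketched as follows (hypothetical helper name): every element of "end - begin" must be strictly positive.

```python
def strided_slice_is_valid(begin, end):
    # A valid strided slice has end - begin > 0 in every dimension;
    # anything else is an empty or inverted slice and is unsupported.
    return all(e - b > 0 for b, e in zip(begin, end))
```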
|
|
Signed-off-by: Charles Xu <charles.xu@arm.com>
Change-Id: Ibd0cd152fbc46dea0c92fd1bf7da1ffc9803fdba
|
|
Added graph rewrite of Softmax for int16.
Change-Id: Id7885af6056a23e8b8362fb61ae94283251eb398
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
Signed-off-by: Charles Xu <charles.xu@arm.com>
Change-Id: I44428d77b2e8e44a477e5c4dfe28ab8dd1792838
|
|
This commit adds a quantization restriction check
for supported operators, so that operators with
different quantization between their IFM (1/2) and
OFM tensors, where this is not supported, are
correctly placed on the CPU.
The quantization between two tensors is compared
using a new equality function implemented for
the QuantizationParameters class.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I70ff36b4ab4955f328d6e6e699f00dbc43c0404a
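A minimal sketch of such an equality function (the real QuantizationParameters class carries more fields than shown here): two quantisations compare equal when their scale and zero point match.

```python
class QuantizationParameters:
    # Reduced sketch: the real class holds additional fields
    # (min/max, per-channel dimensions, etc.)
    def __init__(self, scale, zero_point):
        self.scale = scale
        self.zero_point = zero_point

    def __eq__(self, other):
        if not isinstance(other, QuantizationParameters):
            return NotImplemented
        return (self.scale == other.scale
                and self.zero_point == other.zero_point)
```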
|
|
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: Iaf4d7ab9c32b0d783072c5f131a61bfebe77cc16
|
|
- No functional change
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I5ab1198b9d092cd041fa9b85b2dee9900d299bfc
|
|
This commit places LeakyReLU operators with
a negative alpha value on the CPU and avoids
a crash during command stream generation.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: Iac68c5a9fdbf26facb709660965615b2b5b551f9
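A minimal sketch of the check (hypothetical helper name): LeakyReLU is only claimed as supported when its alpha attribute is non-negative.

```python
def leaky_relu_is_supported(alpha: float) -> bool:
    # A negative alpha crashed command stream generation,
    # so such operators are placed on the CPU instead.
    return alpha >= 0.0
```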
|
|
Change-Id: Ie6d8d6de9f3447f19ba06aafa9fa480fc96a973b
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
|
|
This commit fixes the failing assert by removing it
and instead placing unsupported ResizeBilinear
operators on the CPU.
It introduces a new graph optimisation function
which adds the necessary attributes as well as
new operator restrictions for ResizeBilinear.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I2feffd0b5a2169ebffbe4f165e450b3f2d140380
|
|
Updated supported operator checks according to latest requirements.
Change-Id: I79708d8039e464e39818d3c09e61f3f533e96f3d
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
Write the constant scalars into flash. If the memory area is DRAM
or OffChipFlash, DMA the scalars from flash to SRAM.
Signed-off-by: Charles Xu <charles.xu@arm.com>
Change-Id: I42300a05dfe968d623b8aec8549644549e0f54b5
|
|
The tensor is split into len(size_splits) pieces along the dimension
given by axis, with the sizes specified in the size_splits array.
Change-Id: I2ce98fa10e2e26f16cfd86a775aee94a308509ea
Signed-off-by: Charles Xu <charles.xu@arm.com>
|
|
Also updated README.md
Change-Id: I118309c61f4d00e8508d6b888c606995490fba39
Signed-off-by: Diego Russo <diego.russo@arm.com>
|