Age | Commit message | Author |
|
Added the SHAPE operator to the supported operators report.
Updated the constraints for the QUANTIZE and SHAPE operators.
Also fixed RESHAPE consuming a statically optimised shape.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I1d964d602d3f361a0f16dae8133197280dd84c48
|
|
*Quantise op becomes constant if its input is known at compile time
*Quantised values are calculated if the op's input is const and float
*Const int inputs to the quantise op are requantised
Change-Id: Ic94a72a392af709fe6a640d7dacbb5dc2334f16f
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
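A minimal sketch of the constant folding described above, assuming standard affine quantisation (q = round(x / scale) + zero_point); the function names are hypothetical, not Vela's actual API:

```python
import numpy as np

def quantise_const(values, scale, zero_point, dtype=np.int8):
    """Affine-quantise a constant float tensor: q = round(x / scale) + zp,
    clamped to the target integer range."""
    info = np.iinfo(dtype)
    q = np.round(values / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def requantise_const(q_in, in_scale, in_zp, out_scale, out_zp, dtype=np.int8):
    """Requantise already-integer constant values: dequantise with the input
    parameters, then quantise again with the output parameters."""
    real = (q_in.astype(np.float64) - in_zp) * in_scale
    return quantise_const(real, out_scale, out_zp, dtype)
```

With both inputs constant, the QUANTIZE op's output can be computed at compile time and the op replaced by a constant tensor.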
|
|
*Shape op's value is available at compile time, hence
it can be optimised
*Disconnected the shape op from its parent tensor at
compile time
*Transformed the shape op's output tensor into a constant
Change-Id: I0a024269e2b592c6146dd72e62d7a41951fb727a
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
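The folding above can be sketched as follows (a simplified illustration, not Vela's internal representation): when every dimension of the parent tensor is known, the SHAPE op's output is itself a constant int32 tensor.

```python
import numpy as np

def fold_shape_op(input_shape):
    """If the parent tensor's shape is fully known at compile time, return
    the SHAPE op's output as a constant int32 tensor; a None dimension
    marks a dynamic size, in which case the op cannot be folded."""
    if any(dim is None for dim in input_shape):
        return None
    return np.array(input_shape, dtype=np.int32)
```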
|
|
Removed the constraint on negative alpha values in ReLU
for int8 and uint8.
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
Change-Id: Id7a3a30bf5d1f0a591f990bd04cd0dbbad5819c6
|
|
*Added a generic function that checks whether the underlying shape of a
FullyConnected operation is 2D and performs shape reduction
*FullyConnected operations with >2 dimensions now run on the NPU if the
above case is satisfied
*Refactored constraint_fc_output_2d and rewrite_fully_connected_input
*Added a unit test to confirm this functionality
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
Change-Id: I0e29c767e5b84841eb53bbc44464b36a454f7b38
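The shape reduction above amounts to collapsing all leading (batch) dimensions into one, leaving a 2D [batch, in_features] input. A minimal sketch, with a hypothetical function name:

```python
from math import prod

def reduce_fc_shape_to_2d(shape, weight_in_features):
    """Collapse an N-D FullyConnected input to 2D [batch, in_features] when
    the trailing dimension matches the weights' input features; return None
    when reduction is not possible."""
    if len(shape) < 2 or shape[-1] != weight_in_features:
        return None
    return [prod(shape[:-1]), shape[-1]]
```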
|
|
Updated Black to version 22.3.0 due to updated dependencies.
Also fixed the issues reported by the new version.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: I60056aae452093ce8dcea1f499ecced22b25eef1
|
|
Fixed a crash caused by loading a network containing
operators with empty constant tensors.
This could occur when a branched network is split
before said branches have converged.
We now put the affected operator on the CPU.
Signed-off-by: erik.andersson@arm.com <erik.andersson@arm.com>
Change-Id: I63e9cd13cecf86d976c5750c727e218c334c32b5
|
|
- The failing tests contain operations with dynamic tensors which
are not supported and therefore they should be placed on the CPU.
However, a bug in the removal of RESHAPEs which contain a dynamic
shape prevented this happening.
- This change adds a check to make sure that RESHAPE ops with a
dynamic shape tensor are not removed and instead are placed on the
CPU.
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I2d7481f7f80f99a0f01df100d956933777e6875a
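The guard described above can be sketched as a simple placement decision (the op representation and names here are hypothetical, not Vela's actual data structures):

```python
def optimise_reshape(op):
    """op is a dict whose 'shape_input' entry holds the constant new-shape
    values, or None when the shape is a dynamic tensor only known at run
    time. Only a constant-shape RESHAPE may be removed from the graph."""
    if op.get("shape_input") is None:
        return "place_on_cpu"   # dynamic shape: must not be removed
    return "remove_from_graph"  # constant shape: safe to bypass
```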
|
|
This commit fixes a number of bugs where per-axis
quantization would make Vela crash and would not
be properly recognized.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I50a461d200274b43ec76f3a7357bf66db6d49964
|
|
Memory-only operators such as Reshape, Squeeze and ExpandDims are
removed in the graph optimiser step.
- Added semantic check that memory only operators have same
quantisation parameters on ifm/ofm.
- Added support for the ExpandDims operator.
- Addition and cleanup of related unit tests.
- Removed TOSA from the generated SUPPORTED_OPS.md documentation.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: If848d8afc58c18806e10997ed94e4dae83f30879
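The semantic check above exists because a memory-only op performs no arithmetic, so it can only be removed when no quantisation conversion is needed between its input and output. A minimal sketch, assuming a simple dict representation of quantisation parameters:

```python
def same_quantisation(ifm_quant, ofm_quant):
    """Return True when input and output feature maps share quantisation
    parameters, so a memory-only op (Reshape/Squeeze/ExpandDims) between
    them can be safely removed."""
    if ifm_quant is None or ofm_quant is None:
        return ifm_quant is ofm_quant
    return (ifm_quant["scale"] == ofm_quant["scale"]
            and ifm_quant["zero_point"] == ofm_quant["zero_point"])
```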
|
|
Refactored supported operators by breaking out model semantics
into its own class. Model semantics are now checked right after
the model is read.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: If442b189efcd91dda01af60b2b3adedfacdf2fad
|