Added the SHAPE operator to the supported operators report.
Updated the constraints for the QUANTIZE and SHAPE operators.
Also fixed RESHAPE consuming a statically optimised shape.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I1d964d602d3f361a0f16dae8133197280dd84c48
|
Fixed static optimisation of the Quantize operator by running
unsupported formats on the CPU. Also added support for int16 and
corrected the calculation.
Change-Id: I861c712aa6258dba53fcf4d5dae45d1d416e6141
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
- Quantise op becomes constant if its input is known at compile time
- Quantised values are calculated if the op's input is const and float
- Const inputs to the quant op that are int are requantized (see the
  sketch below)
Change-Id: Ic94a72a392af709fe6a640d7dacbb5dc2334f16f
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
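A minimal sketch of the constant folding described above (helper name
hypothetical), assuming per-tensor scale and zero point:

    import numpy as np

    def quantize_const(values, scale, zero_point, dtype=np.int8):
        # Fold a known float input through a Quantize op at compile time.
        info = np.iinfo(dtype)
        q = np.round(values / scale) + zero_point
        return np.clip(q, info.min, info.max).astype(dtype)

    print(quantize_const(np.array([0.0, 0.5, 1.0]), 0.5, -128))
    # -> [-128 -127 -126]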
|
- The Shape op's value is available at compile time, hence it can be
  optimised
- Disconnected the Shape op from its parent tensor at compile time
- Transformed the Shape op's tensor into a constant (see the sketch
  below)
Change-Id: I0a024269e2b592c6146dd72e62d7a41951fb727a
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
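A minimal sketch of the folding (helper name hypothetical): the op's
output becomes a constant holding the parent tensor's shape, so the
edge to the parent tensor can be dropped:

    import numpy as np

    def fold_shape_op(ifm_shape):
        # The SHAPE result is fully known at compile time.
        return np.array(ifm_shape, dtype=np.int32)

    print(fold_shape_op((1, 224, 224, 3)))  # -> [  1 224 224   3]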
|
- Added a generic function which checks if the underlying shape of a
  FullyConnected operation is 2D, and performs shape reduction
- FullyConnected operations with more than two dimensions now run on
  the NPU if the above case is satisfied (see the sketch below)
- Refactored constraint_fc_output_2d and rewrite_fully_connected_input
- Added a unit test to confirm this functionality
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
Change-Id: I0e29c767e5b84841eb53bbc44464b36a454f7b38
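A sketch of the shape reduction (helper name hypothetical): all
leading dimensions fold into the batch dimension, leaving a 2D
(batch, features) shape:

    from functools import reduce
    from operator import mul

    def reduce_to_2d(shape):
        # Collapse an n-D FullyConnected input shape to 2D.
        return (reduce(mul, shape[:-1], 1), shape[-1])

    print(reduce_to_2d((1, 8, 4, 16)))  # -> (32, 16)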
|
- This is due to calling range() on a non-integer value, which in turn
  is due to a change in the behaviour of round() on numpy.float64 values
- The fix is to always force the output of round() to be an integer,
  thereby stopping whole-number floating point values from propagating
  into the kernel dimensions that later feed into range() (see the
  sketch below)
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Ic75cb6ba85a90c81c1d762067d89a10caaa13b92
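To illustrate with hypothetical values: round() on a numpy.float64 can
return a float-like value that range() rejects (behaviour varies by
NumPy version), and forcing an int is safe in all versions:

    import numpy as np

    kernel_h = np.float64(3.0)

    # range(round(kernel_h)) can raise TypeError when round() yields a
    # numpy.float64 rather than a Python int.
    for y in range(int(round(kernel_h))):
        print(y)  # 0, 1, 2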
|
Updated the version of Black to 22.3.0 due to updated dependencies.
Fixed the issues reported by the new version.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: I60056aae452093ce8dcea1f499ecced22b25eef1
|
- Fixed a bug due to ResizeBilinear modifying the attributes of a
shared IFM
- The ifm_resampling_mode is now an attribute of an operator rather
than a tensor
- Changed all calls to try_block_config() to use the attribute rather
than recalculating it in multiple places
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I4641e9cd6b049bd4186776d98e3e751c5e5bcc06
|
Added check that horizontal padding is unaffected when applying
graph optimization "optimise_strided_conv".
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
Change-Id: I7032a44163e300cdf62cf615b4b10a1417e38eaa
|
- Extend IFM/OFM dimensions explicitly in the MEAN op. This fixes a
  bug when the IFM and OFM shapes have a different number of
  dimensions, e.g. IFM=1x19x18x25, axis=2, OFM=1x19x25: the ofm_shape
  should be 1x19x1x25, not 1x1x19x25 (see the sketch below)
- Fix a wrong weight shape
Change-Id: I269eb71ea56c09deee2aa6c6433d9b2baa98a113
Signed-off-by: Diqing Zhong <diqing.zhong@arm.com>
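A sketch of the explicit extension, using the shapes from the example
above (helper name hypothetical):

    def extend_ofm_shape(ifm_shape, axis):
        # Keep the reduced axis as size 1 so the OFM rank matches the IFM.
        ofm_shape = list(ifm_shape)
        ofm_shape[axis] = 1
        return ofm_shape

    print(extend_ofm_shape([1, 19, 18, 25], axis=2))  # -> [1, 19, 1, 25]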
|
- The bug is that TransposeConv does not support the explicit padding
  needed in order to combine it with a preceding Pad op
- The fix is to exclude such combinations
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Ide03d034dc32b5fc9bcaaf291ab713482223a042
|
The root cause of this diff is precision errors caused by rounding
several times when performing a resize bilinear upscaling to more than
twice the initial size. This is solved by rewriting the algorithm to
perform nearest neighbour upscaling to the correct size and then
applying one larger average pool instead of several 2x2 pools. Avgpool
with padding is limited to a kernel size of 8x8, which constrains the
largest possible bilinear upscaling to 8 times the input size (see the
sketch below).
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I846232f309ba26aab6c385e593cbe25b646c6668
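A rough single-channel float sketch of the rewritten structure (not
the exact NPU kernel): nearest-neighbour upscale by n, then one n x n
average pool, which is why the 8x8 kernel cap limits upscaling to 8x:

    import numpy as np

    def bilinear_ish_upscale(x, n):
        # One large pool means a single rounding step instead of the
        # chained 2x2 pools that accumulated precision errors.
        up = np.repeat(np.repeat(x, n, axis=0), n, axis=1)
        padded = np.pad(up, n // 2, mode="edge")
        out = np.empty(up.shape)
        for i in range(up.shape[0]):
            for j in range(up.shape[1]):
                out[i, j] = padded[i:i + n, j:j + n].mean()
        return out

    print(bilinear_ish_upscale(np.array([[0.0, 1.0], [2.0, 3.0]]), 4).shape)
    # -> (8, 8)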
|
- This bug causes a regression in the use of unpack and split operators
- The bug is due to the read_shapes attribute being an absolute calculation
for slice and strided_slice, but a relative one for unpack and split
- The fix is to consistently treat the attribute as a shape relative
  to the read_offset (see the sketch below)
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I4504b161be507ea22ca6ee40fbe7808bfe049405
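To illustrate the consistent convention (values hypothetical): an
absolute end coordinate, as slice/strided_slice previously stored,
converts to a shape relative to read_offset:

    read_offset = [0, 2, 0, 0]  # where the read window starts
    read_end = [1, 6, 4, 8]     # absolute end coordinate

    # read_shapes is now always relative to read_offset:
    read_shape = [e - o for e, o in zip(read_end, read_offset)]
    print(read_shape)  # -> [1, 4, 4, 8]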
|
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I87dc5963972a7ef91db467b2ff8e0261e9899372
|
Fixed by adjusting zero points for ops with an int8 IFM and asymmetric
weights, since the reference does not support asymmetric weights for an
int8 IFM and ignores the zero points.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I2a206a01a471a53aa864a6a3616aa23d2a5a23c8
|
Changed the convert_pad optimiser to use the op.ifm_shapes attribute
in place of the fickle op.ifm.shape (which in this case had changed due
to the optimised-out reshape).
Signed-off-by: James Ward <james.ward@arm.com>
Change-Id: I13fbd846ac8d3342afd7844d1041cfa15aaae124
|
Removed graph optimisations that are no longer needed and that caused
problems when FullyConnected operators running on the CPU were consumed
by elementwise operators in Vela.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: Ic7e66141ccd5e9aa8f0022c5ab9e7fd1ba3f6786
|
Added support for mapping the TABLE operator to a LUT.
Limitations:
- Only supported for int8
- The TABLE input must be constant
This also adds support for TFLite legalisation of Tanh/Sigmoid
(int8/uint8); see the sketch below.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I1a95f61fb02fdd42c4a690494418cc0765c8b275
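A sketch of how such an int8 LUT can be built for Tanh (scale and
zero-point values hypothetical): evaluate the function once for each of
the 256 representable input values:

    import numpy as np

    def tanh_lut_int8(in_scale, in_zp, out_scale, out_zp):
        # Dequantize every possible int8 input, apply tanh, requantize.
        x = (np.arange(-128, 128) - in_zp) * in_scale
        q = np.round(np.tanh(x) / out_scale) + out_zp
        return np.clip(q, -128, 127).astype(np.int8)

    lut = tanh_lut_int8(0.1, 0, 1.0 / 128, 0)
    print(lut[0], lut[128], lut[255])  # -> -128 0 127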
|
Memory-only operators such as Reshape, Squeeze and ExpandDims are
removed in the graph optimiser step.
- Added a semantic check that memory-only operators have the same
  quantisation parameters on the IFM and OFM.
- Added support for the ExpandDims operator.
- Addition and cleanup of related unit tests.
- Removed TOSA from the generated SUPPORTED_OPS.md documentation.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: If848d8afc58c18806e10997ed94e4dae83f30879
|
Fixed scaling for RELUs with different IFM/OFM scaling.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I0ac96326b3960c0fb025b885e06a259d24b2e684
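For reference, a sketch of the requantisation a ReLU needs when the
IFM and OFM scales differ (int8 assumed, helper name hypothetical):

    import numpy as np

    def relu_requant(x_q, in_scale, in_zp, out_scale, out_zp):
        x = x_q.astype(np.int32) - in_zp  # remove the input offset
        x = np.maximum(x, 0)              # ReLU in the input domain
        y = np.round(x * (in_scale / out_scale)) + out_zp
        return np.clip(y, -128, 127).astype(np.int8)

    x_q = np.array([-50, 0, 50], dtype=np.int8)
    print(relu_requant(x_q, 0.1, -10, 0.2, 0))  # -> [ 0  5 30]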
|
Added support for the data layout ops RESHAPE, SLICE and CONCAT.
- No support for bool_t
- Support limited to rank <= 4 and N = 1
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I487ac494b6506a2a6ba947ee758aa193194dd796
|
This is mainly to add support for depthwise conv2d with
depth_multiplier = 1. (But there are no suitable testcases; all I have
sourced have depth_multiplier set to 2, which is not supported.)
- Added support for depthwise conv2d
- Added support for removing Transpose of constant data
- Added support for removing Reshape
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I143e6246becfa78fd9f7510af0bf0d6b3fbbf2c7
|
Fixed output diff for wav2letter int16 by correcting the scaling
used for LeakyRelu.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I8be1e14c25d223dc6e42c4ec498ff4d3d9de65d7
|
Added support for:
- AVGPOOL and CONV2D with TFLite correspondence
- MAXPOOL
- additional support for replacing RESCALE ops with AVGPOOL
No support for breaking down tensors over the size supported by the
NPU.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I1d2aa50ac30a26283b3e6f1fe88cba1544b7c189
|
Updated to handle the case when the Squeeze op's IFM/OFM are the
subgraph IFM/OFM, to facilitate the removal of the Squeeze op. A NOP
is added to maintain the original tensors.
Updated pytests for the Squeeze operator.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: I623cae05e696fb16ccf29dedc42fd822601e9fd9
|
Fixed inception_v1/v3 output diffs by removing the Squeeze operator in
the graph optimisation step. The Squeeze operator removes dimensions of
size 1 from the tensor shape; the memory layout is preserved (see the
sketch below).
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: I4ceffcbb141af5ed50b0d1a9d1d67622e638c2a1
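A numpy analogy for why the removal is safe (illustrative only):
squeezing size-1 dimensions changes the shape but not the underlying
memory:

    import numpy as np

    a = np.arange(6).reshape(1, 2, 1, 3)
    b = np.squeeze(a)  # shape (2, 3), same buffer, new view
    print(b.shape, np.shares_memory(a, b))  # -> (2, 3) True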
|
- Fix bug with MEAN ops calling create_const_tensor using the
quant_value_dtype keyword argument.
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I8cff542ae840fb110ea97c0cc86bb761d5a884d3
|
Refactored supported operators by breaking out model semantics into
its own class. Model semantics are now checked right after the model
is read.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: If442b189efcd91dda01af60b2b3adedfacdf2fad
|
Remove quant_values attribute from Tensor class.
It only needs a single values attribute, holding either
quantized or unquantized values as appropriate.
Change-Id: Ie96f80ac58061b6077e0f7048dc60209fdfbcafa
Signed-off-by: James Peet <james.peet@arm.com>
|
Added basic TOSA support, enabling Vela to read and compile a .tosa
file corresponding to CONV2D + Rescale + Clamp, and to write it to an
optimized .tflite file. The optimized .tflite file will, in this case,
hold a commandstream where the Rescale and Clamp have been fused into
the CONV2D. (The optimized .tflite file is not output from Vela.)
- Added support to read a .tosa file into Vela's internal structure
- Added tosa_reader.py, tosa_mapper.py and helper files stored under
  tosa/
  - Support for this is limited to ~10 ops
- Added reader_util.py for functions common to TOSA and TFLite
- Added tosa_graph_optimiser.py
  - Added support to fuse Rescale into convolution
  - Modified handling for padding
  - Added support to fuse Clamp into the previous op
- Added graph_optimiser_util.py
  - Moved functions common to TOSA/TFLite graph optimization to this
    file
- Renamed graph_optimiser.py to tflite_graph_optimiser.py
- Added separate tosa_supported_operators.py
- Added supported_operator_util.py for functions common to TOSA/TFLite
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Ic3c540504ec8c5eb4771397fdc6882050ecf33ab
|