Age | Commit message | Author |
|
- Added release information
Change-Id: I6d6d80460658d444d52d0abb17a2cb42954f992c
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
- Fixed output diff for FullyConnect int16
- The problem was that the wrong rounding mode was used
- The reference uses natural rounding for FullyConnect int16
Change-Id: I209313b6f89fed01678a448a935d5f6904b41057
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
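For context, a hedged sketch of the distinction (the exact rounding-mode naming in Vela is an assumption here): natural rounding is commonly implemented as floor(x + 0.5), which breaks ties upward, whereas Python's built-in round() uses round-half-to-even and breaks ties toward the even integer.

```python
import math

def natural_round(x: float) -> int:
    # Natural rounding: add 0.5 and floor, so ties break upward
    return math.floor(x + 0.5)

# The two modes differ exactly at ties:
# natural_round(2.5) -> 3, while round(2.5) -> 2 (half-to-even)
```

Picking the wrong mode shifts results by one LSB on tie values, which is enough to cause an output diff against the reference.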
|
|
Change-Id: I1458009f4b92c1a599efa3a63d6768148e55606d
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
Change-Id: I3c13118e14195a5fb8e522a38b205b75fb07b74b
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
- A Pack op is implemented by several AvgPool ops. Depending
on the number of CPU ops and the graph topology, the AvgPool ops
could end up in different nodes. One of these nodes had the Pack
output referenced to it, but the other node did not. As a result,
the full graph was not traversed when calculating CPU ops.
- The compiled network works as intended, but the number of
reported CPU ops was wrong.
- Added a new method that extracts the ops using the passes in
the sub graphs, which fixes the problem.
Change-Id: Ie88ebd4669783559258ae763737a4c7f86c905f8
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
Fixed a problem where the compiler incorrectly called the
mlw_codec to create an empty weight stream for the second
weight core.
Also added code to the mlw_codec to detect this as a value
error rather than a memory error.
Change-Id: I463846cecb1178f8fbf04dc3e39bd6965cb8ddfc
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
- Small improvement that reduces compilation time
Change-Id: I9e5cd58674f719f5dedeb30ea42787dc996a22d6
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
as inclusive"
This reverts commit dbe4df4ccddafac9cbc345a4a03a42c241248e88.
- The previous patch had a mostly negative effect on performance
Change-Id: I4003d50b07de9c63d9001ceb0a3a0bc966c0b861
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
- Removed the unused function get_block_config_for_npu_op()
Change-Id: If36e4fe65286c4e13e127473d20971a1b6eaa94b
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
Added missing memory allocation checks to mlw_codec.
Change-Id: I20c04d5d9c934b9c715a2b2049705f853d90825a
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
Adds support for setting the accumulator type using the quantized_bias_type attribute
Change-Id: Ibde1149143b510a1c650a5a037d3ab92d878d7cd
Signed-off-by: William Isaksson <william.isaksson@arm.com>
|
|
inclusive
- The issue is that live range start and end times are inclusive,
but the function that calculates whether two ranges overlap treats
them as exclusive
- The fix is to change the comparison to be inclusive
Change-Id: Iab5ceec7be2a5fdf0d6ecef81509a88c74e7108c
Signed-off-by: Tim Hall <tim.hall@arm.com>
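The off-by-one can be illustrated with a small sketch (function names and signatures are illustrative, not Vela's actual API):

```python
def ranges_overlap_inclusive(s1, e1, s2, e2):
    # Live range endpoints are inclusive: [s1, e1] and [s2, e2] overlap
    # even when one range ends exactly where the other starts
    return s1 <= e2 and s2 <= e1

def ranges_overlap_exclusive(s1, e1, s2, e2):
    # Treating the end time as exclusive misses that boundary case
    return s1 < e2 and s2 < e1
```

For ranges [0, 5] and [5, 9] the inclusive check correctly reports an overlap at timestep 5, while the exclusive check does not — which would allow two live tensors to be assigned overlapping memory.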
|
|
- If an NPU op is followed by a convolution op with dynamic
weights, the optimized file ends up containing a duplicated
tensor called _cpu.
- Another problem is that an empty bias tensor is added
in the reader.
- The fix is to ignore these CPU ops in both the reader
and the writer.
Change-Id: I476b4f6062e26cca4ba589df694a99ef79b0f6d4
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
Updates to TensorFlow 2.15. No StableHLO operators were added to Vela since these are subject to change and have almost no runtime support.
- FlatBuffers version was unchanged.
Change-Id: I9a506a2dcc2e0bc2498742e857bbb6d69b19ac1b
Signed-off-by: William Isaksson <william.isaksson@arm.com>
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
- The issue was that the peak memory usage was only evaluated at the
start of the tensor's lifetime and not across its whole lifetime
- The fix is to look for the maximum usage between start and end
Change-Id: Iff4f390f3a017f1df0f8933796fa5282db7870db
Signed-off-by: Tim Hall <tim.hall@arm.com>
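A minimal sketch of the fix (function name and data layout are illustrative):

```python
def peak_usage(usage_per_step, start, end):
    # Evaluate memory usage over the tensor's whole inclusive
    # [start, end] lifetime, not only at the start timestep
    return max(usage_per_step[t] for t in range(start, end + 1))
```

Sampling only `usage_per_step[start]` under-reports whenever the high-water mark occurs later in the tensor's lifetime.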
|
|
- Documents legality requirements of CMD1 payloads
- Fixes a missed case in the command stream checks.
Signed-off-by: William Isaksson <william.isaksson@arm.com>
Change-Id: I9b33dedfa66650fa3100f61fd158a385818b4d52
|
|
- Added release information
- Modified SUPPORTED_OPS.md version info
Change-Id: I3ead55db45c84821c426645e488dfb765166d20f
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
- Updated TensorFlow Support section
Change-Id: Ic2551f44e7dfa996a5dcc8840d480b7985415a0a
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
- Changed homepage link from cgit to gittiles
- Clarified tensor alignment is in Bytes
Change-Id: I9fd912c17d61f9add11493e031bbb620271c68eb
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
- Changed deprecated method of getting package version info
- Updated pylint version to be Python 3.11 compatible
Change-Id: I68aae2155098c834653d404c78acf8df86eb88f8
Signed-off-by: Tim Hall <tim.hall@arm.com>
|
|
- Updated example to --cpu-tensor-alignment in OPTIONS.md
Change-Id: Id0b74a9aac4dd4384a4b7c74eea743c29c3c8e5e
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- Added missing constraint message for stride height by
adding the constraint_stride_width_no_upper_limit to AVERAGE_POOL_2D
Change-Id: Ib716fb19e44cb8735b52270b557998d4cbf5cb1c
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- Added semantic checks for Transpose
- Added unit tests for semantic checks
- Updated SUPPORTED_OPS.md
Change-Id: I3fcf13120f4b6811f8de27711996cdb9c19c9847
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- Added graph optimiser function to convert TRANSPOSE op
into an AvgPool op with swapped stride for height and width
- Added TRANSPOSE supported op check
- Added unit tests for TRANSPOSE supported op check
- Updated SUPPORTED_OPS.md
- Fixed problem in pass packing when optimizing the pass list.
Old problem, but now seen when moving TRANSPOSE from cpu.
Change-Id: I0a0ef420b0fb8241090c2e2434622881105cde15
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- When adding extended stride support for CONV_2D a
regression was introduced for AvgPool causing an
output diff for a particular test case.
- The reason was that the logic for forcing the
zero point to zero when generating the cmd stream
did not have a check for explicit padding.
- Updated logic to also include check for explicit
padding.
Change-Id: Iee4893a83a05279e592fe230f4d66d9c9ddb3e05
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
The constraint check for the IFM/IFM2/OFM strides was coded
according to an incorrect version of the specification.
Changed the check to verify that the strides are a multiple
of 16 bytes. Also changed the wording in the exception message
to clarify whether it is a stride or a value violating the
constraint. The test case had two stride settings violating the
constraint; after this change one of them still fails the check,
so no change to the tests, except in comments clarifying what
is being tested.
Change-Id: I93815d8bb08303b5f747c947c0bbd461b12895e3
Signed-off-by: Björn Davidsson <bjoern.davidsson@arm.com>
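The tightened check could look like the following sketch (the function name and message format are assumptions, not Vela's actual code), with the message naming which stride violates the 16-byte constraint:

```python
def check_strides_multiple_of_16(strides: dict) -> None:
    # strides maps a tensor role to its stride in bytes,
    # e.g. {"IFM": 32, "IFM2": 16, "OFM": 48}
    for name, stride in strides.items():
        if stride % 16 != 0:
            raise ValueError(
                f"{name} stride of {stride} bytes is not a multiple of 16"
            )
```

Raising with the offending name and value makes it clear whether a stride or some other value tripped the constraint.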
|
|
- A reshape followed by an activation function was converted
to a Memcpy with a fused activation. The problem is that Memcpy
does not support activation, so no activation was executed.
- Added logic to prevent activation functions from being fused
with the Memcpy.
Change-Id: Ibc7d985e5037146dd1f6cb2601407d0f8b865ac6
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- Added support for stride_h > 3 when ofm height is 1
- Added support for stride_w > 3 when ofm width is 1
- Updated constraints
- Updated tests
- Updated SUPPORTED_OPS.md
Change-Id: I8f89909b05a0f052df5f03702966cee50da61cfc
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
Update max_outstanding_kernels to 2 and remove unit tests expecting
values of 2 or 3.
Change-Id: Ib8a3a88d3378d3ce84427935c91c7a46f04bc9ab
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
- Update to TensorFlow 2.14 and minimum required Python version to 3.9.
- Update version pins on NumPy and FlatBuffers.
- Add constraint to Offset attribute of StridedSlice operator
Change-Id: I8c7122def963202e5f47e92b62be607935ed05cf
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
The operator mapping for the RANDOM_UNIFORM operator was missing the
seed and seed 2 options which resulted in those options being removed
when the operator was passed through Vela.
Change-Id: I8469c239ec1d20d775c31a52e4954baf159643f2
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
Markdown's git repository has moved to a different location.
Change-Id: Iae401c1d283d937347cbce546836470647333201
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
|
|
- Fixed a regression where DepthWiseConv used in argmax int64
had the wrong shape.
- The error was introduced when adding support for a new operator
that changed the weight shape for the cast utility function. That
change only worked because reorder_depthwise_weights was called
later. Since argmax is converted after reorder_depthwise_weights
the cast operator in argmax got the wrong shape.
- The fix is to set the correct weight shape in the cast operator
and then mark that the weights have already been transposed correctly.
Change-Id: I61f5694f078cfcaf0d46d43faead6eb7e0a23ade
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
Update to 23.1.21
Change-Id: I2a9aaa7cbb725c2f417b87577a1f8d6ad4697d76
Signed-off-by: William Isaksson <william.isaksson@arm.com>
|
|
- Added SQUARED_DIFFERENCE support
- Updated SUPPORTED_OPS.md
Change-Id: Id83d9d92129e645390c7979759dfdeff7a14c2ee
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
Only set stride to (1, 1) if kernel, stride and IFM shape are all
equal. Also set padding to VALID to handle ops with SAME padding.
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
Change-Id: Id3cc34686f09667ea21541fac432351555344e3d
|
|
This fixup is not relevant for Resize ops.
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
Change-Id: I81b9d3c8a6dd820b1e5d747d754100282b93c641
|
|
- Adds 3 ops: Bitcast, BitcastXor, RightShift
Change-Id: Ia9721c69d4f3da0deba7526addb95a9a54e63adf
Signed-off-by: William Isaksson <william.isaksson@arm.com>
|
|
- Support for stride WxH 1x1
- Support for stride WxH 2x1 when IFM and KERNEL
are 1D shapes with height 1
- Added test to supported operators
- Updated SUPPORTED_OPS.md
Change-Id: Ic1abead8399a5e14a78d962f8aded0d3b3dbfcc4
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
Extend the error message of RecursionError when reaching default
recursion depth with instructions to use the "--recursion-limit"
option in Vela.
Change-Id: I5c92d49b99203268c4b988f421afe7013ac3511a
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
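A sketch of the pattern (the wrapper name is illustrative; only the "--recursion-limit" option name comes from the commit):

```python
def with_recursion_hint(fn, *args):
    # Re-raise RecursionError with a hint appended, so users learn
    # about the CLI option instead of seeing a bare stack-depth error
    try:
        return fn(*args)
    except RecursionError as e:
        raise RecursionError(
            str(e) + '. Try increasing the limit with the '
            '"--recursion-limit" option.'
        ) from e
```

Chaining with `from e` preserves the original traceback while the message gains the actionable hint.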
|
|
There are networks out there with Pool ops where filter (W, H)
equals IFM (W, H) equals stride (W, H). The stride is technically
too large for the NPU, but we can actually run these ops on the
NPU, since the filter is large enough that the window doesn't
slide. To support these ops we need to fix up the stride so that
later checks don't put the op on the CPU.
Change-Id: I8f0a46b26fb94ee76c33748589536cc5ba07ea59
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
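A hedged sketch of the fix-up described above (names and shapes are illustrative, not Vela's internals):

```python
def fix_pool_stride(kernel, stride, ifm, padding):
    # When filter (W, H) == IFM (W, H) == stride (W, H), the window
    # never slides, so the too-large stride can safely be rewritten
    # as (1, 1); padding becomes VALID since the single window
    # already covers the whole IFM
    if kernel == stride == ifm:
        return (1, 1), "VALID"
    return stride, padding
```

Ops that don't match the pattern keep their original stride and padding, so only the degenerate single-window case is rewritten.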
|
|
This conversion is already done in the pass packing stage, but
doing it in the graph optimiser stage is better.
Change-Id: Ib9baa98d115cf88491ce39936972a93467a378ce
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
|
|
- If an NPU op is followed by a convolution op that runs on the CPU,
the optimized file ends up containing a duplicated tensor called _cpu.
Functionality-wise this is not a problem, but the graph will look
strange in a graph viewer.
- This error was introduced when removing duplicate weight
tensors; the above use case was not considered in that patch.
- The fix is to make sure that only the weight and bias tensors are
modified.
Change-Id: I576f13650f1f9d3d50a421ab7100fc8b5ab62657
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
* Using the serialization_lib main branch to update statically copied
files, sha 5f920211ac23393a7b98a0d358bfbfc3232d5c8f (v0.80.0)
* All files within the ethosu/vela/tosa are copied from that revision
* Note: hope to move to serialization_lib as a pip module in future
* Modified the ethosu/vela/{tosa_mapping,tosa_reader}.py to use
v0.80.0 TOSA FlatBuffers implementation
* These are the additional changes made to support this new version,
with changes in the format of the FlatBuffers file and where various
values are stored. Either changing from input to attribute, or
moving to different attributes.
Signed-off-by: Rob Elliott <robert.elliott@arm.com>
Change-Id: I5e1fcc2a9964148619be3477adf1e88e84cbae2d
|
|
- Added release information
- Modified SUPPORTED_OPS.md version info
- Update README.md and classifiers in pyproject.toml to specify Python
3.10 as recommended and tested version
Change-Id: I78e5752846f261d4713b89c8efe447bcb9c095dd
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
|
|
- RSQRT is only defined for positive numbers, and
therefore the zero point and actual input value
will have an impact
- Clamp the range to avoid crashing. As long as the actual
input is within the valid range everything works. If the input
is not valid, the reference will crash and not generate
any output
Change-Id: I1082b508d9cd85ad4b017e7b786cfff730585172
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
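A sketch of the clamping idea (the dequantization details and epsilon are assumptions for illustration):

```python
import math

def rsqrt_real_input(q, zero_point, scale, eps=1e-9):
    # Dequantize, then clamp to a small positive epsilon so that
    # 1/sqrt(x) stays defined even when the zero point or the
    # quantized value puts the real input at or below zero
    x = max((q - zero_point) * scale, eps)
    return 1.0 / math.sqrt(x)
```

Without the clamp, a non-positive dequantized input would raise a math domain error instead of producing any output.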
|
|
- now only converts the array directly if ndim==0
Signed-off-by: William Isaksson <william.isaksson@arm.com>
Change-Id: Id23e419bc7dd717f9694013180d4609819fd2f56
|
|
- npu_performance now uses write/read shapes instead of ifm/ofm
shapes for memory cycle estimations.
- also fixes a would-be bug in the tflite_graph_optimiser, where one
read shape was not Shape4D.
Change-Id: I2067069a713d2cf9e65a5cc227e803de79940fff
Signed-off-by: William Isaksson <william.isaksson@arm.com>
|
|
PAD input tensor shape plus paddings must equal output tensor shape.
Change-Id: Icc5dea9bf6a8f6e1c8402f4d9af4d9796e8ef1aa
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
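The constraint can be expressed as a small check (names are illustrative, not Vela's actual constraint function):

```python
def check_pad_shapes(in_shape, paddings, out_shape):
    # paddings holds per-axis (before, after) pairs, as in the PAD
    # operator; each output dim must equal the input dim plus both pads
    return all(
        i + before + after == o
        for i, (before, after), o in zip(in_shape, paddings, out_shape)
    )
```

For example, a 1x4x4x3 input padded by one on each side of H and W must produce a 1x6x6x3 output; any other output shape fails the constraint.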
|
|
- Documented high-level and register-level command stream options
- Changed High-Level command stream display to show the name of the
command
- Fixed an issue with some operators not being displayed by the
CLI option --verbose-operators
- Changed an unneeded print in pass packing to a more useful assertion
Change-Id: I9d53f19f4e32d0478209bc964724c27c935f66d6
Signed-off-by: Tim Hall <tim.hall@arm.com>
|