- Fixed an issue with the fusing of PAD and AVERAGE_POOL_2D whereby
rounding away from zero did not work: the mode requires the zero
point to be at zero, but the input padding required it to be set to
the desired zero point. This affected both int8 and int16. The
solution was to remove this requirement by applying the bias prior to
the scaling (see the sketch below)
- Refactored the rounding-away-from-zero mode
Change-Id: I8f2df69df06d2a9722315c346646e5a901cb2c3b
Signed-off-by: Tim Hall <tim.hall@arm.com>
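A minimal sketch of the interaction (illustrative values, not Vela code): rounding away from zero is symmetric around zero, so a non-zero zero point must be applied after rounding, not before.

```python
import numpy as np

def round_away_from_zero(x):
    # Halfway cases move away from 0: 2.5 -> 3, -2.5 -> -3
    return np.sign(x) * np.floor(np.abs(x) + 0.5)

acc = np.array([-2.5, -1.5, 1.5, 2.5])  # scaled accumulator values
zero_point = 10

# Correct: round around zero first, then shift to the zero point
round_away_from_zero(acc) + zero_point  # [ 7.  8. 12. 13.]

# Wrong: shifting first makes every value positive, so the negative
# accumulators round in the wrong direction
round_away_from_zero(acc + zero_point)  # [ 8.  9. 12. 13.]
```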
|
|
Change-Id: I50b85953bff13bd6ec0648dec5d86b8ac749137a
Signed-off-by: Raul Farkas <raul.farkas@arm.com>
|
|
- Weights are internally cloned and reshaped/transposed when
running on the NPU. This already happens in the reader. If
the op is passed through to the CPU, there is code that writes
back these clones, but with another round of reshape/transpose.
This adds extra tensors to the optimized file compared to the
original file if the original tensors are subgraph inputs.
- If the op is passed through to the CPU, the clones should not
be written to the file. Solved this by setting src_tensor
when making the clone (see the sketch below).
Change-Id: I9f55d542c099882882920bffe8e15b43b2ca2c8d
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
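A minimal sketch of the src_tensor idea (hypothetical classes and field names; Vela's actual Tensor API differs): a clone that records its source lets the writer emit the original tensor instead of the NPU-internal copy.

```python
import copy

class Tensor:
    def __init__(self, name, shape):
        self.name = name
        self.shape = shape
        self.src_tensor = None  # set on clones, pointing at the original

    def clone(self):
        new = copy.copy(self)
        new.src_tensor = self  # remember where the clone came from
        return new

def tensor_to_write(tens):
    # When the op falls back to the CPU, serialize the original subgraph
    # tensor, never the reshaped/transposed clone
    return tens.src_tensor if tens.src_tensor is not None else tens
```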
|
|
- Quantization for the OFM was added for the ArgMax operator
as a workaround to avoid a crash in the weight compressor.
This quantization is now removed.
- The weight compressor expects all tensors to have quantization.
Updated the code to use scale = 1.0 and zero point = 0 for tensors
without quantization.
Change-Id: I6816dce2db55f7d795d19f88d7fbe7ee419347fc
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
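A sketch of the fallback (field names are illustrative): tensors without quantization are treated as identity-quantized.

```python
class QuantizationParameters:
    def __init__(self, scale_f32=1.0, zero_point=0):
        self.scale_f32 = scale_f32
        self.zero_point = zero_point

def effective_quantization(tens):
    # The weight compressor assumes every tensor carries quantization
    # parameters; fall back to scale 1.0 / zero point 0 when none exist
    if tens.quantization is None:
        return QuantizationParameters()
    return tens.quantization
```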
|
|
Remove the op_index constraint and force linear format for all Conv2D
ops whose strides can be optimised.
Change-Id: Idef3508ab074ea9abeacac030eaaa15a00ad1211
Signed-off-by: Raul Farkas <raul.farkas@arm.com>
|
|
Refactored move_constant_data in the scheduler. The use case currently
only works for LUT tensors, so the logic was simplified. To make it
work for other tensors, one would also have to take memory usage into
consideration when building cascades, and
use_fast_storage_for_feature_maps would also be affected.
Change-Id: Ic8de53b65a2c17d34515002d7f184d0ab1830222
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- The problem was that networks with resource variables had not
been considered. The major problem was the graph traversal,
where these ops were not visited, resulting in an empty subgraph
that caused the crash.
- Fixed the problem by attaching virtual tensors to the ops, simulating
subgraph output. These tensors are only used to make the graph
traversal work.
- Fixed serializing of the attribute container and shared_name
- Fixed the subgraph index for the CallOnce operator
- All resource variable ops are pushed to the CPU
Change-Id: I815f9c81baf7a3fbb686e895980b462f58208b6e
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- The issue is due to undefined behaviour when casting a NumPy float
to a NumPy unsigned integer, which occurs in create_const_tensor()
(illustrated below)
- The fix is to make sure that the values are first cast to a Python
float
- In addition, the values datatype argument has been removed from
create_const_tensor() to stop the tensor and values datatypes from
getting out of sync
Change-Id: I134b9be8c941b361929a5ae7db8cb35f2e9728f2
Signed-off-by: Tim Hall <tim.hall@arm.com>
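A minimal illustration of this class of problem (values chosen for demonstration, not taken from create_const_tensor):

```python
import numpy as np

neg = np.float32(-1.0)

# float -> unsigned conversion of a negative value is undefined
# behaviour in C, so the result is platform dependent and newer NumPy
# versions warn about it:
unsafe = np.uint32(neg)

# Going through Python float/int first is well defined everywhere:
safe = np.uint32(int(float(neg)) & 0xFFFFFFFF)  # always 4294967295
```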
|
|
- Update copyright notices to use the SPDX format and add the OSS mail
as contact.
- Update years on files where they had been missed.
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I7e9715ea4e17b76252728c708e46df12ad67ab1f
|
|
- A bug was introduced by using the original_shape attribute that
causes CPU CONV2D ops to fail to run due to an incorrect weight
tensor shape
- This was because original_shape was not modified when a
transpose was performed on the weight tensor
- The fix is to transpose original_shape just like the current
shape
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Ied72316463d26c502cf931b9dd5784041c42ab66
|
|
- The CPU side always needs to work with the original tensor shape.
Due to a bypass memory optimization, the IFM produced by the CPU
was stored with the wrong shape in the optimized file.
- Store the original tensor shape so it can be correctly
written to the optimized file.
Change-Id: I666dbcb0acd806ad208c0f925a51dfc25421688b
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
|
|
- Refactored an erroneous if statement that allowed illegal
swapping between ifm1 and ifm2 for elementwise operators.
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
Change-Id: Iec571f710824432edac9104d960f199f33a1b241
|
|
Make the address_for_coordinate function a bit easier to read
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I854e1643a39108edc8b1de95198d30a1891fdfd1
|
|
- Added support for Resize Bilinear with half pixel centers for int8 and
uint8.
- Utilizes the new "TILE" padding mode.
- Utilizes ofm stride multipliers and modified tile base offsets to
write OFMs interleaved.
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I37fa77c022a368f05fda0ead75d8696c9205f833
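For reference, the standard half-pixel-centers coordinate mapping (generic formula, not Vela-specific code); the negative coordinate at the border is what makes edge padding of the IFM necessary:

```python
def src_coordinate(dst, scale):
    # Half pixel centers: both grids are shifted by half a pixel before
    # scaling, so sample positions align on pixel centres
    return (dst + 0.5) * scale - 0.5

# Upscaling 2 -> 4 pixels (scale = in/out = 0.5):
[src_coordinate(d, 0.5) for d in range(4)]  # [-0.25, 0.25, 0.75, 1.25]
```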
|
|
Allow sparse writing of the OFM by multiplying the H/W/C strides of the
OFM with the values of ofm_stride_multiplier
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I65d742ad36ad3154e9914cdd22e2da928ad1f095
|
|
Implement a new padding mode which pads two edges of the IFM with the
current values of those edges
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
Change-Id: I8523e0cabdac80b48710703859003e33050cc150
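The described behaviour matches NumPy's edge mode applied on two sides, for example:

```python
import numpy as np

ifm = np.array([[1, 2],
                [3, 4]])

# Pad one row at the bottom and one column on the right with the
# current values of those edges:
np.pad(ifm, ((0, 1), (0, 1)), mode="edge")
# array([[1, 2, 2],
#        [3, 4, 4],
#        [3, 4, 4]])
```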
|
|
* Added a generic function which checks if the underlying shape of a
FullyConnected operation is 2D and performs shape reduction
* FullyConnected operations with more than two dimensions now run on
the NPU if the above case is satisfied
* Refactored constraint_fc_output_2d and rewrite_fully_connected_input
* Added a unit test to confirm this functionality
Signed-off-by: Ayaan Masood <Ayaan.Masood@arm.com>
Change-Id: I0e29c767e5b84841eb53bbc44464b36a454f7b38
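A sketch of the shape reduction described (hypothetical helpers; the exact condition Vela checks may differ): a rank > 2 shape whose leading dimensions are all 1 is effectively 2D.

```python
def is_effectively_2d(shape):
    # e.g. (1, 1, 8, 16) carries the same data as (8, 16)
    return len(shape) > 2 and all(dim == 1 for dim in shape[:-2])

def reduce_to_2d(shape):
    return tuple(shape[-2:]) if is_effectively_2d(shape) else tuple(shape)

reduce_to_2d((1, 1, 8, 16))  # (8, 16)
```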
|
|
- Changed comments to docstrings on QuantizationParams
- Simplified the op type to op name conversion
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I2fdf5922cc17944c9bd37917a85fdfe50a1e651d
|
|
- Fixed a bug due to ResizeBilinear modifying the attributes of a
shared IFM
- The ifm_resampling_mode is now an attribute of an operator rather
than a tensor
- Changed all calls to try_block_config() to use the attribute rather
than recalculating it in multiple places
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I4641e9cd6b049bd4186776d98e3e751c5e5bcc06
|
|
Add mypy to pre-commit and clean up all reported errors.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: If7dc869f5fecdb0e2db40f14e7d9db21aa33df71
|
|
This change will allow the subgraph's input tensor
to be reused/overwritten by the output from an elementwise op
if there is only one consumer attached to the input tensor.
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
Change-Id: I317188af11a5470614770e18dc8973462fd5f21c
|
|
Change-Id: I645496536a6bddf2bd289a87be9d7cef11693954
Signed-off-by: Diqing Zhong <diqing.zhong@arm.com>
|
|
Fixed by adjusting zero points for ops with an int8 IFM and asymmetric
weights, since the reference does not support asymmetric weights for an
int8 IFM and ignores the zero points.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I2a206a01a471a53aa864a6a3616aa23d2a5a23c8
|
|
This commit fixes a number of bugs where per-axis
quantization would make Vela crash and would not
be properly recognized.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I50a461d200274b43ec76f3a7357bf66db6d49964
|
|
Added support for elementwise operations:
- Support for up to rank 6
- Support for batch > 1 for rank 4
- For binary elementwise ops this includes handling
of broadcasting in dimensions above the H dimension
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I73850bbfb288077a99bd2ceecbf989172016da24
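For reference, broadcasting in a dimension above H (the N axis here), shown with NumPy, whose broadcasting rules match this case:

```python
import numpy as np

ifm1 = np.ones((2, 4, 4, 8))  # N, H, W, C
ifm2 = np.ones((1, 4, 4, 8))  # N == 1 broadcasts against N == 2

(ifm1 + ifm2).shape  # (2, 4, 4, 8)
```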
|
|
This commit fixes one assert regarding rolling buffers for 3D tensors.
It also addresses another issue where the incorrect weight buffering was
proposed for cascaded operators.
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: I2501f35e5668b3085d917751cfc8002d250973d8
|
|
Bug fix in the cascade builder: tensors produced by operators requiring
a full OFM, or consumed by operators requiring a full IFM, could be
added as intermediate buffers to a cascade.
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
Change-Id: Id84e9e1940bf85ab4cbc42a03e65f64da16a094c
|
|
Remove quant_values attribute from Tensor class.
It only needs a single values attribute, holding either
quantized or unquantized values as appropriate.
Change-Id: Ie96f80ac58061b6077e0f7048dc60209fdfbcafa
Signed-off-by: James Peet <james.peet@arm.com>
|
|
- Merged dev/scheduler at 83639f90e8c828f70de6e29142355a940224959b
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I0050529d4b42da93768c7264296434dd877fb5b4
|
|
Refactored the check for whether a non-linear tensor format can be
used:
- The avoid_NHCWB16 flag was replaced with needs_linear_format
- Restriction checking was moved to one function in the graph optimiser
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Iec5c7996a1a6039cad052197f1ae56f7c0290440
|
|
This commit adds support for emulating the behavior
of the QuantizedMeanOrSum implementation of MEAN in
TensorFlow Lite.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: Ifd24e0e678e2f85cd66ab82deeaaf010d5351b1e
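Loosely, QuantizedMeanOrSum accumulates an integer sum and applies the mean and requantisation as a single float scaling step; a rough sketch (not bit-exact, and assuming an IFM zero point of 0):

```python
import numpy as np

def emulated_mean(ifm, axis, ifm_scale, ofm_scale, ofm_zero_point):
    # Accumulate in int32, then fold 1/num_elements into the
    # requantisation scale instead of dividing first
    acc = np.sum(ifm.astype(np.int32), axis=axis)
    num = int(np.prod([ifm.shape[a] for a in np.atleast_1d(axis)]))
    scale = ifm_scale / (ofm_scale * num)
    return np.round(acc * scale) + ofm_zero_point
```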
|
|
Fixed pass through of LSTM operator.
Change-Id: I23140c69ab6cdc83f6bb8129256b4cc6a7c5ffac
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
|
|
When running specific networks containing LeakyReLU operators, Vela
would crash when cloning the OFM of a LeakyReLU operator. In this
procedure, a deepcopy would try to copy an OperatorInfo object, which
caused an error. This was fixed by replacing the deepcopy with a
shallow copy and then manually re-creating new instances of sensitive
variables (see the sketch below).
Signed-off-by: erik.andersson@arm.com <erik.andersson@arm.com>
Change-Id: I46917858896fbdf52245dac6c6d9c18bc7ecdd0d
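A sketch of the pattern (field names illustrative): shallow-copy the object and re-create only the fields that must not be shared, instead of deep-copying members that cannot be deep-copied.

```python
import copy

def clone_tensor(tens):
    # deepcopy would recurse into objects like OperatorInfo that do not
    # survive deep copying; copy shallowly and re-instantiate only the
    # sensitive fields
    new = copy.copy(tens)
    new.quantization = copy.copy(tens.quantization)
    new.ops = list(tens.ops)
    return new
```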
|
|
- Removed reshapes in the original graph
- Removed the addition of reshapes to the
optimized graph
- Reshapes with different ifm/ofm quantisation will remain
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I94862be53dac0d7434815e2aee5ca678228495f8
|
|
Fixed assertion when reading back in an ethos-u custom op.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I275ec9187ffead1e96f2522ecbd658328fa4ef69
|
|
This reverts commit df0a5905177f3a1b836076bc3f9f39b2e86f1794.
Reason for revert: <INSERT REASONING HERE>
Change-Id: I891c66fb29db9d25e942947e8d1c29a10610de51
|
|
This reverts commit bf31d647dc5df47410ee577b12427ddf076d816b.
Reason for revert: <INSERT REASONING HERE>
Change-Id: I7b6c585b7658f94dbaa916c2b6bfe9fb463b8d37
|
|
Add 4D shape class for op Ifm/ofm shapes
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Ic0a98da9d2f9d085605e39a9ab5a26bad6e702a3
|
|
Added ifm/ofm shapes to ops and changed the code to rely on these
shapes
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I571535a1dcadc2bdb04a3c727a8e1c49703b174d
|
|
Due to an issue with potential cyclical imports, especially when
running individual parts of vela standalone, for example with pytest,
the specialised error functions are moved out of errors.py to their
respective locations.
The use of getattr over isinstance prevents the need to import the
tensor/operator class, avoiding the cyclical import issue (see the
sketch below).
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: If8cee4b1a2562660c6a47e1c7aeb5d7fd4dd1fca
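A sketch of the getattr pattern (attribute and field names are illustrative): duck-typing on an attribute avoids importing the Tensor class into errors.py just for a type check.

```python
def describe(obj):
    # isinstance(obj, Tensor) would force an import of Tensor here,
    # re-creating the circular import; probing for an attribute that
    # only tensors carry does not
    if getattr(obj, "original_shape", None) is not None:
        return f"tensor '{obj.name}'"
    return str(obj)
```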
|
|
Added __lt__ for Tensor to avoid errors when sorting tensors.
Change-Id: I19bb591ef17aa0d4a3389da411bd8863c2218d55
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
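For context, tensors can end up inside sorted tuples; when the leading elements tie, Python falls back to comparing the tensors themselves and raises TypeError unless __lt__ is defined. A minimal sketch (the ordering key is illustrative):

```python
class Tensor:
    def __init__(self, name):
        self.name = name

    def __lt__(self, other):
        # Gives sorted() a deterministic tie-breaker for tensors
        return self.name < other.name

sorted([(4, Tensor("b")), (4, Tensor("a"))])  # ties break on tensor name
```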
|
|
Signed-off-by: Diqing Zhong <diqing.zhong@arm.com>
Change-Id: I4a5c53d0c5957595fc639b174b2b227ea043d409
|
|
Minor refactoring to use f-strings.
Improved the Error classes to correctly inherit from the base class.
Used existing exception classes instead of plain exceptions where it
makes sense.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: I0941c04e91010da1db77299517a8e2d896371e77
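A sketch of the inheritance fix (class bodies are illustrative): calling the base constructor makes str(exc) and standard exception handling behave correctly.

```python
class VelaError(Exception):
    """Base class for vela exceptions"""

    def __init__(self, data):
        super().__init__(data)  # without this, str(exc) would be empty
        self.data = data

class InputFileError(VelaError):
    def __init__(self, file_name, msg):
        super().__init__(f"Error reading {file_name}: {msg}")  # f-string
```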
|
|
Change-Id: I1b35e039f43471cc0f61cb46ed4d5aff5469d11d
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
|
|
Replace conditional checks against sets with tuples.
When uniqueness or complex set operations are not required, it is
quicker to use tuples instead.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: Ie8732c8d46067244963936c53f0ec81adda50372
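The pattern in question, in runnable form (op names illustrative):

```python
op_type = "Conv2D"

# Before: membership test against a set literal
if op_type in {"Conv2D", "DepthwiseConv2D", "FullyConnected"}:
    print("convolution-like")

# After: for a handful of constants, and with no need for uniqueness or
# set operations, a tuple literal does the same job more cheaply
if op_type in ("Conv2D", "DepthwiseConv2D", "FullyConnected"):
    print("convolution-like")
```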
|
|
Vela only supports per-channel scaling for
convolution ops. This commit adds a check that
puts ops with per-channel scaling on the CPU.
A caveat worth mentioning is that neither
TensorFlow Lite nor TensorFlow Lite Micro supports
per-channel scaling for the CPU-placed op;
however, the problem is moved away from Vela.
This commit also changes a small utility function
in supported_operators.py used for docstring
formatting.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I9ed090592f1d05dd4566d3e54dba1ef405299383
|
|
- Improved tensor and scaling query functions
- Fixed bug in convert_batched_fc_to_conv
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Ibc3d14036540f27cf5e993beb2163d3e0f5e5933
|
|
None inputs and unsupported tensor shapes caused asserts when
marking tensor purpose/format.
Change-Id: I4498b61576f529c1a594341cfbb6ba278c6e7ec5
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
|
|
For IFM-streamed cascades, bias tensors are read several times.
Move these tensors to fast storage and add DMA commands.
Change-Id: I630f6275986c1b5e3f126c925b11e22500fb1128
Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>
|
|
Added an option to set whether a Tensor clone should be
seen as unique or not.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Ie51c1a5e84b535380d498b105aa18ccba1c8b27c
|