Age | Commit message | Author
4 days | MLBEDSW-8969: Enable weight buffering for fully connected with batch shape (HEAD, main) | Johan Alfven
- A fully connected op with a batch shape uses its weights more than once, so models containing this type of fully connected op benefit from weight buffering.
- If a fully connected op with this shape is detected, it is changed to a conv2d and the normal weight buffering flow is used.
Change-Id: I272741a32390e036d5e04bd5af41d4538162e86e
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
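A fully connected op applied to a batch is mathematically a 1x1 convolution over an IFM whose height carries the batch dimension, which is why rewriting it as conv2d lets the existing weight-buffering flow apply unchanged. The NumPy sketch below only illustrates that equivalence; the shapes and names are assumptions for the example, not Vela's internal representation.

```python
import numpy as np

# Hypothetical shapes, chosen only to illustrate the rewrite.
batch, in_ch, out_ch = 4, 16, 8
ifm = np.random.rand(batch, in_ch).astype(np.float32)
weights = np.random.rand(out_ch, in_ch).astype(np.float32)

# Fully connected: every row in the batch reuses the same weight matrix,
# which is what makes weight buffering worthwhile.
fc_out = ifm @ weights.T  # (batch, out_ch)

# The same computation expressed as a 1x1 conv2d over an NHWC IFM whose
# height carries the batch dimension.
conv_ifm = ifm.reshape(1, batch, 1, in_ch)          # N=1, H=batch, W=1, C=in_ch
conv_out = np.einsum("nhwc,co->nhwo", conv_ifm, weights.T)

assert np.allclose(fc_out, conv_out.reshape(batch, out_ch))
```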
4 days | MLBEDSW-8973: MLCE: Fix assert in build pass links | Johan Alfven
- An assert was triggered in build pass links because a concat op is split into several avg pool ops that run in different custom ops. The code did not expect a pass to have a dependency on itself.
- Fixed the assert to handle this special case.
Change-Id: Id03b1145b19c25bf967a1061aa5ecf559b3bc1cc
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
5 days | Reformat code to align with pre-commit | Per Åstrand
Signed-off-by: Per Åstrand <per.astrand@arm.com>
Change-Id: Idc6f6959bfc7eabce2f5b6e0d4935d292dcf6618
2024-04-12 | Reshape weights from TOSA to Vela expected format | Per Åstrand
Reshape the weights for depthwise conv2d and set the depth_multiplier attribute on the operation.
Signed-off-by: Per Åstrand <per.astrand@arm.com>
Change-Id: I3b73988fa8c4e0cbe2430874cefe6d002885ec89
2024-04-12 | Fuse rescales into Add and Conv2d operations | Per Åstrand
Remove the upscale to int32 before and after the add operation. Re-enable the fusing of conv2d and rescale that was removed earlier.
Signed-off-by: Per Åstrand <per.astrand@arm.com>
Change-Id: I5e7d9bd99bb3925588b507824d8eb3e6642cc7f0
2024-04-05 | Fix various pre-commit errors | Johan Alfven
Change-Id: I8e584a036036f35a8883b2a4884cb2d54e675e39
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2024-04-05 | MLBEDSW-8885: MLCE: Fix assert in verify_subgraph_health | Johan Alfven
- An assert was triggered because the tensor consumer list did not contain the expected operators.
- The problem happened because a concat op was split into two avgpool ops that run in separate subgraphs with a cpu node in between. Since the avgpool ops share the same output tensor, the tensor consumer list was corrupted when the last subgraph was traversed.
- The fix is to ignore ops that do not belong to the subgraph's set of operators (the pass list) when updating the consumers.
Change-Id: I4d94b54c77001f6447bec31ec62daeebc9b104f9
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2024-04-04 | MLBEDSW-8886: Regression: Output diff on LSTM | Johan Alfven
- Fix a regression caused by too strict constraints on SplitSpliceRead, which caused an output diff for LSTM.
- As long as the SplitSpliceRead shape fits within the consumer ifm shape, it is ok to move the read.
Change-Id: Ia6f508f99638c3aedccc7fd9f31405527bb64f87
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2024-04-03 | MLBEDSW-8875: MLCE: Update criteria for when to move SplitSpliceRead to consumer | Johan Alfven
- When possible, a read slice from a split or stride is moved to the following op. The problem in this case was that the following op was a Maxpool op (from Softmax). The Maxpool op uses a different input shape than the original Softmax op, and this input shape was changed when the read slice was applied to the Maxpool op.
- The result is a faulty Maxpool op with an output diff.
- The fix is to prevent moving the slice read when the consumer input shape differs from the Split/Stride ofm shape.
Change-Id: I649d89c38645fa51c20c3602954e2b8af9372076
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
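The updated criterion amounts to a shape guard applied before the read is moved: only push the SplitSpliceRead into the consumer when the consumer's input shape matches the Split/Stride ofm shape. A minimal sketch of that guard, with hypothetical names rather than Vela's actual API:

```python
def can_move_split_read(split_ofm_shape, consumer_ifm_shape):
    # If the consumer (e.g. the MaxPool generated from Softmax) works on a
    # different input shape than the Split/Stride ofm, attaching the read
    # slice would silently change that shape, so the move is refused.
    return list(split_ofm_shape) == list(consumer_ifm_shape)


# Example: the MaxPool from Softmax uses a flattened shape, so the read stays put.
assert not can_move_split_read([1, 8, 8, 32], [1, 1, 64, 32])
```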
2024-04-03 | MLBEDSW-8873: MLCE: Update LUT index calculation | Johan Alfven
- A network containing several softmax operators caused an output diff.
- The problem was that the code that detects whether the LUT is already in internal SRAM calculated everything correctly except for which LUT index to use.
- The code should use the slot_size and not the LUT size when calculating the index, which fixes this problem.
- Updated unit tests.
Change-Id: I07686651a883ccbba7c173e7191eb21f9ff15bf5
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
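The LUT area in internal SRAM is managed as fixed-size slots, so the index of an already-cached LUT has to be derived from the slot size rather than from the size of that particular LUT. A hypothetical sketch of the corrected calculation (the constant and names are assumptions for illustration, not Vela's code):

```python
LUT_SLOT_SIZE = 256  # assumed fixed slot size in bytes, illustration only


def cached_lut_slot_index(lut_address, lut_mem_base):
    # Using the slot size gives the slot the LUT actually occupies;
    # dividing by the size of the LUT itself would point at the wrong
    # slot whenever the two sizes differ.
    return (lut_address - lut_mem_base) // LUT_SLOT_SIZE
```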
2024-04-02 | MLBEDSW-8672: Add ext_key tracking | William Isaksson
- Add ext_key tracking.
- Fix debug db cmd offsets being off by 4.
Change-Id: Ib109a15a0a2c44d08021c3b1bc3bcc067240ac5c
Signed-off-by: William Isaksson <william.isaksson@arm.com>
2024-03-12 | MLBEDSW-8725: Remove scales & biases from --verbose-weights | Alexander Bengtsson
Remove scales and biases from the encoded weights size. This aligns better with the original weights size, which only represents the weight tensor.
Change-Id: I5aabf61385d8fdf150764c45e04ba4388c6a63f0
Signed-off-by: Alexander Bengtsson <Alexander.Bengtsson@arm.com>
2024-03-07 | TOSA fixes | Oscar Andersson
- Fix TOSA imports
- Handle weights connected to Identity nodes
- Scaling info was missing in Fully Connected
- Disable rescaling fusing for conv-like ops
- Explicit scaling was missing for conv-like ops
- Handle Const->Identity->Transpose chains
- Handle Const->Identity->Reshape chains
Change-Id: I063af1f187b6b56105ccf5e8e8b2eb0d3a39dd3b
Signed-off-by: Oscar Andersson <oscar.andersson@arm.com>
2024-03-06 | MLBEDSW-8749: MLCE: Output diff on strided slice | Johan Alfven
- When possible, a read slice from a split or stride is moved to the following op. The problem in this case was that the following op was an elementwise op whose ifm needed to be broadcast, which is not supported.
- The result is a faulty elementwise op with an output diff.
- The fix is to prevent moving the slice read to the elementwise op if broadcasting is needed.
Change-Id: I89928c217510a822f91f051fd1ad6e34040c19de
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2024-02-28 | Fix stats writer exception when op has no tensors | Simon Hollis
Signed-off-by: Simon Hollis <simon.hollis@meta.com>
Change-Id: I5553802afaa3faaa2548aece7a3e0e1530021765
2024-02-27 | Modifications of rescale to enable basic form quantized network support | Rob Elliott
Minor fixes for TOSA 0.80.0 and 0.80.1 field naming following from the 0.2 to 0.8 conversion.
Change-Id: I2ac1b3ac1ec60cf765edf54030cd2338bf001289
Signed-off-by: Rob Elliott <Robert.Elliott@arm.com>
2024-02-19 | MLBEDSW-8704: Update release notes (3.11.0.rc2, 3.11.0) | Rickard Bolin
- Added release information
Change-Id: I6d6d80460658d444d52d0abb17a2cb42954f992c
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2024-02-09 | MLBEDSW-8674: int16 VectorProduct should use Natural rounding (3.11.0.rc1) | Johan Alfven
- Fixed an output diff for FullyConnected int16.
- The problem was that the wrong rounding mode was used.
- The reference uses Natural rounding for FullyConnected int16.
Change-Id: I209313b6f89fed01678a448a935d5f6904b41057
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
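Rounding modes only disagree on exact .5 ties, but that is enough to produce a bit-exact output diff against the reference. The sketch below is an illustration only, assuming Natural rounding here means round-half-up (ties toward positive infinity), which should be confirmed against the hardware documentation:

```python
import math


def round_natural(x: float) -> int:
    # Round half up: 2.5 -> 3, -2.5 -> -2 (assumed meaning of Natural rounding).
    return math.floor(x + 0.5)


def round_half_even(x: float) -> int:
    # Python's built-in round() uses banker's rounding: 2.5 -> 2, 3.5 -> 4.
    return round(x)


# The two modes only differ on ties, which is where the output diff came from.
assert round_natural(2.5) == 3 and round_half_even(2.5) == 2
```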
2024-02-06 | MLBEDSW-8620: Fix MirrorPad supported ops check | Rickard Bolin
Change-Id: I1458009f4b92c1a599efa3a63d6768148e55606d
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2024-01-30 | MLBEDSW-8491: Add support for Mirror pad | Rickard Bolin
Change-Id: I3c13118e14195a5fb8e522a38b205b75fb07b74b
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2024-01-30 | MLBEDSW-8569: MLCE: Reported number of CPU ops is wrong | Johan Alfven
- A Pack op is implemented by several AvgPool ops. Depending on the number of CPU ops and the graph topology, these AvgPool ops could end up in different nodes. One of these nodes had the Pack output referenced to it but the other node did not. As a result, the full graph was not traversed when calculating CPU ops.
- The compiled network works as intended, but the reported number of CPU ops was wrong.
- Added a new method that extracts the ops using the passes in the subgraphs, which fixes the problem.
Change-Id: Ie88ebd4669783559258ae763737a4c7f86c905f8
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2024-01-26 | MLBEDSW-8575: Tests fail on conv networks | Fredrik Svedberg
Fixed a problem where the compiler incorrectly called the mlw_codec to create an empty weight stream for the second weight core. Also added code to the mlw_codec to detect this as a value error rather than a memory error.
Change-Id: I463846cecb1178f8fbf04dc3e39bd6965cb8ddfc
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
2024-01-26 | vela: Remove unnecessary code from architecture allocator | Tim Hall
- Small improvement that reduces compilation time
Change-Id: I9e5cd58674f719f5dedeb30ea42787dc996a22d6
Signed-off-by: Tim Hall <tim.hall@arm.com>
2024-01-26 | Revert "MLBEDSW-8468: overlaps_ranges does not treat the live range end time as inclusive" | Tim Hall
This reverts commit dbe4df4ccddafac9cbc345a4a03a42c241248e88.
- The previous patch had a mostly negative effect on performance.
Change-Id: I4003d50b07de9c63d9001ceb0a3a0bc966c0b861
Signed-off-by: Tim Hall <tim.hall@arm.com>
2024-01-26 | vela: Remove dead code from register command stream | Tim Hall
- Removed the unused function get_block_config_for_npu_op()
Change-Id: If36e4fe65286c4e13e127473d20971a1b6eaa94b
Signed-off-by: Tim Hall <tim.hall@arm.com>
2024-01-24 | MLBEDSW-8568: Fix mlw_codec memory handling | Fredrik Svedberg
Added missing memory allocation checks to mlw_codec.
Change-Id: I20c04d5d9c934b9c715a2b2049705f853d90825a
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
2024-01-18 | CONV ops int16 tests failed after TensorFlow update | William Isaksson
Adds support for setting the accumulator type using the quantized_bias_type attribute.
Change-Id: Ibde1149143b510a1c650a5a037d3ab92d878d7cd
Signed-off-by: William Isaksson <william.isaksson@arm.com>
2024-01-16 | MLBEDSW-8468: overlaps_ranges does not treat the live range end time as inclusive | Tim Hall
- The issue is that live range start and end times are inclusive, but the function that calculates whether two ranges overlap treats them as exclusive.
- The fix is to change the comparison to be inclusive.
Change-Id: Iab5ceec7be2a5fdf0d6ecef81509a88c74e7108c
Signed-off-by: Tim Hall <tim.hall@arm.com>
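With inclusive start and end times, two live ranges conflict exactly when neither one ends before the other begins. A minimal sketch of the inclusive comparison described by the fix (names are illustrative, not Vela's actual function):

```python
def live_ranges_overlap(start_a, end_a, start_b, end_b):
    # Both endpoints belong to the live range, so a range ending at time t
    # still overlaps a range starting at that same time t.
    return start_a <= end_b and start_b <= end_a


# Exclusive logic would have treated these as disjoint; inclusively they overlap.
assert live_ranges_overlap(0, 5, 5, 9)
```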
2023-12-22 | MLBEDSW-8497: [MLCE] Avoid modifying FC with dynamic weights | Johan Alfven
- If an npu op is followed by a convolution op with dynamic weights, the optimized file ends up containing a duplicated tensor called _cpu.
- Another problem is that an empty bias tensor is added in the reader.
- The fix is to ignore these cpu ops in both the reader and the writer.
Change-Id: I476b4f6062e26cca4ba589df694a99ef79b0f6d4
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-12-20 | MLBEDSW-8157: Update to TensorFlow 2.15 | William Isaksson
Updates to TensorFlow 2.15. No StableHLO operators were added to Vela since these are subject to change and have almost no runtime support.
- FlatBuffers version was unchanged.
Change-Id: I9a506a2dcc2e0bc2498742e857bbb6d69b19ac1b
Signed-off-by: William Isaksson <william.isaksson@arm.com>
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2023-12-19 | MLBEDSW-8467: verbose-allocation memory usage is incorrect | Tim Hall
- The issue was that the peak memory usage was only evaluated at the start of the tensor's lifetime and not across its whole lifetime.
- The fix is to look for the maximum usage between start and end.
Change-Id: Iff4f390f3a017f1df0f8933796fa5282db7870db
Signed-off-by: Tim Hall <tim.hall@arm.com>
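The fix can be sketched as taking the maximum usage over every timestep the tensor is alive, rather than sampling only the timestep where its lifetime starts. Hypothetical names below; this is not Vela's actual data structure:

```python
def peak_usage_over_lifetime(usage_per_timestep, start, end):
    # usage_per_timestep[t] is the total memory in use at timestep t.
    # The tensor is alive for every timestep in the inclusive range
    # [start, end], so its reported peak must cover the whole window.
    return max(usage_per_timestep[start:end + 1])


# Example: usage peaks in the middle of the lifetime, not at its start.
assert peak_usage_over_lifetime([10, 40, 90, 30], start=0, end=3) == 90
```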
2023-11-21 | MLBEDSW-7871: Document new error types in API | William Isaksson
- Documents the legality requirements of CMD1 payloads.
- Fixes an omission in the command stream checks.
Signed-off-by: William Isaksson <william.isaksson@arm.com>
Change-Id: I9b33dedfa66650fa3100f61fd158a385818b4d52
2023-11-16 | MLBEDSW-8109: Update release notes (3.10.0.rc2, 3.10.0) | Tim Hall
- Added release information
- Modified SUPPORTED_OPS.md version info
Change-Id: I3ead55db45c84821c426645e488dfb765166d20f
Signed-off-by: Tim Hall <tim.hall@arm.com>
2023-11-16 | MLBEDSW-8240: Document reference comparison point | Tim Hall
- Updated TensorFlow Support section
Change-Id: Ic2551f44e7dfa996a5dcc8840d480b7985415a0a
Signed-off-by: Tim Hall <tim.hall@arm.com>
2023-11-16 | MLBEDSW-8280: Update PyPI homepage link | Tim Hall
- Changed the homepage link from cgit to gitiles
- Clarified that tensor alignment is in bytes
Change-Id: I9fd912c17d61f9add11493e031bbb620271c68eb
Signed-off-by: Tim Hall <tim.hall@arm.com>
2023-11-16 | Vela: Update from using deprecated pkg_resources | Tim Hall
- Changed deprecated method of getting package version info
- Updated pylint version to be Python 3.11 compatible
Change-Id: I68aae2155098c834653d404c78acf8df86eb88f8
Signed-off-by: Tim Hall <tim.hall@arm.com>
2023-11-15 | MLBEDSW-8336: MLCE: Update example for CPU Tensor Alignment | Johan Alfven
- Updated the example for --cpu-tensor-alignment in OPTIONS.md
Change-Id: Id0b74a9aac4dd4384a4b7c74eea743c29c3c8e5e
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-11-15 | MLBEDSW-8326: MLCE: Update constraint message for AVERAGE_POOL_2D | Johan Alfven
- Added the missing constraint message for stride height by adding constraint_stride_width_no_upper_limit to AVERAGE_POOL_2D
Change-Id: Ib716fb19e44cb8735b52270b557998d4cbf5cb1c
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-11-13 | MLBEDSW-8317: Add semantic checks for Transpose | Johan Alfven
- Added semantic checks for Transpose
- Added unit tests for the semantic checks
- Updated SUPPORTED_OPS.md
Change-Id: I3fcf13120f4b6811f8de27711996cdb9c19c9847
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-11-09 | MLBEDSW-8290: MLCE: Add TRANSPOSE support (3.10.0.rc1) | Johan Alfven
- Added a graph optimiser function that converts a TRANSPOSE op into an AvgPool op with the height and width strides swapped
- Added a TRANSPOSE supported op check
- Added unit tests for the TRANSPOSE supported op check
- Updated SUPPORTED_OPS.md
- Fixed a problem in pass packing when optimizing the pass list. This is an old problem, but it is now seen when moving TRANSPOSE from the cpu.
Change-Id: I0a0ef420b0fb8241090c2e2434622881105cde15
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-11-06 | MLBEDSW-8261: Fix regression on AvgPool | Johan Alfven
- When extended stride support was added for CONV_2D, a regression was introduced for AvgPool that caused an output diff for a particular test case.
- The reason was that the logic for forcing the zero point to zero when generating the cmd stream did not have a check for explicit padding.
- Updated the logic to also include a check for explicit padding.
Change-Id: Iee4893a83a05279e592fe230f4d66d9c9ddb3e05
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-11-02 | MLBEDSW-8117: Incorrect stride check for IFM/IFM2 and OFM | Björn Davidsson
The constraint check for the IFM/IFM2/OFM strides was coded according to an incorrect version of the specification. Changed the check to verify that the strides are a multiple of 16 bytes. Also changed the wording in the exception message to clarify whether it is a stride or a value violating the constraint.
The test case had two stride settings violating the constraint; after this change one of them still fails the check, so no change to the tests, except comments clarifying what is being tested.
Change-Id: I93815d8bb08303b5f747c947c0bbd461b12895e3
Signed-off-by: Björn Davidsson <bjoern.davidsson@arm.com>
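A hedged sketch of the corrected constraint, assuming it simply verifies that every IFM/IFM2/OFM stride is a multiple of 16 bytes and reports which stride violated it (the names and message wording are illustrative, not Vela's actual check):

```python
def check_feature_map_strides(strides_in_bytes, fm_name):
    # strides_in_bytes maps an axis name (e.g. "height") to its stride in bytes.
    for axis, stride in strides_in_bytes.items():
        if stride % 16 != 0:
            raise ValueError(
                f"{fm_name} {axis} stride of {stride} bytes is not a "
                "multiple of 16 bytes"
            )


check_feature_map_strides({"height": 1024, "width": 64}, "IFM")  # passes
```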
2023-10-31 | MLBEDSW-8219: Activation can not be fused with dma operation | Johan Alfven
- A reshape followed by an activation function was converted to a Memcpy with a fused activation. The problem is that Memcpy does not support activations, so no activation was executed.
- Added logic to prevent activation functions from being fused with the Memcpy.
Change-Id: Ibc7d985e5037146dd1f6cb2601407d0f8b865ac6
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-10-31 | MLBEDSW-8201: [MLCE] Extended stride support for CONV_2D | Johan Alfven
- Added support for stride_h > 3 when the ofm height is 1
- Added support for stride_w > 3 when the ofm width is 1
- Updated constraints
- Updated tests
- Updated SUPPORTED_OPS.md
Change-Id: I8f89909b05a0f052df5f03702966cee50da61cfc
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-10-30 | MLBEDSW-8156: Update max_outstanding_kernels to 2 | Rickard Bolin
Update max_outstanding_kernels to 2 and remove unit tests expecting values of 2 or 3.
Change-Id: Ib8a3a88d3378d3ce84427935c91c7a46f04bc9ab
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2023-10-11 | MLBEDSW-8111: Update to TensorFlow 2.14 | Rickard Bolin
- Update to TensorFlow 2.14 and the minimum required Python version to 3.9
- Update version pins on NumPy and FlatBuffers
- Add constraint to the Offset attribute of the StridedSlice operator
Change-Id: I8c7122def963202e5f47e92b62be607935ed05cf
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2023-10-10 | MLBEDSW-7853: Missing options for RANDOM_UNIFORM operator | Rickard Bolin
The operator mapping for the RANDOM_UNIFORM operator was missing the seed and seed2 options, which resulted in those options being removed when the operator was passed through Vela.
Change-Id: I8469c239ec1d20d775c31a52e4954baf159643f2
Signed-off-by: Rickard Bolin <rickard.bolin@arm.com>
2023-10-05 | MLBEDSW-8064: Update Markdown URLs | Johan Gunnarsson
Markdown's git repository has moved to a different location.
Change-Id: Iae401c1d283d937347cbce546836470647333201
Signed-off-by: Johan Gunnarsson <johan.gunnarsson@arm.com>
2023-10-03 | MLBEDSW-8102: Fix regression on Argmax int64 | Johan Alfven
- Fixed a regression where the DepthWiseConv used in argmax int64 had the wrong shape.
- The error was introduced when adding support for a new operator that changed the weight shape for the cast utility function. That change only worked because reorder_depthwise_weights was called later. Since argmax is converted after reorder_depthwise_weights, the cast operator in argmax got the wrong shape.
- The fix is to set the correct weight shape in the cast operator and then mark that the weights have already been transposed correctly.
Change-Id: I61f5694f078cfcaf0d46d43faead6eb7e0a23ade
Signed-off-by: Johan Alfven <johan.alfven@arm.com>
2023-09-18 | MLBEDSW-8052: Update FlatBuffers version pin in pyproject.toml | William Isaksson
Update to 23.1.21.
Change-Id: I2a9aaa7cbb725c2f417b87577a1f8d6ad4697d76
Signed-off-by: William Isaksson <william.isaksson@arm.com>