path: root/ethosu/vela/mark_tensors.py
2021-09-15  MLBEDSW-5102 Update removal of memory only operators  (Jonas Ohlsson)
Memory only operators such as Reshape, Squeeze and ExpandDims are removed in the graph optimiser step.
- Added semantic check that memory only operators have the same quantisation parameters on ifm/ofm.
- Added support for the ExpandDims operator.
- Added and cleaned up related unit tests.
- Removed TOSA from the generated SUPPORTED_OPS.md documentation.
Signed-off-by: Jonas Ohlsson <jonas.ohlsson@arm.com>
Change-Id: If848d8afc58c18806e10997ed94e4dae83f30879
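A minimal sketch of what such a quantisation check could look like; the operator set and the ifm/ofm/quantization attribute names follow the wording of the commit message but are assumptions here, not Vela's actual implementation:

```python
# Hypothetical semantic check: a memory-only operator must not change
# quantisation, since it only reinterprets the same data.
MEMORY_ONLY_OPS = {"Reshape", "Squeeze", "ExpandDims"}


def memory_only_op_has_matching_quantization(op):
    """Return True if the op's ifm and ofm share scale and zero point."""
    if op.type not in MEMORY_ONLY_OPS:
        return True  # the constraint only applies to memory-only operators
    ifm_q, ofm_q = op.ifm.quantization, op.ofm.quantization
    if ifm_q is None or ofm_q is None:
        return ifm_q is ofm_q
    return ifm_q.scale_f32 == ofm_q.scale_f32 and ifm_q.zero_point == ofm_q.zero_point
```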
2021-04-08  MLBEDSW-4334 Non-linear format decision in graph opt.  (Patrik Gustavsson)
The check for whether a non-linear tensor format can be used has been refactored.
- Flag avoid_NHCWB16 replaced with needs_linear_format.
- Restriction checks consolidated into one function in the graph optimiser.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: Iec5c7996a1a6039cad052197f1ae56f7c0290440
2021-02-25  MLBEDSW-4064: Update copyright headers  (erik.andersson@arm.com)
All files which have been updated in 2021 and contain a copyright header have had their headers updated.
Signed-off-by: erik.andersson@arm.com <erik.andersson@arm.com>
Change-Id: Ia682111a719d16e690433398ccfb69c7e93c1cd1
2021-01-28  [MLBEDSW-3891] Fix reading back in an ethos-u custom op  (Fredrik Svedberg)
Fixed assertion when reading back in an ethos-u custom op.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: I275ec9187ffead1e96f2522ecbd658328fa4ef69
2020-12-18  vela: Move special error cases  (Michael McGeagh)
Due to an issue with potential cyclical imports, especially when running individual parts of vela standalone (for example with pytest), the specialised error functions are moved out of errors.py to their respective locations.
The use of getattr over isinstance avoids importing the tensor/operator class, which would cause the cyclical import issue.
Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
Change-Id: If8cee4b1a2562660c6a47e1c7aeb5d7fd4dd1fca
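The getattr-over-isinstance pattern can be illustrated with a small, self-contained sketch; the attribute names used to tell operators and tensors apart are illustrative assumptions, not the actual Vela attributes:

```python
# In an errors-style module, importing Tensor/Operation just for isinstance()
# would import this module back and create a cycle. Duck-typing with getattr
# avoids the import entirely.
def describe_offender(obj):
    if getattr(obj, "op_index", None) is not None:      # quacks like an operator
        return f"Operator '{obj.name}'"
    if getattr(obj, "quantization", None) is not None:  # quacks like a tensor
        return f"Tensor '{obj.name}'"
    return repr(obj)


def operator_error(obj, msg):
    raise ValueError(f"{describe_offender(obj)}: {msg}")
```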
2020-11-17  MLBEDSW-3493: bug fixes in mark_tensors  (Louis Verhaard)
None inputs and unsupported tensor shapes caused asserts when marking tensor purpose/format.
Change-Id: I4498b61576f529c1a594341cfbb6ba278c6e7ec5
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
2020-10-20  MLBEDSW-3268: Refactor mark_tensors  (Louis Verhaard)
- Refactored mark_tensor_purpose
- Initial weight compression is now always done in insert_dma
- Removed mark_tensor_format
Change-Id: Ic719b9bcd1d27e1390d7b9ce8cd21795139ec814
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
2020-10-08  MLBEDSW-3148: Refactor Operation  (Louis Verhaard)
- op.type is now an enum instead of a string
- Removed unused operator codes
- Refactored some attributes like npu_block_type, fused_activation_function
- Refactored operator index calculation
- Refactored a number of operator sets
Change-Id: I641f65ee375794b7aec42abc0664251ae37d78e8
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
2020-10-02  MLBEDSW-3060 Adjust check if weights fit in sram  (Patrik Gustavsson)
When deciding if weights fit in SRAM: a compression of the weights has been added for the case where a weight compression test limit makes it impossible to fit the weights in a double buffer in SRAM. The worst compression ratio from that compression is used to decide if the weights can fit in SRAM.
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I9458769866b3f9fc15659185aae09658ed10fb38
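As a rough sketch of the arithmetic described (the function and parameter names, and the exact fit criterion, are assumptions for illustration rather than Vela's actual scheduler code):

```python
def weights_fit_sram(raw_weight_bytes, worst_compression_ratio, sram_available_bytes):
    """Worst-case check that double-buffered compressed weights fit in SRAM.

    worst_compression_ratio is compressed_size / raw_size for the worst stream
    observed when the weights were compressed.
    """
    worst_case_compressed = raw_weight_bytes * worst_compression_ratio
    double_buffer_bytes = 2 * worst_case_compressed  # two buffers for streaming
    return double_buffer_bytes <= sram_available_bytes
```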
2020-09-30  [MLBEDSW-2802] Fix 5D tensor crash  (Fredrik Svedberg)
Fixed crash in networks with 5D tensors.
Fixed crash for (int32) tensors without quantization.
Added validity checks for concatenation.
Moved unfusing of activation function from tflite_reader to graph_optimiser.
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
Change-Id: Ib9ba8891dc95ef5491e15d0feedef44331a26393
2020-09-22  MLBEDSW-2813: Handle non-const weights and check shapes  (Andreas Nevalainen)
- Added check for non-constant weights in supported operators
- Added check of ifm & ifm2 shapes
- Handle None tensors for CPU operators
- Handle missing attributes for Cast operator
Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>
Change-Id: I2f16d3d44d0c6da5237550b39273cdb9cc3c7607
2020-08-24  MLBEDSW-2688: LeakyRelu rewrite to LUT or MUL/MAX  (Louis Verhaard)
Replaces LeakyRelu operations with a LUT activation function when possible, otherwise with a combination of multiplication/maximisation.
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
Change-Id: I3d2eb2dba7145997c3cc711d0ef18ab355fbb416
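The MUL/MAX fallback rests on the identity LeakyRelu(x) = max(x, alpha * x) for 0 <= alpha <= 1; a minimal numerical sketch of that identity (not Vela's graph-rewrite code):

```python
import numpy as np


def leaky_relu_via_mul_max(x, alpha):
    """LeakyRelu(x) == max(x, alpha * x) whenever 0 <= alpha <= 1."""
    return np.maximum(x, alpha * x)  # one MUL followed by one MAX


x = np.array([-2.0, -0.5, 0.0, 1.5])
assert np.allclose(leaky_relu_via_mul_max(x, 0.1), np.where(x >= 0, x, 0.1 * x))
```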
2020-08-18  MLBEDSW-2589: Skip weight compression for CPU ops  (Dwight Lidman)
This commit fixes a bug where CPU ops were getting passed on as NPU ops in weight_compressor.py due to Operation.find_npu_op() incorrectly returning any op with an 'npu_block_type' attribute (which every op has) as an NPU op.
Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
Change-Id: I7a758f8d1b1237907816bc1be7b77aff765ae688
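The bug pattern described, treating "has the attribute" as "is an NPU op", can be sketched as follows; the attribute value used to mark a CPU op is an assumption for illustration, not the exact Vela API:

```python
# Buggy idea: every op defines npu_block_type, so presence alone matches CPU ops too.
def find_npu_op_buggy(op):
    return op if hasattr(op, "npu_block_type") else None


# Fixed idea: look at the attribute's value; a "Default" block type marks a CPU op here.
def find_npu_op_fixed(op):
    if getattr(op, "npu_block_type", "Default") != "Default":
        return op
    return None
```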
2020-08-05  [MLBEDSW-2335] SoftMax int16  (Fredrik Svedberg)
Added graph rewrite of Softmax for int16.
Change-Id: Id7885af6056a23e8b8362fb61ae94283251eb398
Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
2020-07-07  MLBEDSW-2548: Fix for Double Buffer size estimate  (Jacob Bohlin)
This gives a worst-case estimate of the Double Buffer size in the Scheduler, so it will no longer be able to choose strategies that end up with a buffer that doesn't fit in SRAM.
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
Change-Id: I763731f63c7672679f3b8cd6db65dad03b946ae5
2020-06-25  MLBEDSW-2306 Added more supported mem-cfgs  (Patrik Gustavsson)
Additional supported memory configurations:
- Permanent_storage = DRAM
- Tensor arena either in DRAM or SRAM
Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
Change-Id: I20beb7151e306bfdba540e7c0b2a7b478b4d94e1
2020-06-18  MLBEDSW-2528: MLCE-219: Custom operator pass through  (Tim Hall)
- Fixed custom operator pass through
- Added error printing functions for operators and tensors
- Minor cleanup of custom exception handling
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Idf295df1e4c544381dc480244d880c32fb285e38
2020-06-18  Code clean-up using black and flake8  (Tim Hall)
- No functional change
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: I5ab1198b9d092cd041fa9b85b2dee9900d299bfc
2020-06-18  vela: Fix tensor purpose for some CPU only ops  (Tim Hall)
- Add support for marking the tensor purpose of CPU only ops such as LESS, which mark their input based upon their output
Signed-off-by: Tim Hall <tim.hall@arm.com>
Change-Id: Ia7898089f0b18ccd4f183e2ef961a67f4d169e4c
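A small sketch of the "input purpose follows output" idea for such CPU-only ops; the op list and the purpose value are illustrative assumptions:

```python
# Hypothetical: for these CPU-only comparison ops, the inputs take on whatever
# purpose the output tensor has already been marked with (e.g. "FeatureMap").
CPU_OPS_INPUTS_FOLLOW_OUTPUT = {"Less", "LessEqual", "Greater", "GreaterEqual"}


def mark_input_purpose_from_output(op):
    if op.type in CPU_OPS_INPUTS_FOLLOW_OUTPUT:
        output_purpose = op.outputs[0].purpose
        for tens in op.inputs:
            tens.purpose = output_purpose
```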
2020-06-18  MLBEDSW-1716: Transpose Convolution support  (Jacob Bohlin)
Change-Id: Ie6d8d6de9f3447f19ba06aafa9fa480fc96a973b
Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
2020-06-18  MLBEDSW-2420: Improved support for dilated convolution  (Louis Verhaard)
- Dilation added to SET_KERNEL_STRIDE instruction
- Kernel height/width adjusted for dilation
- Updated padding calculation
- Updated weight compression
Change-Id: I0c8190223e223b039a305aba0f37896ae1de2b80
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
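The kernel adjustment referred to is the standard dilation relation: the effective kernel extent along an axis is (k - 1) * d + 1, and SAME-style padding is then derived from that extent. A short sketch under those assumptions (not Vela's actual code):

```python
def dilated_kernel_extent(kernel_size, dilation):
    """Effective spatial extent of a dilated kernel along one axis."""
    return (kernel_size - 1) * dilation + 1


def same_padding_total(input_size, stride, kernel_size, dilation):
    """Total SAME padding along one axis, using the dilated kernel extent."""
    k_eff = dilated_kernel_extent(kernel_size, dilation)
    out_size = (input_size + stride - 1) // stride
    return max((out_size - 1) * stride + k_eff - input_size, 0)


assert dilated_kernel_extent(3, 2) == 5  # a 3-tap kernel with dilation 2 spans 5 inputs
```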
2020-06-18  MLBEDSW-1941: Bug fix shared weights  (Louis Verhaard)
If the same weight tensor was used with different block configs, errors would occur. Fixed by always cloning weight tensors, using a global weight compression cache, and modifying the linear allocator to detect multiple usages of the same weight compression.
Change-Id: I91ca59176e1c59c66e0ac7a4227f2b5f0b47053f
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
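The global cache described can be pictured as a mapping keyed on both the weight data and the block config, so ops that share a weight tensor but use different block configs get distinct compressed encodings while identical requests are reused; all names here are illustrative assumptions:

```python
# Hypothetical global cache: (digest of weight bytes, block config) -> encoded stream.
_weight_compression_cache = {}


def compress_weights_cached(weight_bytes, block_config, encode_fn):
    key = (hash(weight_bytes), tuple(block_config))
    if key not in _weight_compression_cache:
        _weight_compression_cache[key] = encode_fn(weight_bytes, block_config)
    return _weight_compression_cache[key]
```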
2020-06-18  MLBEDSW-2002: Fix Reshape's output MemArea  (Louis Verhaard)
A Reshape operator's input and output tensors point to the same data and thus have the same mem area.
Change-Id: Ice830f83da78103d54b5f72f5bfc1e6ffa8636c3
Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
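In other words, the output tensor should simply inherit its input's memory area because both are views of the same buffer; a minimal sketch, with attribute names assumed for illustration:

```python
def fixup_reshape_mem_area(op):
    # The Reshape ofm aliases the ifm's data, so it must live in the same memory area.
    if op.type == "Reshape":
        op.ofm.mem_area = op.ifm.mem_area
```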
2020-06-18  Add reorder-python-import pre-commit hook  (Diego Russo)
Also updated README.md
Change-Id: I118309c61f4d00e8508d6b888c606995490fba39
Signed-off-by: Diego Russo <diego.russo@arm.com>
2020-06-18  Add pre-commit support for sanity checks  (Diego Russo)
Use the pre-commit framework [2] to run black and flake8 before the commit. black and flake8 are managed by the pre-commit framework and can be run manually by the user with the `pre-commit run` command.
Fixed the code base with the help of black and flake8.
Fixed import statements according to the PEP8 guidelines [1].
Both tools have the following settings (specified in the pre-commit configuration file):
* line length: 120 characters
* directories to exclude: ethosu/vela/tflite/ and ethosu/vela/ethos_u55_regs
Updated README.md on how to install pre-commit and how to run the sanity checks.
Pipenv files have been updated, including new dependencies for pre-commit.
[1]: https://www.python.org/dev/peps/pep-0008/#imports
[2]: https://github.com/pre-commit/pre-commit
Change-Id: I304d9fffdf019d390ffa396a529c8a7c2437f63d
Signed-off-by: Diego Russo <diego.russo@arm.com>
2020-04-29  Add Vela codebase (tag: 0.1.0)  (Tim Hall)
- Added modules ethosu.vela and ethosu.mlw_codec.
- Added README and various configuration files.
Change-Id: I3690f8c8f5966306ecddaeb2793c30ca9c6e2eee