path: root/ethosu/vela/tensor.py
Age  Commit message  Author
2021-02-17  [MLBEDSW-3813] Fix LSTM operator pass through  (Fredrik Svedberg)
    Fixed pass through of LSTM operator.
    Change-Id: I23140c69ab6cdc83f6bb8129256b4cc6a7c5ffac
    Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>

2021-02-12  MLBEDSW-3902: Fixes invalid op when cloning LeakyReLU operator  (erik.andersson@arm.com)
    When running specific networks containing LeakyReLU operators, Vela would crash when cloning the OFM of a
    LeakyReLU operator. In this procedure a deepcopy tried to copy an OperatorInfo object, which caused an error.
    This was fixed by replacing the deepcopy with a shallow copy and then manually assigning new instances of the
    sensitive variables.
    Signed-off-by: erik.andersson@arm.com <erik.andersson@arm.com>
    Change-Id: I46917858896fbdf52245dac6c6d9c18bc7ecdd0d

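An illustrative sketch of the cloning pattern described above, assuming hypothetical field names (ops, consumer_list, quantization); this is not Vela's actual clone code:

```python
# Hypothetical sketch: clone a tensor with a shallow copy instead of a
# deepcopy, then give the clone fresh instances of the fields it must own.
import copy


def clone_tensor(tensor, suffix="_clone"):
    res = copy.copy(tensor)  # shallow copy, no recursive copy of OperatorInfo
    res.name = tensor.name + suffix
    res.ops = list(tensor.ops)                      # assumed field
    res.consumer_list = list(tensor.consumer_list)  # assumed field
    if tensor.quantization is not None:
        res.quantization = copy.copy(tensor.quantization)
    return res
```
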
2021-01-28  MLBEDSW-3772 Reshape removal  (Patrik Gustavsson)
    - Removed reshapes in the original graph
    - Removed the addition of reshapes to the optimized graph
    - Reshapes with different ifm/ofm quantisation will remain
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: I94862be53dac0d7434815e2aee5ca678228495f8

2021-01-28  [MLBEDSW-3891] Fix reading back in an ethos-u custom op  (Fredrik Svedberg)
    Fixed assertion when reading back in an ethos-u custom op.
    Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
    Change-Id: I275ec9187ffead1e96f2522ecbd658328fa4ef69

2020-12-21Revert "Revert "MLBEDSW-3645 4D class for op ifm/ofm shapes""patrik.gustavsson
This reverts commit df0a5905177f3a1b836076bc3f9f39b2e86f1794. Reason for revert: <INSERT REASONING HERE> Change-Id: I891c66fb29db9d25e942947e8d1c29a10610de51
2020-12-21Revert "MLBEDSW-3645 4D class for op ifm/ofm shapes"patrik.gustavsson
This reverts commit bf31d647dc5df47410ee577b12427ddf076d816b. Reason for revert: <INSERT REASONING HERE> Change-Id: I7b6c585b7658f94dbaa916c2b6bfe9fb463b8d37
2020-12-21  MLBEDSW-3645 4D class for op ifm/ofm shapes  (Patrik Gustavsson)
    Add 4D shape class for op ifm/ofm shapes.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: Ic0a98da9d2f9d085605e39a9ab5a26bad6e702a3

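A minimal sketch of what such a 4D shape class could look like; the fields and helper methods are assumptions for illustration, not the class added by this commit:

```python
# Sketch of a 4D (N, H, W, C) shape helper for op ifm/ofm shapes.
from collections import namedtuple


class Shape4D(namedtuple("Shape4D", "batch height width depth")):
    @classmethod
    def from_list(cls, shape, base=1):
        # Pad a 1D-3D shape on the left so it always becomes N, H, W, C.
        tmp = list(shape)
        while len(tmp) < 4:
            tmp.insert(0, base)
        return cls(*tmp)

    def as_list(self):
        return list(self)


# Example: [10, 20, 3] -> Shape4D(batch=1, height=10, width=20, depth=3)
print(Shape4D.from_list([10, 20, 3]))
```
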
2020-12-18  MLBEDSW-3654 Add/use op ifm/ofm shapes  (Patrik Gustavsson)
    Add ifm/ofm shapes to op. Changed to rely on these shapes.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: I571535a1dcadc2bdb04a3c727a8e1c49703b174d

2020-12-18  vela: Move special error cases  (Michael McGeagh)
    Due to an issue with potential cyclical imports, especially when running individual parts of vela standalone,
    for example with pytest, the specialised error functions are moved out of errors.py to their respective
    locations. The use of getattr over isinstance prevents the need to import the tensor/operator class, avoiding
    the cyclical import issue.
    Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
    Change-Id: If8cee4b1a2562660c6a47e1c7aeb5d7fd4dd1fca

2020-12-18  MLBEDSW-3487: Support '<' for tensors  (Louis Verhaard)
    Added __lt__ for Tensor to avoid errors when sorting tensors.
    Change-Id: I19bb591ef17aa0d4a3389da411bd8863c2218d55
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

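A small illustration of the pattern, using the tensor name as a hypothetical sort key:

```python
# Defining __lt__ lets lists of tensors be sorted without raising TypeError.
class Tensor:
    def __init__(self, name):
        self.name = name

    def __lt__(self, other):
        return self.name < other.name


tensors = [Tensor("b"), Tensor("a")]
tensors.sort()  # works because __lt__ is defined; would raise TypeError otherwise
```
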
2020-12-16  MLBEDSW-3465: Add memory settings into sys config  (Diqing Zhong)
    Signed-off-by: Diqing Zhong <diqing.zhong@arm.com>
    Change-Id: I4a5c53d0c5957595fc639b174b2b227ea043d409

2020-12-14  MLBEDSW-2066 Improve Exception messages  (Michael McGeagh)
    Minor refactoring to use f-strings. Improve the Error classes to correctly inherit the base class. Use
    existing exception classes instead of plain exceptions where it makes sense.
    Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
    Change-Id: I0941c04e91010da1db77299517a8e2d896371e77

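A hedged sketch of the inheritance pattern described above; VelaError and the message format are assumptions for illustration, not the exact classes in errors.py:

```python
# An error class that correctly calls the base-class constructor and builds
# its message with an f-string.
class VelaError(Exception):
    def __init__(self, msg):
        super().__init__(msg)   # correctly inherit base-class behaviour
        self.error_msg = msg


class UnsupportedFeatureError(VelaError):
    def __init__(self, feature):
        super().__init__(f"Input '{feature}' is not supported")
```
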
2020-12-10  MLBEDSW-3653: Added type hints to tensor.py  (Louis Verhaard)
    Change-Id: I1b35e039f43471cc0f61cb46ed4d5aff5469d11d
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

2020-12-08  MLBEDSW-2836 Change sets to tuples  (Michael McGeagh)
    Replaced conditional checks against sets with tuples. When uniqueness or complex set operations are not
    required, membership checks against tuples are quicker.
    Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
    Change-Id: Ie8732c8d46067244963936c53f0ec81adda50372

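A minimal illustration of the change, using made-up dtype strings rather than Vela's actual checks:

```python
# For a plain membership test, a tuple literal avoids building a set.
dtype = "int8"

# Before: set literal constructed for a simple membership test
if dtype in {"int8", "uint8", "int16"}:
    pass

# After: tuple literal, since neither uniqueness nor set operations are needed
if dtype in ("int8", "uint8", "int16"):
    pass
```
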
2020-11-20  MLBEDSW-3302: Reject per-channel scaling for unsupported ops  (Dwight Lidman)
    Vela only supports per-channel scaling for convolution ops. This commit adds a check that puts ops with
    per-channel scaling on the CPU. A caveat worth mentioning is that neither TensorFlow Lite nor TensorFlow Lite
    Micro supports per-channel scaling for the CPU-placed op; however, the problem is moved away from Vela. This
    commit also changes a small utility function in supported_operators.py used for docstring formatting.
    Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
    Change-Id: I9ed090592f1d05dd4566d3e54dba1ef405299383

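A hypothetical sketch of such a constraint check; the attribute names (op.weights, op.type, quantization.scale_f32) and the op-type tuple are assumptions for illustration, not the code from this commit:

```python
# Ops whose weights use per-channel scaling are only accepted for
# convolution-like op types; all others are left on the CPU.
per_channel_conv_ops = ("Conv2D", "DepthwiseConv2D", "TransposeConv2D")


def constraint_per_channel_scaling(op):
    # weights.quantization.scale_f32 is assumed to be a list of per-channel scales
    weights = op.weights
    scales = weights.quantization.scale_f32 if weights is not None else None
    uses_per_channel = scales is not None and len(scales) > 1
    if uses_per_channel and op.type not in per_channel_conv_ops:
        return False, "Per-channel scaling is only supported for convolution ops"
    return True, "Op passes the per-channel scaling constraint"
```
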
2020-11-20  vela: Improve the scaling is equal check  (Tim Hall)
    - Improved tensor and scaling query functions
    - Fixed bug in convert_batched_fc_to_conv
    Signed-off-by: Tim Hall <tim.hall@arm.com>
    Change-Id: Ibc3d14036540f27cf5e993beb2163d3e0f5e5933

2020-11-17  MLBEDSW-3493: bug fixes in mark_tensors  (Louis Verhaard)
    None inputs and unsupported tensor shapes caused asserts when marking tensor purpose/format.
    Change-Id: I4498b61576f529c1a594341cfbb6ba278c6e7ec5
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

2020-11-11  MLBEDSW-3222: Bias tensors in fast storage  (Andreas Nevalainen)
    For IFM-streamed cascades, bias tensors are read several times. Move these tensors to fast storage and add
    DMA commands.
    Change-Id: I630f6275986c1b5e3f126c925b11e22500fb1128
    Signed-off-by: Andreas Nevalainen <andreas.nevalainen@arm.com>

2020-11-10  MLBEDSW-2868 Refactor separation of scale + bias tensors  (Patrik Gustavsson)
    Changed so that there is an option to set whether a Tensor clone should be seen as unique or not.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: Ie51c1a5e84b535380d498b105aa18ccba1c8b27c

2020-11-03  MLBEDSW-2868 Separate scale+bias tensors  (Patrik Gustavsson)
    Separate scale+bias tensors by different equivalence_id.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: I674341950bc001ac6e4015206995f048a0dfee75

2020-10-30  Vela: Fix wrong bandwidth  (Diqing Zhong)
    - Copy bandwidth compression rate when weight tensor is cloned
    Signed-off-by: Diqing Zhong <diqing.zhong@arm.com>
    Change-Id: I41c4c1f7001e8dc12af35695f5f5d02815e28351

2020-10-22  MLBEDSW-3285: AttributeError Tensor has no attribute  (Tim Hall)
    - Fixed typo in Tensor.is_quantized()
    Signed-off-by: Tim Hall <tim.hall@arm.com>
    Change-Id: I36156a6aa5aaff01c4f271a6a8325636173225f3

2020-10-21  vela: Improve the scaling is equal check  (Tim Hall)
    - Fixed and documented both tensor and quant params scaling checks
    - Added quant params validity check and tensor quantisation check
    - Added valid tensor checks to some graph optimisation functions
    Signed-off-by: Tim Hall <tim.hall@arm.com>
    Change-Id: I8d6e8f03a603d28886dde511672c8399c85b794c

2020-10-08  MLBEDSW-3148: Refactor Operation  (Louis Verhaard)
    - op.type is now an enum instead of a string
    - Removed unused operator codes
    - Refactored some attributes like npu_block_type, fused_activation_function
    - Refactored operator index calculation
    - Refactored a number of operator sets
    Change-Id: I641f65ee375794b7aec42abc0664251ae37d78e8
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

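A minimal sketch of the string-to-enum refactor mentioned in the first bullet, with example members only (not the full operator set used by Vela):

```python
# Operator types as an enum instead of free-form strings.
import enum


class Op(enum.Enum):
    Conv2D = "Conv2D"
    DepthwiseConv2D = "DepthwiseConv2D"
    FullyConnected = "FullyConnected"
    Relu = "Relu"


# Comparisons become typo-safe identity checks instead of string comparisons:
op_type = Op.Conv2D
assert op_type == Op.Conv2D
```
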
2020-09-25  MLBEDSW-2337: Intermediate feature maps in fast storage  (Louis Verhaard)
    Attempts to use fast storage for feature maps used in between cascaded passes. This is only relevant for
    system configurations where feature maps are by default not placed in SRAM, but there is SRAM for fast
    storage.
    Change-Id: I207b7cf32cfcb5bea3e6b93c2da1161c4af5221d
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

2020-09-23  MLBEDSW-3070: Fix addressing of weights  (Louis Verhaard)
    Assign different equivalence ids to weights with same values but different compression, to ensure correct
    addressing.
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
    Change-Id: I13aabad71520e4f4a78fb2d6a81740bdd4d1256c

2020-09-17  MLBEDSW-2809: Redo the Tensor addressing  (Jacob Bohlin)
    Added a static class TensorAddressMap that stores all Tensor addresses based on their equivalence_id. Made
    the "address" field into a property whose getter and setter look up/set the tensor's address in
    TensorAddressMap. This makes the references to cpu_tensor/npu_tensor obsolete and they have been removed.
    Addition to the scheduler: avoid SRAM spilling if an op has consumers in other subgraphs. Minor rework in
    LUTState; it will now assign a unique equivalence_id to the SHRAM lut tensor to avoid issues with addressing.
    The equivalence checks in LUTState now compare the values of the LUT instead of the equivalence_id. Updated
    LUT unit tests accordingly.
    Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
    Change-Id: I41de5a8a4e5f07b77d6544d8d4034b754993e503

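A hedged sketch of this addressing scheme, assuming a map keyed by equivalence_id and memory type; the details differ from Vela's exact implementation:

```python
# Tensor.address is exposed as a property that reads from / writes to a
# class-level address map shared by all tensors with the same equivalence_id.
from collections import defaultdict


class TensorAddressMap:
    address_map = defaultdict(dict)  # equivalence_id -> {mem_type: address}

    @classmethod
    def get_address_for_tens(cls, tens_id, mem_type):
        return cls.address_map[tens_id].get(mem_type)

    @classmethod
    def set_address_for_tens(cls, tens_id, mem_type, address):
        cls.address_map[tens_id][mem_type] = address


class Tensor:
    def __init__(self, equivalence_id, mem_type):
        self.equivalence_id = equivalence_id
        self.mem_type = mem_type

    @property
    def address(self):
        return TensorAddressMap.get_address_for_tens(self.equivalence_id, self.mem_type)

    @address.setter
    def address(self, address):
        TensorAddressMap.set_address_for_tens(self.equivalence_id, self.mem_type, address)
```
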
2020-09-07  [MLBEDSW-2928] Add batching to softmax  (Fredrik Svedberg)
    Added batching to softmax by reshaping the input.
    Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>
    Change-Id: I0b516f9bf2410fb86372b229beba4a7280c498cc

2020-08-27  Small fix for Softmax regression  (Jacob Bohlin)
    Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
    Change-Id: I287c24725126c169afec779b921e43c3ab26f739

2020-08-26  MLBED-2822 Added CLI-opt for weight size est.  (Patrik Gustavsson)
    Added --weight-estimation-scaling, which enables additional scaling of the weight compression scale estimate.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: Idcda41257f44901d3a3f345341e07fb1ae8585a9

2020-08-26  MLBEDSW-2847: Fix for TransposeConv crash and u8 output diff  (Jacob Bohlin)
    Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
    Change-Id: I2cb3f6639e4bb8a984fa3647ee7b4678ed6f5890

2020-08-24  MLBEDSW-2688: LeakyRelu rewrite to LUT or MUL/MAX  (Louis Verhaard)
    Replaces LeakyRelu operations with a LUT activation function when possible, otherwise with a combination of
    multiplication/maximization.
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>
    Change-Id: I3d2eb2dba7145997c3cc711d0ef18ab355fbb416

2020-08-21  MLBEDSW-2679: Tensor quant comparison is incorrect  (Tim Hall)
    - Fixed bug with the supported operator check rejecting operators based upon an incorrect comparison of the
      tensor quantisations
    Signed-off-by: Tim Hall <tim.hall@arm.com>
    Change-Id: Ibd0eb50077465d2c515c6ee10394d9b43cdf730c

2020-08-21  MLBEDSW-2822 Account for NHCWB16 in scheduler est.  (Patrik Gustavsson)
    NHCWB16 is now accounted for in the SRAM estimates in the scheduler, for intermediate buffers in IFM
    streaming.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: Icda5e05dd3663935f528f1a06d36d9e1de123cc8

2020-08-18  MLBEDSW-2589: Skip weight compression for CPU ops  (Dwight Lidman)
    This commit fixes a bug where CPU ops were getting passed on as NPU ops in weight_compressor.py due to
    Operation.find_npu_op() incorrectly returning any op with an 'npu_block_type' attribute (which every op has)
    as an NPU op.
    Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
    Change-Id: I7a758f8d1b1237907816bc1be7b77aff765ae688

2020-08-17  MLBEDSW-2688: Improved LUT support  (Louis Verhaard)
    - Support for more than one 256-byte LUT in SHRAM
    - No DMA is performed for a LUT that is already located in SHRAM
    - Added MemArea.Shram, used for LUT, to avoid false address collision asserts during SRAM tensor allocation
    - Added read access to LUT in memory access calculation
    Change-Id: If4d1eded5ed029d253f4f5efb2d80495fc3eac99
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

2020-08-14  MLBEDSW-2570 Avoid usage of NHCWB16 for some cases  (Patrik Gustavsson)
    Avoid usage of NHCWB16 when Stack/Pack/Concat is performed in axis 3, and the "concat start" of each slice to
    be combined is not a multiple of 16.
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: If3f7b4a3424be3c86fc2dc48e8649ce4c4f49485

2020-08-12  vela: Remove redundant import, reuse existing func  (Michael McGeagh)
    We already import numeric_util, so there is no need to import it again for one function. Also replaced
    handcoded full-shape code with the function that already exists in numeric_util.
    Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
    Change-Id: Ib569409fbfd457a7b4b99006d51d9c43f25a1c2c

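For context, a sketch of what such a full-shape helper typically looks like; the name and signature are assumed for illustration (see numeric_util in the Vela sources for the real helper):

```python
# Pad a shape on the left with a fill value until it has the requested
# number of dimensions.
def full_shape(dim, shape, fill):
    """Returns a shape of at least dim dimensions."""
    return ([fill] * (dim - len(shape))) + list(shape)


assert full_shape(4, [10, 20, 3], 1) == [1, 10, 20, 3]
```
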
2020-08-12  MLBEDSW-2637 Refactor util funcs out of softmax.py  (Michael McGeagh)
    There were a number of "TensorUtil" functions defined in softmax.py. These have been moved to the Tensor and
    Operator classes respectively. Two of the functions were not simple tensor/op functions; these helpers have
    been moved to tensor.py for the simple fact that they return Tensors.
    Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
    Change-Id: I17d39c4e11f0837b7867b4a54da2e4a56383e095

2020-08-05  [MLBEDSW-2335] SoftMax int16  (Fredrik Svedberg)
    Added graph rewrite of Softmax for int16.
    Change-Id: Id7885af6056a23e8b8362fb61ae94283251eb398
    Signed-off-by: Fredrik Svedberg <fredrik.svedberg@arm.com>

2020-07-30  vela: Move common functionality  (Michael McGeagh)
    There is a repeating pattern of setting the 3 different shapes in a tensor to a single shape value. This adds
    a new function in the tensor class that does this for you. Changed existing instances of manually setting
    shape to use this new function.
    Signed-off-by: Michael McGeagh <michael.mcgeagh@arm.com>
    Change-Id: Ibc74e741ea47cec473e6be42cc102f721ec63b11

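A minimal sketch of such a convenience function; the three field names mirror Vela's Tensor but are shown here only as an example:

```python
# Set the tensor's three shape fields from a single shape value in one call.
class Tensor:
    def __init__(self):
        self.shape = []
        self.storage_shape = []
        self.bandwidth_shape = []

    def set_all_shapes(self, shape):
        self.shape = shape
        self.storage_shape = shape
        self.bandwidth_shape = shape
```
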
2020-07-14  MLBEDSW-1538: Output diff for elementwise min/max  (Dwight Lidman)
    This commit adds a quantization restriction check for supported operators, so that operators which do not
    support different quantization between their IFM (1/2) and OFM tensors are correctly placed on the CPU. The
    quantization of two tensors is compared using a new equality function implemented for the
    QuantizationParameters class.
    Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
    Change-Id: I70ff36b4ab4955f328d6e6e699f00dbc43c0404a

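A hedged sketch of what such an equality function can look like; the attribute names and method name follow common Vela conventions but are assumptions, and the class body is simplified:

```python
# Two quantization parameter sets compare equal when scale and zero point match.
class QuantizationParameters:
    def __init__(self, scale_f32=None, zero_point=None):
        self.scale_f32 = scale_f32
        self.zero_point = zero_point

    def is_scaling_equal(self, other):
        if other is None or not isinstance(other, QuantizationParameters):
            return False
        return self.scale_f32 == other.scale_f32 and self.zero_point == other.zero_point


a = QuantizationParameters(0.05, 128)
b = QuantizationParameters(0.05, 128)
assert a.is_scaling_equal(b)
```
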
2020-07-07  MLBEDSW-2548: Fix for Double Buffer size estimate  (Jacob Bohlin)
    This gives a worst-case estimate of the Double Buffer size in the Scheduler, so it will no longer be able to
    choose strategies that end up with a buffer that doesn't fit in SRAM.
    Signed-off-by: Jacob Bohlin <jacob.bohlin@arm.com>
    Change-Id: I763731f63c7672679f3b8cd6db65dad03b946ae5

2020-07-02  MLBEDSW-2340: Make the tensor address default None  (Charles Xu)
    Signed-off-by: Charles Xu <charles.xu@arm.com>
    Change-Id: I53d9d56acee57cff208dccb4822c1f1a461c416d

2020-06-25  vela: MLBEDSW-828 weight/scale stream interleaving  (Tim Hall)
    - Multicore weight and scale stream interleaving for multicore hardware architecture
    Change-Id: Ic82850463391c629d90d08c26cf0c48dd438286d
    Signed-off-by: Tim Hall <tim.hall@arm.com>

2020-06-25  MLBEDSW-2306 Added more supported mem-cfgs  (Patrik Gustavsson)
    Additional supported memory configurations:
    - Permanent_storage = DRAM
    - Tensor arena either in DRAM or SRAM
    Signed-off-by: Patrik Gustavsson <patrik.gustavsson@arm.com>
    Change-Id: I20beb7151e306bfdba540e7c0b2a7b478b4d94e1

2020-06-18  MLBEDSW-2528: MLCE-219: Custom operator pass through  (Tim Hall)
    - Fixed custom operator pass through
    - Added error printing functions for operators and tensors
    - Minor cleanup of custom exception handling
    Signed-off-by: Tim Hall <tim.hall@arm.com>
    Change-Id: Idf295df1e4c544381dc480244d880c32fb285e38

2020-06-18  MLBEDSW-2420: Improved support for dilated convolution  (Louis Verhaard)
    - Dilation added to SET_KERNEL_STRIDE instruction
    - Kernel height/width adjusted for dilation
    - Updated padding calculation
    - Updated weight compression
    Change-Id: I0c8190223e223b039a305aba0f37896ae1de2b80
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

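For reference, the standard effective-kernel-size formula that the kernel height/width adjustment above is based on; a background sketch, not taken from this commit:

```python
# Effective size of a dilated kernel along one dimension.
def dilated_kernel_size(kernel_size, dilation):
    return (kernel_size - 1) * dilation + 1


# Example: a 3x3 kernel with dilation 2 covers a 5x5 area.
assert dilated_kernel_size(3, 2) == 5
```
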
2020-06-18  MLBEDSW-1941: Bug fix shared weights  (Louis Verhaard)
    If the same weight tensor was used with different block configs, errors would occur. Fixed by always cloning
    weight tensors, using a global weight compression cache and modifying the linear allocator to detect multiple
    usage of the same weight compression.
    Change-Id: I91ca59176e1c59c66e0ac7a4227f2b5f0b47053f
    Signed-off-by: Louis Verhaard <louis.verhaard@arm.com>

2020-06-18  MLBEDSW-1971: Verify ifm block size calculation against specification  (Dwight Lidman)
    This commit ensures the IFM block size calculation in architecture_features.py matches the specification by
    correctly setting the ifm upscaling factor based on the upscaling mode. This requires adding an attribute to
    the Tensor object which stores the upscaling mode for that specific tensor and making sure that information
    is correctly carried over to shared_buffer_allocation.py.
    Signed-off-by: Dwight Lidman <dwight.lidman@arm.com>
    Change-Id: I4ab56086f4c694d3bf759bbad30cdb969b4a26db
