Age | Commit message | Author |
|
Change-Id: I827b26239043a9e90d26c2583122648d2a45303a
Reviewed-on: https://review.mlplatform.org/317
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Change-Id: Id0d4a07af24e2331161996083b0c1bab072bd405
Reviewed-on: https://review.mlplatform.org/322
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Change-Id: I9e6e43a5839d04c2e4b4552c05446efb0a5074cf
Reviewed-on: https://review.mlplatform.org/232
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
|
|
Change-Id: Ic6a1f55f14d53896725afe426bc2e2acb1546589
Reviewed-on: https://review.mlplatform.org/343
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
|
|
Change-Id: I13f6e4c600f39355f69e015409bf30dafdc5e3aa
Reviewed-on: https://review.mlplatform.org/332
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
|
|
Change-Id: Ie0d5387c0546045e14e62c84c03894a9b0339585
Reviewed-on: https://review.mlplatform.org/335
Reviewed-by: Pablo Marquez <pablo.tello@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Change-Id: I3de6bb33746d52f8d8c337ab7776eccee8c205fb
Reviewed-on: https://review.mlplatform.org/328
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
Reviewed-by: Pablo Marquez <pablo.tello@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
NELSTM, NEFullyConnectedLayer (for quantised types only), NERNN and NEWinogradLayer were all defaulting to on-the-fly reshaping of B.
Fixed a bug in GemmInterleaved: it was ignoring the 'multis' dimension of the tensor when allocating the memory for the reshaped B.
Change-Id: I7b30f7f57fc65d6a03cccde0bf5515a811f17b54
Reviewed-on: https://review.mlplatform.org/323
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
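Roughly what the allocation fix amounts to (hypothetical names; a sketch, not the library's code): the reshaped-B buffer holds one reshaped copy per 'multi', so the multis count has to appear in the size computation.

    #include <cstddef>

    // Size of the reshaped-B buffer; dropping the 'multis' factor, as the bug
    // effectively did, under-allocates whenever multis > 1.
    size_t reshaped_b_size(size_t K, size_t N, size_t block_k, size_t block_n, size_t multis)
    {
        const size_t k_ceil = ((K + block_k - 1) / block_k) * block_k; // K rounded up to a block
        const size_t n_ceil = ((N + block_n - 1) / block_n) * block_n; // N rounded up to a block
        return k_ceil * n_ceil * multis;                               // one reshaped copy per multi
    }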
|
|
Change-Id: Ice653e48211053bd3cd20a693bd76de6b4efc370
Reviewed-on: https://review.mlplatform.org/270
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
|
|
Change-Id: I7eae2e55cc0b0b7bbebb7617299daaca6f75f40c
Reviewed-on: https://review.mlplatform.org/292
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Georgios Pinitas <georgios.pinitas@arm.com>
|
|
INESimpleFunctionNoBorder
Change-Id: Ia9fdc75b23e9a6208058f8406fb7b5fcd917de2c
Reviewed-on: https://review.mlplatform.org/311
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Michele Di Giorgio <michele.digiorgio@arm.com>
|
|
Fixed a segfault by removing the bias iterator from the specialised function, which assumes no bias is provided.
Change-Id: Ic897435ee9427d4359e8ab989a03e951da0d7ce0
Reviewed-on: https://review.mlplatform.org/314
Reviewed-by: Anthony Barbier <Anthony.barbier@arm.com>
Tested-by: Arm Jenkins <bsgcomp@arm.com>
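A minimal sketch of the idea (not the actual kernel): the specialised path assumes no bias, so it must never construct or read a bias iterator; guarding on the pointer makes the assumption explicit.

    #include <cstddef>

    void run(const float *in, const float *bias, float *out, size_t n)
    {
        const bool has_bias = (bias != nullptr);
        for(size_t i = 0; i < n; ++i)
        {
            // bias is read only when it was actually provided
            out[i] = in[i] + (has_bias ? bias[i] : 0.0f);
        }
    }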
|
|
Wrong check in the function
Change-Id: I38e4d5f01039c8352c0e83f0711455af85f9c3fe
|
|
Adds support for Equal, NotEqual, Less, LessEqual, Greater, GreaterEqual
Change-Id: If0cdf4aae7f95c94709b195eee485f6663f45909
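A scalar sketch of what those six comparisons compute, for illustration; the U8 output convention of 255 for true and 0 for false is an assumption here, and this is not the library's kernel code.

    #include <cstddef>
    #include <cstdint>

    enum class ComparisonOperation { Equal, NotEqual, Less, LessEqual, Greater, GreaterEqual };

    // Element-wise comparison of two equally sized buffers into a U8 mask.
    template <typename T>
    void compare(const T *a, const T *b, uint8_t *out, size_t n, ComparisonOperation op)
    {
        for(size_t i = 0; i < n; ++i)
        {
            bool r = false;
            switch(op)
            {
                case ComparisonOperation::Equal:        r = a[i] == b[i]; break;
                case ComparisonOperation::NotEqual:     r = a[i] != b[i]; break;
                case ComparisonOperation::Less:         r = a[i] <  b[i]; break;
                case ComparisonOperation::LessEqual:    r = a[i] <= b[i]; break;
                case ComparisonOperation::Greater:      r = a[i] >  b[i]; break;
                case ComparisonOperation::GreaterEqual: r = a[i] >= b[i]; break;
            }
            out[i] = r ? 255 : 0; // assumed true/false encoding
        }
    }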
|
|
Change-Id: I2a18f0acea382960a8bc71a8f56928a5998f0dd6
|
|
Change-Id: Id74cc7ba8e5cabee6acd3798d4779f88b1f00a9b
|
|
Change-Id: I49b2e8b4200c9ed654736d9451e4ab9c073b4b10
|
|
Change-Id: I29e35024e29781a6b943b568abec9c73649215e6
|
|
Change-Id: I6ee2c0b670727fc808fa636c53ddfaec3a0036c9
|
|
Change-Id: I49f1d865f5e7562f1d80db849353a89ef77e6a9e
|
|
The output of PriorBox should be independent of the input data layout and should always be in NCHW format.
Change-Id: Ie80cd4e51c78945b158c0db1af1923bdf8d7ea7b
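Illustration of what layout independence means here (a generic sketch, not the kernel): the output's linear index is always computed from NCHW coordinates, whatever layout the input tensors use.

    #include <cstddef>

    // Linear offset of element (n, c, h, w) in an NCHW-laid-out buffer.
    size_t nchw_index(size_t n, size_t c, size_t h, size_t w,
                      size_t C, size_t H, size_t W)
    {
        return ((n * C + c) * H + h) * W + w;
    }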
|
|
Change-Id: I95fdf5bd85becfe081f6ae587284f3b294681308
|
|
Fixes for:
- ReduceMean: the reduction along the X axis for FP16 with 8 elements was performed only up to a certain point. The fix now takes into account the number of elements in the vector and performs as many reductions as necessary.
- YOLOLayer: the activation for FP16 has to be performed in 32 bits until the FP16 approximation is fixed.
Change-Id: I75373f4edd37de476e6fe1a56de3ef386b65c619
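A scalar stand-in for the ReduceMean part of the fix (the real kernel is vectorised FP16; the width of 8 comes from the commit): the X loop must consume as many full vectors as fit, then the remainder, rather than stopping after the first vector.

    #include <cstddef>

    float reduce_mean_x(const float *row, size_t width)
    {
        constexpr size_t vec = 8; // elements per vector for FP16
        float acc = 0.0f;
        size_t x  = 0;
        for(; x + vec <= width; x += vec)   // as many full-vector reductions as needed
        {
            for(size_t l = 0; l < vec; ++l) // stands in for one vector accumulate
                acc += row[x + l];
        }
        for(; x < width; ++x)               // scalar tail for the leftover elements
            acc += row[x];
        return acc / static_cast<float>(width);
    }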
|
|
- Simplifies the import-memory interface
- Replaces the use of void** handles with appropriate interfaces
Change-Id: I5918c855c11f46352058864623336b352162a4b7
|
|
- Adds NHWC support for FP16
Change-Id: I61addf8efecf511ac8cd5f8aa9afc3e09c476aaf
|
|
FP mixed-precision support added to the GEMM kernel used for FP16 Winograd convolution on Midgard GPUs
Change-Id: I1619beb025fc484a1ac9d3e528d785edabbc7ee6
|
|
Change-Id: I770b044b67d93510ef65e556905135b34be7ea0a
|
|
Change-Id: I6e7dee8bd615a5eff01c523f208a218574ee5eab
|
|
kernels
Change-Id: I98183f95814442b6f3dbb67a1bdae99df05b9b01
|
|
- Fixes a bug where the boxes were not scaled before being transformed
- Adds the correct_transform_coords option to BoundingBoxTransformInfo
Change-Id: I40281254bcf87e7c8583c119e99562414fe59822
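In sketch form (field names and the exact +1 convention are assumptions, not the library's code): the boxes are rescaled before any deltas are applied, and correct_transform_coords toggles the width/height convention.

    struct Box { float x1, y1, x2, y2; };

    // The missing step from the bug: bring the boxes to the transform's scale first.
    Box scale_box(Box b, float scale)
    {
        return { b.x1 * scale, b.y1 * scale, b.x2 * scale, b.y2 * scale };
    }

    // correct_transform_coords selects between the two common width conventions.
    float box_width(const Box &b, bool correct_transform_coords)
    {
        return b.x2 - b.x1 + (correct_transform_coords ? 1.0f : 0.0f);
    }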
|
|
BoxWithNMSLimitKernel
COMPMID-1792: Accuracy issue in CLGenerateProposals
This patch does the following:
- Applies some fixes to the GenerateProposals function and tests
- Adapts BoxWithNMSLimitKernel to accept only U32 tensors as keeps_size
- Updates 3rdparty
- Adds a small tolerance for a GenerateProposals test
Change-Id: Ia8ec1cdfe941fe05003645e86deb9ea6a6044d74
|
|
Introduced F32 accumulation for the F16 Winograd GEMM and output transform.
WinogradConvolution will be available for F16 only if the fast-math flag is enabled.
Change-Id: I215593c205236a0f9669218437bb40b184ec6a4f
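The idea behind the mixed-precision path, as a scalar sketch (using the ARM-specific __fp16 type; this is not the Winograd kernel itself): products of F16 values are summed in an F32 accumulator to avoid the precision loss of a pure-F16 accumulation.

    #include <cstddef>

    // F16 inputs, F32 accumulation; the result may be narrowed back to F16 afterwards.
    float dot_f16_acc_f32(const __fp16 *a, const __fp16 *b, size_t n)
    {
        float acc = 0.0f; // F32 accumulator
        for(size_t i = 0; i < n; ++i)
            acc += static_cast<float>(a[i]) * static_cast<float>(b[i]);
        return acc;
    }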
|
|
With this patch we are able to dispatch a single GPU job also in the case of a batched flatten.
Change-Id: I755e7af29d44b24f67fa04bad3c9b7646e8deefc
|
|
Change-Id: I2c2250669829e399fdc2363f729dc5e68d8aac17
|
|
Change-Id: I99e1c3939cfea4b9cb0ddfa313706f31b213ca89
|
|
num_elems_processed was passed as a scale instead of a step
Change-Id: I8c6d58fe4432f9f6beb31c0a1e02204c96775d98
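The distinction in sketch form: used as a step, num_elems_processed advances the coordinate once per iteration; used as a scale (a multiplier on the coordinate) it makes the kernel visit the wrong elements.

    #include <cstddef>

    // Correct: one block of num_elems_processed elements per iteration.
    void iterate(size_t end, size_t num_elems_processed)
    {
        for(size_t x = 0; x < end; x += num_elems_processed)
        {
            // process elements [x, x + num_elems_processed)
        }
    }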
|
|
AccessWindowRectangle::update_window_if_needed()
Change-Id: I56426cc9c9688a0aa0acdd439d5887c7ef208cd2
Note: The code to shrink the window hasn't been fixed yet.
|
|
Change-Id: I69e995973597ba3927d29e4f6ed5438560e53d77
|
|
Change-Id: Ib0798cc17496b7817f5b5769b25d98913a33a69d
|
|
Change-Id: I5bf5d751ec7c02d96c26a769f49d03ea23a248b7
|
|
Change-Id: Ie13a9eb6d417388b5de533bffa895796d9d2cf62
|
|
Change-Id: Ibab049f09413258c99335b7da6b151530a1bd136
|
|
and 8 tensors (Part 1)
Creating special cases for concatenating 2 and 4 tensors.
Change-Id: I6a739a494ae45011acb65369e353f9ef96970b90
|
|
NEGEMMLowpQuantizeDownInt32ToUint8ScaleByFixedPoint
Change-Id: I1d5bc4d24059917f9ddef0873dd3043b1f2320a8
|
|
Adds 0.5f after scaling in AVG pooling so the result rounds to nearest, as vcvtq_u32_f32 rounds towards zero.
Change-Id: I22ce78f9e628cf4184a317edabce47211ab09456
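This works because vcvtq_u32_f32 truncates towards zero, so biasing the scaled average by 0.5f turns the truncation into round-to-nearest for the non-negative pooling results; roughly (a sketch, not the kernel):

    #include <arm_neon.h>

    // Average four lanes' sums and convert to u32 with round-to-nearest behaviour.
    uint32x4_t scale_and_round(float32x4_t sum, float inv_pool_size)
    {
        float32x4_t avg = vmulq_n_f32(sum, inv_pool_size);   // scale by 1/pool_size
        avg             = vaddq_f32(avg, vdupq_n_f32(0.5f)); // the added 0.5f bias
        return vcvtq_u32_f32(avg);                           // truncation now lands on nearest
    }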
|
|
Removed the gemmlowp_mm_bifrost_transposed_dot8 kernel as it was not used.
Change-Id: I43cf463a3a4c0cdb2808621c534ffd5c9fd47ca1
|
|
Increases by 1 the number of steps used to calculate invsqrt in L2 pooling, to improve accuracy.
Change-Id: Ib938a963809b07c30d47ec0675abae75bc086986
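For reference, the usual NEON pattern (a sketch; how many steps the kernel used before the patch is not stated here): vrsqrteq_f32 gives a rough estimate and each vrsqrtsq_f32-based Newton-Raphson step refines it, so adding one more step tightens the 1/sqrt(x) used by L2 pooling.

    #include <arm_neon.h>

    float32x4_t invsqrt(float32x4_t x)
    {
        float32x4_t e = vrsqrteq_f32(x);                    // rough initial estimate
        e = vmulq_f32(e, vrsqrtsq_f32(vmulq_f32(x, e), e)); // Newton-Raphson step
        e = vmulq_f32(e, vrsqrtsq_f32(vmulq_f32(x, e), e)); // Newton-Raphson step
        e = vmulq_f32(e, vrsqrtsq_f32(vmulq_f32(x, e), e)); // the extra step added here
        return e;
    }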
|
|
Removes:
-sve_interleave_8way_block2_16bit
-sve_interleave_8way_block4_16bit
-sve_sgemm_3VLx8
Change-Id: I0aa35fe974d8e122937dfe8923ecf63ff5a52001
|
|
CLGEMMLowpQuantizeDownInt32ToUint8ScaleByFloat
Since we perform an element-wise operation, it is not necessary to pass the output_depth3d.
Change-Id: Ibfa07a0706e902acf59b444aa61e18a348162ea9
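Schematically, the per-element requantisation needs nothing beyond the value itself, which is why a depth parameter is redundant (a sketch with assumed rounding and saturation; not the kernel code):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // int32 accumulator -> U8: scale by a real multiplier, add the offset, saturate.
    uint8_t quantize_down(int32_t acc, float multiplier, int32_t offset)
    {
        const int32_t v = static_cast<int32_t>(std::lround(acc * multiplier)) + offset;
        return static_cast<uint8_t>(std::min(255, std::max(0, v)));
    }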
|
|
The issue was related to CLIm2Col when the number of input channels was less than the number of elements processed by each thread.
The bug has been fixed in the validate_and_configure_window() function by setting the correct number of elements accessed in the output tensor.
Also fixed an issue in GEMM3D when we have a single output channel.
Change-Id: I094292d0c7662599c4a4c3916ec5f5821df5faef
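The shape of the fix, schematically (a sketch, not the actual validate_and_configure_window() code): the number of elements declared as accessed in the output must be the channel count rounded up to a whole number of per-thread vectors, so small channel counts are still fully covered.

    #include <cstddef>

    // Round the channel count up to a multiple of the per-thread processing width.
    size_t elems_accessed(size_t channels, size_t elems_per_thread)
    {
        return ((channels + elems_per_thread - 1) / elems_per_thread) * elems_per_thread;
    }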
|