authorSiCong Li <sicong.li@arm.com>2017-08-23 11:02:43 +0100
committerAnthony Barbier <anthony.barbier@arm.com>2018-11-02 16:35:24 +0000
commit86b53339679e12c952a24a8845a5409ac3d52de6 (patch)
tree807c897ca1001f22b1906d285488877a287b482b /docs/03_scripts.dox
parent70e9bc21682f4eaedaceb632f594f588cb2c91fc (diff)
COMPMID-514 (3RDPARTY_UPDATE)(DATA_UPDATE) Add support to load .npy data
* Add tensorflow_data_extractor script.
* Incorporate 3rdparty npy reader libnpy.
* Port AlexNet system test to validation_new.
* Port LeNet5 system test to validation_new.
* Update 3rdparty/ and data/ submodules.

Change-Id: I156d060fe9185cd8db810b34bf524cbf5cb34f61
Reviewed-on: http://mpd-gerrit.cambridge.arm.com/84914
Reviewed-by: Anthony Barbier <anthony.barbier@arm.com>
Tested-by: Kaizen <jeremy.johnson+kaizengerrit@arm.com>
Diffstat (limited to 'docs/03_scripts.dox')
-rw-r--r--docs/03_scripts.dox68
1 file changed, 60 insertions, 8 deletions
diff --git a/docs/03_scripts.dox b/docs/03_scripts.dox
index a91a93166b..2fd3907978 100644
--- a/docs/03_scripts.dox
+++ b/docs/03_scripts.dox
@@ -9,9 +9,9 @@ One can find caffe <a href="https://github.com/BVLC/caffe/wiki/Model-Zoo">pre-tr
caffe's official github repository.
The caffe_data_extractor.py provided in the @ref scripts folder is an example script that shows how to
-extract the hyperparameter values from a trained model.
+extract the parameter values from a trained model.
-@note complex networks might require alter the script to properly work.
+@note Complex networks might require altering the script to work properly.
@subsection how_to How to use the script
@@ -22,19 +22,71 @@ Download the pre-trained caffe model.
Run the caffe_data_extractor.py script by
- ./caffe_data_extractor.py -m <caffe model> -n <caffe netlist>
+ python caffe_data_extractor.py -m <caffe model> -n <caffe netlist>
For example, to extract the data from pre-trained caffe Alex model to binary file:
- ./caffe_data_extractor.py -m /path/to/bvlc_alexnet.caffemodel -n /path/to/caffe/models/bvlc_alexnet/deploy.prototxt
+ python caffe_data_extractor.py -m /path/to/bvlc_alexnet.caffemodel -n /path/to/caffe/models/bvlc_alexnet/deploy.prototxt
The script has been tested under Python2.7.
-@subsection result What is the expected ouput from the script
+@subsection result What is the expected output from the script
-If the script run succesfully, it prints the shapes of each layer onto the standard
-output and generates *.dat files containing the weights and biases of each layer.
+If the script runs successfully, it prints the names and shapes of each layer onto the standard
+output and generates *.npy files containing the weights and biases of each layer.
The @ref arm_compute::utils::load_trained_data shows how one could load
-the weights and biases into tensor from the .dat file by the help of Accessor.
+the weights and biases into tensors from the .npy files with the help of the Accessor.
+
+@section tensorflow_data_extractor Extract data from pre-trained tensorflow model
+
+The script tensorflow_data_extractor.py extracts trainable parameters (e.g. values of weights and biases) from a
+trained tensorflow model. A tensorflow model consists of the following two files:
+
+{model_name}.data-{step}-of-{max_step}: A binary file containing the values of each variable.
+
+{model_name}.meta: A binary file containing a MetaGraph struct which defines the graph structure of the neural
+network.
+
+@note Since Tensorflow version 0.11 the binary checkpoint file which contains the values for each parameter has the format of:
+ {model_name}.data-{step}-of-{max_step}
+instead of:
+ {model_name}.ckpt
+When dealing with binary files with version >= 0.11, pass only {model_name} to the -m option;
+when dealing with binary files with version < 0.11, pass the whole file name {model_name}.ckpt to the -m option.
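The version split described in the note above can be sketched as a small helper; `checkpoint_arg` is a hypothetical function, not part of the script, shown only to illustrate which value the -m option expects:

```python
def checkpoint_arg(model_name, tf_version):
    """Return the string to pass to the -m option for a given Tensorflow version.

    Hypothetical helper illustrating the note above; the real script simply
    receives the value on its command line.
    """
    major, minor = (int(v) for v in tf_version.split(".")[:2])
    if (major, minor) >= (0, 11):
        # v0.11+: checkpoint data lives in {model_name}.data-{step}-of-{max_step},
        # so only the bare prefix is passed.
        return model_name
    # Older checkpoints are a single {model_name}.ckpt file.
    return model_name + ".ckpt"

print(checkpoint_arg("bvlc_alexnet", "1.2"))
print(checkpoint_arg("bvlc_alexnet", "0.10"))
```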
+
+@note This script relies on the parameters to be extracted being in the
+'trainable_variables' tensor collection. By default all variables are automatically added to this collection unless
+specified otherwise by the user. Should a user alter this default behavior or want to extract parameters from other
+collections, tf.GraphKeys.TRAINABLE_VARIABLES should be replaced accordingly.
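To write one .npy file per collection entry, an extractor has to map each tensorflow variable name (e.g. 'conv1/weights:0') to a flat file name. A minimal sketch of such a mapping, assuming a hypothetical convention that drops the ':0' output suffix and replaces scope separators with underscores (the real script may use a different scheme):

```python
def npy_filename(var_name):
    """Map a tensorflow variable name to a flat .npy file name.

    Hypothetical naming scheme for illustration:
    'conv1/weights:0' -> 'conv1_weights.npy'
    """
    base = var_name.split(":")[0]           # drop the ':0' output index
    return base.replace("/", "_") + ".npy"  # flatten the scope path

print(npy_filename("conv1/weights:0"))
print(npy_filename("fc8/biases:0"))
```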
+
+@subsection how_to_tensorflow How to use the script
+
+Install tensorflow and numpy.
+
+Download the pre-trained tensorflow model.
+
+Run tensorflow_data_extractor.py with
+
+    python tensorflow_data_extractor.py -m <path_to_binary_checkpoint_file> -n <path_to_metagraph_file>
+
+For example, to extract the data from pre-trained tensorflow Alex model to binary files:
+
+    python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet -n /path/to/bvlc_alexnet.meta
+
+Or for binary checkpoint files before Tensorflow 0.11:
+
+    python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet.ckpt -n /path/to/bvlc_alexnet.meta
+
+@note With Tensorflow versions >= 0.11, only the model name is passed to the -m option.
+
+The script has been tested with Tensorflow 1.2, 1.3 on Python 2.7.6 and Python 3.4.3.
+
+@subsection result_tensorflow What is the expected output from the script
+
+If the script runs successfully, it prints the names and shapes of each parameter onto the standard output and generates
+*.npy files containing the weights and biases of each layer.
+
+The @ref arm_compute::utils::load_trained_data shows how one could load
+the weights and biases into tensors from the .npy files with the help of the Accessor.
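Outside the library, the generated files can be round-tripped with plain numpy; a minimal sketch of what the npy-based loading amounts to (file name and values are hypothetical, not produced by the scripts):

```python
import numpy as np

# Write a dummy parameter array the way the extractor scripts do,
# then read it back; the reading side is what an npy Accessor performs
# when filling a tensor.
weights = np.arange(6, dtype=np.float32).reshape(2, 3)
np.save("conv1_weights.npy", weights)   # hypothetical file name

loaded = np.load("conv1_weights.npy")
print(loaded.shape, loaded.dtype)
```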
*/