author     Pablo Marquez Tello <pablo.tello@arm.com>    2022-07-20 09:16:20 +0100
committer  Pablo Marquez Tello <pablo.tello@arm.com>    2022-07-20 10:36:30 +0000
commit     3964f17fd46a8b1ee39ea10408d3825c9a67af0b (patch)
tree       0cbcdbaa998a6dad8b6c023998b77ee460f58ce6
parent     553f6953fe3bdfad53c11c25f305a16d79d83b24 (diff)
download   ComputeLibrary-3964f17fd46a8b1ee39ea10408d3825c9a67af0b.tar.gz
Remove data extraction scripts
* Resolved MLCE-886

Change-Id: I3b8fbe662c715b82c08c63fa27892471a572fdd8
Signed-off-by: Pablo Marquez Tello <pablo.tello@arm.com>
Reviewed-on: https://review.mlplatform.org/c/ml/ComputeLibrary/+/7945
Tested-by: Arm Jenkins <bsgcomp@arm.com>
Reviewed-by: Gunes Bayir <gunes.bayir@arm.com>
Benchmark: Gunes Bayir <gunes.bayir@arm.com>
Comments-Addressed: Arm Jenkins <bsgcomp@arm.com>
-rw-r--r--  docs/03_scripts.dox                    178
-rw-r--r--  docs/ComputeLibrary.dir                  8
-rwxr-xr-x  scripts/caffe_data_extractor.py         45
-rwxr-xr-x  scripts/tensorflow_data_extractor.py    51
4 files changed, 0 insertions(+), 282 deletions(-)
diff --git a/docs/03_scripts.dox b/docs/03_scripts.dox
deleted file mode 100644
index e66bb402fe..0000000000
--- a/docs/03_scripts.dox
+++ /dev/null
@@ -1,178 +0,0 @@
-///
-/// Copyright (c) 2017-2020 Arm Limited.
-///
-/// SPDX-License-Identifier: MIT
-///
-/// Permission is hereby granted, free of charge, to any person obtaining a copy
-/// of this software and associated documentation files (the "Software"), to
-/// deal in the Software without restriction, including without limitation the
-/// rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
-/// sell copies of the Software, and to permit persons to whom the Software is
-/// furnished to do so, subject to the following conditions:
-///
-/// The above copyright notice and this permission notice shall be included in all
-/// copies or substantial portions of the Software.
-///
-/// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-/// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-/// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-/// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-/// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-/// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-/// SOFTWARE.
-///
-namespace arm_compute
-{
-/**
-@page data_import Importing data from existing models
-
-@tableofcontents
-
-@section caffe_data_extractor Extract data from a pre-trained Caffe model
-
-Pre-trained Caffe models can be found in the <a href="https://github.com/BVLC/caffe/wiki/Model-Zoo">Model Zoo</a> on
-Caffe's official GitHub repository.
-
-The caffe_data_extractor.py script provided in the scripts folder shows how to
-extract the parameter values from a trained model.
-
-@note Complex networks might require altering the script for it to work properly.
-
-@subsection caffe_how_to How to use the script
-
-Install Caffe following <a href="http://caffe.berkeleyvision.org/installation.html">Caffe's installation instructions</a>.
-Make sure pycaffe has been added to the PYTHONPATH.
-
-Download the pre-trained Caffe model.
-
-Run the caffe_data_extractor.py script:
-
-    python caffe_data_extractor.py -m <caffe model> -n <caffe netlist>
-
-For example, to extract the data from the pre-trained Caffe AlexNet model to binary files:
-
-    python caffe_data_extractor.py -m /path/to/bvlc_alexnet.caffemodel -n /path/to/caffe/models/bvlc_alexnet/deploy.prototxt
-
-The script has been tested under Python 2.7.
-
-@subsection caffe_result What is the expected output from the script
-
-If the script runs successfully, it prints the name and shape of each layer to the standard
-output and generates *.npy files containing the weights and biases of each layer.
-
-The arm_compute::utils::load_trained_data function shows how one could load
-the weights and biases from the .npy files into a tensor with the help of an Accessor.
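-
-For illustration only, a minimal sketch (assuming a hypothetical dumped file named conv1_w.npy) of how a
-generated file can be inspected with NumPy before loading it into a tensor:
-
-    import numpy as np
-
-    # Load the dumped weights and check their shape
-    weights = np.load('conv1_w.npy')
-    print(weights.shape)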
-
-@section tensorflow_data_extractor Extract data from a pre-trained TensorFlow model
-
-The script tensorflow_data_extractor.py extracts trainable parameters (e.g. the values of weights and biases) from a
-trained TensorFlow model. A TensorFlow model consists of the following two files:
-
-{model_name}.data-{step}-of-{max_step}: A binary file containing the value of each variable.
-
-{model_name}.meta: A binary file containing a MetaGraph struct which defines the graph structure of the neural
-network.
-
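-For orientation, a minimal sketch (TensorFlow 1.x, hypothetical checkpoint prefix 'my_model') of the
-tf.train.Saver call that produces this pair of files:
-
-    import tensorflow as tf
-
-    v = tf.Variable(tf.zeros([2, 2]), name='v')  # hypothetical variable
-    saver = tf.train.Saver()
-    with tf.Session() as sess:
-        sess.run(tf.global_variables_initializer())
-        # Writes my_model.meta along with the my_model.data-* and my_model.index files
-        saver.save(sess, 'my_model')
-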
-@note Since TensorFlow version 0.11 the binary checkpoint file which contains the values for each parameter has the format:
-    {model_name}.data-{step}-of-{max_step}
-instead of:
-    {model_name}.ckpt
-When dealing with binary files from version >= 0.11, pass only {model_name} to the -m option;
-when dealing with binary files from version < 0.11, pass the whole file name {model_name}.ckpt to the -m option.
-
-@note This script relies on the parameters to be extracted being in the
-'trainable_variables' tensor collection. By default all variables are automatically added to this collection unless
-specified otherwise by the user. Thus, should a user alter this default behavior and/or want to extract parameters
-from other collections, tf.GraphKeys.TRAINABLE_VARIABLES should be replaced accordingly, as in the sketch below.
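-
-A minimal sketch of that substitution, assuming a hypothetical user-defined collection named 'my_collection'
-(TensorFlow 1.x API, matching the versions the script was tested with):
-
-    import tensorflow as tf
-
-    # Iterate over a user-defined collection instead of tf.GraphKeys.TRAINABLE_VARIABLES
-    for t in tf.get_collection('my_collection'):
-        print(t.name, t.shape)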
-
-@subsection tensorflow_how_to How to use the script
-
-Install TensorFlow and NumPy.
-
-Download the pre-trained TensorFlow model.
-
-Run tensorflow_data_extractor.py with
-
-    python tensorflow_data_extractor.py -m <path_to_binary_checkpoint_file> -n <path_to_metagraph_file>
-
-For example, to extract the data from the pre-trained TensorFlow AlexNet model to binary files:
-
-    python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet -n /path/to/bvlc_alexnet.meta
-
-Or for binary checkpoint files from before TensorFlow 0.11:
-
-    python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet.ckpt -n /path/to/bvlc_alexnet.meta
-
-@note With TensorFlow versions >= 0.11, only the model name is passed to the -m option.
-
-The script has been tested with TensorFlow 1.2 and 1.3 on Python 2.7.6 and Python 3.4.3.
-
-@subsection tensorflow_result What is the expected output from the script
-
-If the script runs successfully, it prints the name and shape of each parameter to the standard output and generates
-*.npy files containing the weights and biases of each layer.
-
-The arm_compute::utils::load_trained_data function shows how one could load
-the weights and biases from the .npy files into a tensor with the help of an Accessor.
-
-@section tf_frozen_model_extractor Extract data from a pre-trained frozen TensorFlow model
-
-The script tf_frozen_model_extractor.py extracts trainable parameters (e.g. the values of weights and biases) from a
-frozen, trained TensorFlow model.
-
-@subsection tensorflow_frozen_how_to How to use the script
-
-Install TensorFlow and NumPy.
-
-Download the pre-trained TensorFlow model and freeze the model using the architecture and the checkpoint file.
-
-Run tf_frozen_model_extractor.py with
-
-    python tf_frozen_model_extractor.py -m <path_to_frozen_pb_model_file> -d <path_to_store_parameters>
-
-For example, to extract the data from a pre-trained TensorFlow model to binary files:
-
-    python tf_frozen_model_extractor.py -m /path/to/inceptionv3.pb -d ./data
-
-@subsection tensorflow_frozen_result What is the expected output from the script
-
-If the script runs successfully, it prints the name and shape of each parameter to the standard output and generates
-*.npy files containing the weights and biases of each layer.
-
-The arm_compute::utils::load_trained_data function shows how one could load
-the weights and biases from the .npy files into a tensor with the help of an Accessor.
-
-@section validate_examples Validating examples
-
-Compute Library provides a list of graph examples that are used in the context of integration and performance testing.
-The provenance of each model is part of its documentation and no structural or data alterations have been applied to any
-of them unless explicitly specified otherwise in the documentation.
-
-Using one of the provided scripts will generate files containing the trainable parameters.
-
-You can validate a given graph example on a list of inputs by running:
-
- LD_LIBRARY_PATH=lib ./<graph_example> --validation-range='<validation_range>' --validation-file='<validation_file>' --validation-path='/path/to/test/images/' --data='/path/to/weights/'
-
-e.g.:
-
-    LD_LIBRARY_PATH=lib ./bin/graph_alexnet --target=CL --layout=NHWC --type=F32 --threads=4 --validation-range='16666,24998' --validation-file='val.txt' --validation-path='images/' --data='data/'
-
-where:
-    The validation file is a plain text file containing a list of images along with their expected label values.
-    e.g.:
-
-    val_00000001.JPEG 65
-    val_00000002.JPEG 970
-    val_00000003.JPEG 230
-    val_00000004.JPEG 809
-    val_00000005.JPEG 516
-
-    --validation-range is the index range of the images within the validation file you want to check:
-    e.g.:
-
-    --validation-range='100,200' will validate the images at indices 100 through 200 in the validation file.
-
-    This can be useful when the validation process needs to be parallelized.
-*/
-}
diff --git a/docs/ComputeLibrary.dir b/docs/ComputeLibrary.dir
index e92cd72c37..ab9dfc1b93 100644
--- a/docs/ComputeLibrary.dir
+++ b/docs/ComputeLibrary.dir
@@ -198,14 +198,6 @@
* @brief Utility scripts.
*/
-/** @file scripts/caffe_data_extractor.py
- * @brief Basic script to export weights from Caffe to npy files.
- */
-
-/** @file scripts/tensorflow_data_extractor.py
- * @brief Basic script to export weights from TensorFlow to npy files.
- */
-
/** @dir src
* @brief Source code implementing all the arm_compute headers.
*/
diff --git a/scripts/caffe_data_extractor.py b/scripts/caffe_data_extractor.py
deleted file mode 100755
index 47d24b265f..0000000000
--- a/scripts/caffe_data_extractor.py
+++ /dev/null
@@ -1,45 +0,0 @@
-#!/usr/bin/env python
-"""Extracts trainable parameters from Caffe models and stores them in numpy arrays.
-Usage:
-    python caffe_data_extractor.py -m path_to_caffe_model_file -n path_to_caffe_netlist
-
-Saves each variable to a {variable_name}.npy binary file.
-
-Tested with Caffe 1.0 on Python 2.7
-"""
-import argparse
-import caffe
-import os
-import numpy as np
-
-
-if __name__ == "__main__":
- # Parse arguments
- parser = argparse.ArgumentParser('Extract Caffe net parameters')
- parser.add_argument('-m', dest='modelFile', type=str, required=True, help='Path to Caffe model file')
- parser.add_argument('-n', dest='netFile', type=str, required=True, help='Path to Caffe netlist')
- args = parser.parse_args()
-
- # Create Caffe Net
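-    # (the second positional argument is the phase: 1 == caffe.TEST)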
- net = caffe.Net(args.netFile, 1, weights=args.modelFile)
-
- # Read and dump blobs
-    for name, blobs in net.params.items():
- print('Name: {0}, Blobs: {1}'.format(name, len(blobs)))
- for i in range(len(blobs)):
- # Weights
- if i == 0:
- outname = name + "_w"
- # Bias
- elif i == 1:
- outname = name + "_b"
- else:
- continue
-
- varname = outname
- if os.path.sep in varname:
- varname = varname.replace(os.path.sep, '_')
- print("Renaming variable {0} to {1}".format(outname, varname))
- print("Saving variable {0} with shape {1} ...".format(varname, blobs[i].data.shape))
- # Dump as binary
- np.save(varname, blobs[i].data)
diff --git a/scripts/tensorflow_data_extractor.py b/scripts/tensorflow_data_extractor.py
deleted file mode 100755
index 1dbf0e127e..0000000000
--- a/scripts/tensorflow_data_extractor.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env python
-"""Extracts trainable parameters from Tensorflow models and stores them in numpy arrays.
-Usage
- python tensorflow_data_extractor -m path_to_binary_checkpoint_file -n path_to_metagraph_file
-
-Saves each variable to a {variable_name}.npy binary file.
-
-Note that since TensorFlow version 0.11 the binary checkpoint file which contains the values for each parameter has the format:
-    {model_name}.data-{step}-of-{max_step}
-instead of:
-    {model_name}.ckpt
-When dealing with binary files from version >= 0.11, pass only {model_name} to the -m option;
-when dealing with binary files from version < 0.11, pass the whole file name {model_name}.ckpt to the -m option.
-
-Also note that this script relies on the parameters to be extracted being in the
-'trainable_variables' tensor collection. By default all variables are automatically added to this collection unless
-specified otherwise by the user. Thus, should a user alter this default behavior and/or want to extract parameters
-from other collections, tf.GraphKeys.TRAINABLE_VARIABLES should be replaced accordingly.
-
-Tested with TensorFlow 1.2 and 1.3 on Python 2.7.6 and Python 3.4.3.
-"""
-import argparse
-import numpy as np
-import os
-import tensorflow as tf
-
-
-if __name__ == "__main__":
- # Parse arguments
-    parser = argparse.ArgumentParser('Extract TensorFlow net parameters')
-    parser.add_argument('-m', dest='modelFile', type=str, required=True, help='Path to TensorFlow checkpoint binary\
-        file. For TensorFlow version >= 0.11, only include the model name; for TensorFlow version < 0.11, include the\
-        model name with the ".ckpt" extension')
-    parser.add_argument('-n', dest='netFile', type=str, required=True, help='Path to TensorFlow MetaGraph file')
- args = parser.parse_args()
-
-    # Load the TensorFlow net
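-    # import_meta_graph rebuilds the graph from the .meta file and returns a Saver for it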
- saver = tf.train.import_meta_graph(args.netFile)
- with tf.Session() as sess:
- # Restore session
- saver.restore(sess, args.modelFile)
- print('Model restored.')
- # Save trainable variables to numpy arrays
- for t in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES):
- varname = t.name
- if os.path.sep in t.name:
- varname = varname.replace(os.path.sep, '_')
- print("Renaming variable {0} to {1}".format(t.name, varname))
- print("Saving variable {0} with shape {1} ...".format(varname, t.shape))
- # Dump as binary
- np.save(varname, sess.run(t))