| <!-- HTML header for doxygen 1.8.15--> |
| <!-- Remember to use version doxygen 1.8.15 +--> |
| <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> |
| <html xmlns="http://www.w3.org/1999/xhtml"> |
| <head> |
| <meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/> |
| <meta http-equiv="X-UA-Compatible" content="IE=9"/> |
| <meta name="generator" content="Doxygen 1.8.15"/> |
| <meta name="robots" content="NOINDEX, NOFOLLOW" /> <!-- Prevent indexing by search engines --> |
| <title>Compute Library: Importing data from existing models</title> |
| <link href="tabs.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="jquery.js"></script> |
| <script type="text/javascript" src="dynsections.js"></script> |
| <link href="navtree.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="resize.js"></script> |
| <script type="text/javascript" src="navtreedata.js"></script> |
| <script type="text/javascript" src="navtree.js"></script> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| $(document).ready(initResizable); |
| /* @license-end */</script> |
| <link href="search/search.css" rel="stylesheet" type="text/css"/> |
| <script type="text/javascript" src="search/searchdata.js"></script> |
| <script type="text/javascript" src="search/search.js"></script> |
| <script type="text/x-mathjax-config"> |
| MathJax.Hub.Config({ |
| extensions: ["tex2jax.js"], |
| jax: ["input/TeX","output/HTML-CSS"], |
| }); |
| </script><script type="text/javascript" async="async" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js"></script> |
| <link href="doxygen.css" rel="stylesheet" type="text/css" /> |
| <link href="stylesheet.css" rel="stylesheet" type="text/css"/> |
| </head> |
| <body> |
| <div id="top"><!-- do not remove this div, it is closed by doxygen! --> |
| <div id="titlearea"> |
| <table cellspacing="0" cellpadding="0"> |
| <tbody> |
| <tr style="height: 56px;"> |
| <img alt="Compute Library" src="https://raw.githubusercontent.com/ARM-software/ComputeLibrary/gh-pages/ACL_logo.png" style="max-width: 100%;margin-top: 15px;margin-left: 10px"/> |
| <td style="padding-left: 0.5em;"> |
| <div id="projectname"> |
|  <span id="projectnumber">19.11</span> |
| </div> |
| </td> |
| </tr> |
| </tbody> |
| </table> |
| </div> |
| <!-- end header part --> |
| <!-- Generated by Doxygen 1.8.15 --> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| var searchBox = new SearchBox("searchBox", "search",false,'Search'); |
| /* @license-end */ |
| </script> |
| <script type="text/javascript" src="menudata.js"></script> |
| <script type="text/javascript" src="menu.js"></script> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| $(function() { |
| initMenu('',true,false,'search.php','Search'); |
| $(document).ready(function() { init_search(); }); |
| }); |
| /* @license-end */</script> |
| <div id="main-nav"></div> |
| </div><!-- top --> |
| <div id="side-nav" class="ui-resizable side-nav-resizable"> |
| <div id="nav-tree"> |
| <div id="nav-tree-contents"> |
| <div id="nav-sync" class="sync"></div> |
| </div> |
| </div> |
| <div id="splitbar" style="-moz-user-select:none;" |
| class="ui-resizable-handle"> |
| </div> |
| </div> |
| <script type="text/javascript"> |
| /* @license magnet:?xt=urn:btih:cf05388f2679ee054f2beb29a391d25f4e673ac3&dn=gpl-2.0.txt GPL-v2 */ |
| $(document).ready(function(){initNavTree('data_import.xhtml','');}); |
| /* @license-end */ |
| </script> |
| <div id="doc-content"> |
| <!-- window showing the filter options --> |
| <div id="MSearchSelectWindow" |
| onmouseover="return searchBox.OnSearchSelectShow()" |
| onmouseout="return searchBox.OnSearchSelectHide()" |
| onkeydown="return searchBox.OnSearchSelectKey(event)"> |
| </div> |
| |
| <!-- iframe showing the search results (closed by default) --> |
| <div id="MSearchResultsWindow"> |
| <iframe src="javascript:void(0)" frameborder="0" |
| name="MSearchResults" id="MSearchResults"> |
| </iframe> |
| </div> |
| |
| <div class="PageDoc"><div class="header"> |
| <div class="headertitle"> |
| <div class="title">Importing data from existing models </div> </div> |
| </div><!--header--> |
| <div class="contents"> |
| <div class="toc"><h3>Table of Contents</h3> |
| <ul><li class="level1"><a href="#caffe_data_extractor">Extract data from pre-trained caffe model</a><ul><li class="level2"><a href="#caffe_how_to">How to use the script</a></li> |
| <li class="level2"><a href="#caffe_result">What is the expected output from the script</a></li> |
| </ul> |
| </li> |
| <li class="level1"><a href="#tensorflow_data_extractor">Extract data from pre-trained tensorflow model</a><ul><li class="level2"><a href="#tensorflow_how_to">How to use the script</a></li> |
| <li class="level2"><a href="#tensorflow_result">What is the expected output from the script</a></li> |
| </ul> |
| </li> |
| <li class="level1"><a href="#tf_frozen_model_extractor">Extract data from pre-trained frozen tensorflow model</a><ul><li class="level2"><a href="#tensorflow_frozen_how_to">How to use the script</a></li> |
| <li class="level2"><a href="#tensorflow_frozen_result">What is the expected output from the script</a></li> |
| </ul> |
| </li> |
| <li class="level1"><a href="#validate_examples">Validating examples</a></li> |
| </ul> |
| </div> |
| <div class="textblock"><h1><a class="anchor" id="caffe_data_extractor"></a> |
Extract data from pre-trained Caffe model</h1>
<p>One can find Caffe <a href="https://github.com/BVLC/caffe/wiki/Model-Zoo">pre-trained models</a> in the Model Zoo on Caffe's official GitHub repository.</p>
<p>The caffe_data_extractor.py script provided in the scripts folder is an example that shows how to extract the parameter values from a trained model.</p>
<dl class="section note"><dt>Note</dt><dd>Complex networks might require altering the script for it to work properly.</dd></dl>
| <h2><a class="anchor" id="caffe_how_to"></a> |
| How to use the script</h2> |
<p>Install Caffe following <a href="http://caffe.berkeleyvision.org/installation.html">Caffe's installation instructions</a>. Make sure pycaffe has been added to the PYTHONPATH.</p>
| <p>Download the pre-trained caffe model.</p> |
<p>Run the caffe_data_extractor.py script with </p><pre class="fragment"> python caffe_data_extractor.py -m <caffe model> -n <caffe netlist>
</pre><p>For example, to extract the data from the pre-trained Caffe AlexNet model to binary files: </p><pre class="fragment"> python caffe_data_extractor.py -m /path/to/bvlc_alexnet.caffemodel -n /path/to/caffe/models/bvlc_alexnet/deploy.prototxt
</pre><p>The script has been tested under Python 2.7.</p>
| <h2><a class="anchor" id="caffe_result"></a> |
| What is the expected output from the script</h2> |
<p>If the script runs successfully, it prints the name and shape of each layer to the standard output and generates *.npy files containing the weights and biases of each layer.</p>
<p>The <a class="el" href="namespacearm__compute_1_1utils.xhtml#af214346f90d640ac468dd90fa2a275cc" title="Load the tensor with pre-trained data from a binary file.">arm_compute::utils::load_trained_data</a> function shows how one could load the weights and biases from the .npy files into a tensor with the help of an Accessor.</p>
| <h1><a class="anchor" id="tensorflow_data_extractor"></a> |
Extract data from pre-trained TensorFlow model</h1>
<p>The script tensorflow_data_extractor.py extracts trainable parameters (e.g. the values of weights and biases) from a trained TensorFlow model. A TensorFlow model consists of the following two files:</p>
<p>{model_name}.data-{step}-{global_step}: A binary file containing the values of each variable.</p>
<p>{model_name}.meta: A binary file containing a MetaGraph struct which defines the graph structure of the neural network.</p>
| <dl class="section note"><dt>Note</dt><dd>Since Tensorflow version 0.11 the binary checkpoint file which contains the values for each parameter has the format of: {model_name}.data-{step}-of-{max_step} instead of: {model_name}.ckpt When dealing with binary files with version >= 0.11, only pass {model_name} to -m option; when dealing with binary files with version < 0.11, pass the whole file name {model_name}.ckpt to -m option.</dd> |
| <dd> |
This script relies on the parameters to be extracted being in the 'trainable_variables' tensor collection. By default all variables are automatically added to this collection unless specified otherwise by the user. Thus, should a user alter this default behavior or want to extract parameters from other collections, tf.GraphKeys.TRAINABLE_VARIABLES should be replaced accordingly.</dd></dl>
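<p>Conceptually, the extraction restores the checkpoint through the MetaGraph and then dumps every variable in that collection. The snippet below is a minimal TensorFlow 1.x sketch of this approach, assuming placeholder file names for the MetaGraph and checkpoint; it is not the script itself.</p>
<pre class="fragment"># Minimal TF 1.x sketch: restore a checkpoint and dump the trainable variables.
# 'model.meta' and 'model' are placeholder names for the MetaGraph and checkpoint.
from __future__ import print_function

import numpy as np
import tensorflow as tf

saver = tf.train.import_meta_graph('model.meta')
with tf.Session() as sess:
    saver.restore(sess, 'model')
    for variable in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES):
        value = sess.run(variable)
        print(variable.name, value.shape)
        np.save(variable.name.replace('/', '_').replace(':', '_') + '.npy', value)
</pre>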
| <h2><a class="anchor" id="tensorflow_how_to"></a> |
| How to use the script</h2> |
<p>Install TensorFlow and NumPy.</p>
<p>Download the pre-trained TensorFlow model.</p>
<p>Run tensorflow_data_extractor.py with </p><pre class="fragment"> python tensorflow_data_extractor.py -m <path_to_binary_checkpoint_file> -n <path_to_metagraph_file>
</pre><p>For example, to extract the data from the pre-trained TensorFlow AlexNet model to binary files: </p><pre class="fragment"> python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet -n /path/to/bvlc_alexnet.meta
</pre><p>Or, for binary checkpoint files from before TensorFlow 0.11: </p><pre class="fragment"> python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet.ckpt -n /path/to/bvlc_alexnet.meta
</pre><dl class="section note"><dt>Note</dt><dd>With TensorFlow versions >= 0.11, only the model name is passed to the -m option.</dd></dl>
<p>The script has been tested with TensorFlow 1.2 and 1.3 on Python 2.7.6 and Python 3.4.3.</p>
| <h2><a class="anchor" id="tensorflow_result"></a> |
| What is the expected output from the script</h2> |
<p>If the script runs successfully, it prints the name and shape of each parameter to the standard output and generates .npy files containing the weights and biases of each layer.</p>
<p>The <a class="el" href="namespacearm__compute_1_1utils.xhtml#af214346f90d640ac468dd90fa2a275cc" title="Load the tensor with pre-trained data from a binary file.">arm_compute::utils::load_trained_data</a> function shows how one could load the weights and biases from the .npy files into a tensor with the help of an Accessor.</p>
| <h1><a class="anchor" id="tf_frozen_model_extractor"></a> |
Extract data from pre-trained frozen TensorFlow model</h1>
<p>The script tf_frozen_model_extractor.py extracts trainable parameters (e.g. the values of weights and biases) from a frozen, trained TensorFlow model.</p>
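<p>In a frozen model, all variables have already been folded into Const nodes, so the parameters can be read directly from the GraphDef. The snippet below is a minimal TensorFlow 1.x sketch of that idea with a placeholder file name; the actual script may filter and name its outputs differently.</p>
<pre class="fragment"># Minimal TF 1.x sketch: read Const nodes (the folded parameters) from a frozen graph.
# 'frozen_model.pb' is a placeholder file name.
from __future__ import print_function

import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Const':
        value = tf.make_ndarray(node.attr['value'].tensor)
        print(node.name, value.shape)
        np.save(node.name.replace('/', '_') + '.npy', value)
</pre>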
| <h2><a class="anchor" id="tensorflow_frozen_how_to"></a> |
| How to use the script</h2> |
<p>Install TensorFlow and NumPy.</p>
<p>Download the pre-trained TensorFlow model and freeze it using the model architecture and the checkpoint file.</p>
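<p>If a frozen model is not already available, one way to produce it is TensorFlow's graph_util API (the standalone freeze_graph tool achieves the same result). The snippet below is a minimal TensorFlow 1.x sketch of that step, assuming placeholder file names and output node names.</p>
<pre class="fragment"># Minimal TF 1.x sketch of freezing: fold variables into constants and write a .pb file.
# File names and the output node name are placeholders.
import tensorflow as tf

saver = tf.train.import_meta_graph('model.meta')
with tf.Session() as sess:
    saver.restore(sess, 'model')
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output_node_name'])
    with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
</pre>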
<p>Run tf_frozen_model_extractor.py with </p><pre class="fragment"> python tf_frozen_model_extractor.py -m <path_to_frozen_pb_model_file> -d <path_to_store_parameters>
</pre><p>For example, to extract the data from a pre-trained TensorFlow model to binary files: </p><pre class="fragment"> python tf_frozen_model_extractor.py -m /path/to/inceptionv3.pb -d ./data
| </pre><h2><a class="anchor" id="tensorflow_frozen_result"></a> |
| What is the expected output from the script</h2> |
<p>If the script runs successfully, it prints the name and shape of each parameter to the standard output and generates .npy files containing the weights and biases of each layer.</p>
<p>The <a class="el" href="namespacearm__compute_1_1utils.xhtml#af214346f90d640ac468dd90fa2a275cc" title="Load the tensor with pre-trained data from a binary file.">arm_compute::utils::load_trained_data</a> function shows how one could load the weights and biases from the .npy files into a tensor with the help of an Accessor.</p>
| <h1><a class="anchor" id="validate_examples"></a> |
| Validating examples</h1> |
| <p>Using one of the provided scripts will generate files containing the trainable parameters.</p> |
| <p>You can validate a given graph example on a list of inputs by running: </p><pre class="fragment">LD_LIBRARY_PATH=lib ./<graph_example> --validation-range='<validation_range>' --validation-file='<validation_file>' --validation-path='/path/to/test/images/' --data='/path/to/weights/' |
</pre><p>e.g.: </p><pre class="fragment">LD_LIBRARY_PATH=lib ./bin/graph_alexnet --target=CL --layout=NHWC --type=F32 --threads=4 --validation-range='16666,24998' --validation-file='val.txt' --validation-path='images/' --data='data/'
</pre>
<p>where the validation file is a plain text file containing a list of images along with their expected label values, e.g.: </p><pre class="fragment">val_00000001.JPEG 65
| val_00000002.JPEG 970 |
| val_00000003.JPEG 230 |
| val_00000004.JPEG 809 |
| val_00000005.JPEG 516 |
</pre><p>--validation-range is the index range of the images within the validation file that you want to check, e.g.:</p>
<p>--validation-range='100,200' will validate 100 images starting from the 100th one in the validation file.</p>
<p>This can be useful when the validation process needs to be parallelized. </p>
| </div></div><!-- PageDoc --> |
| </div><!-- contents --> |
| </div><!-- doc-content --> |
| <!-- start footer part --> |
| <div id="nav-path" class="navpath"><!-- id is needed for treeview function! --> |
| <ul> |
| <li class="footer">Generated on Thu Nov 28 2019 16:53:08 for Compute Library by |
| <a href="http://www.doxygen.org/index.html"> |
| <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.15 </li> |
| </ul> |
| </div> |
| </body> |
| </html> |