Some of the models available for OpenVINO are not provided in the Inference Engine IR format.
Models in IR format are also what the samples in the Open Model Zoo use, so models that are not yet in IR format need to be converted first.
The model converter is provided as converter.py in
/opt/intel/openvino/deployment_tools/tools/model_downloader
./converter.py -h
usage: converter.py [-h] [-c CONFIG.YML] [-d DIR] [-o DIR]
                    [--name PAT[,PAT...]] [--list FILE.LST] [--all]
                    [--print_all] [--precisions PREC[,PREC...]] [-p PYTHON]
                    [--mo MO.PY] [--add-mo-arg ARG] [--dry-run] [-j JOBS]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG.YML, --config CONFIG.YML
                        model configuration file (deprecated)
  -d DIR, --download_dir DIR
                        root of the directory tree with downloaded model files
  -o DIR, --output_dir DIR
                        root of the directory tree to place converted files into
  --name PAT[,PAT...]   convert only models whose names match at least one of
                        the specified patterns
  --list FILE.LST       convert only models whose names match at least one of
                        the patterns in the specified file
  --all                 convert all available models
  --print_all           print all available models
  --precisions PREC[,PREC...]
                        run only conversions that produce models with the
                        specified precisions
  -p PYTHON, --python PYTHON
                        Python executable to run Model Optimizer with
  --mo MO.PY            Model Optimizer entry point script
  --add-mo-arg ARG      Extra argument to pass to Model Optimizer
  --dry-run             Print the conversion commands without running them
  -j JOBS, --jobs JOBS  number of conversions to run concurrently
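Before converting anything, it can help to check which models converter.py knows about and what Model Optimizer command it would run. The sketch below only uses the --print_all and --dry-run options from the help text above; the installation path is the same one used in the conversion example later in this article.

# List every model that converter.py can convert
/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/converter.py --print_all

# Show the Model Optimizer command for one model without actually running it
/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/converter.py --name ssd300 -d ~/models/ --dry-run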
With --all, every available model is converted.
With --name, only the specified models are converted.
/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/converter.py --name ssd300 -d ~/models/ --precisions=FP16
========= Converting ssd300 to IR (FP16)
Conversion command: /usr/bin/python3 -- /opt/intel/openvino_2020.1.023/deployment_tools/model_optimizer/mo.py --framework=caffe --data_type=FP16 --output_dir=/home/klf/models/public/ssd300/FP16 --model_name=ssd300 '--input_shape=[1,3,300,300]' --input=data '--mean_values=data[104.0,117.0,123.0]' --output=detection_out --input_model=/home/klf/models/public/ssd300/models/VGGNet/VOC0712Plus/SSD_300x300_ft/VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.caffemodel --input_proto=/home/klf/models/public/ssd300/models/VGGNet/VOC0712Plus/SSD_300x300_ft/deploy.prototxt

Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:  /home/klf/models/public/ssd300/models/VGGNet/VOC0712Plus/SSD_300x300_ft/VGG_VOC0712Plus_SSD_300x300_ft_iter_160000.caffemodel
  - Path for generated IR:  /home/klf/models/public/ssd300/FP16
  - IR output name:  ssd300
  - Log level:  ERROR
  - Batch:  Not specified, inherited from the model
  - Input layers:  data
  - Output layers:  detection_out
  - Input shapes:  [1,3,300,300]
  - Mean values:  data[104.0,117.0,123.0]
  - Scale values:  Not specified
  - Scale factor:  Not specified
  - Precision of IR:  FP16
  - Enable fusing:  True
  - Enable grouped convolutions fusing:  True
  - Move mean values to preprocess section:  False
  - Reverse input channels:  False
Caffe specific parameters:
  - Path to Python Caffe* parser generated from caffe.proto:  /opt/intel/openvino_2020.1.023/deployment_tools/model_optimizer/mo/front/caffe/proto
  - Enable resnet optimization:  True
  - Path to the Input prototxt:  /home/klf/models/public/ssd300/models/VGGNet/VOC0712Plus/SSD_300x300_ft/deploy.prototxt
  - Path to CustomLayersMapping.xml:  Default
  - Path to a mean file:  Not specified
  - Offsets for a mean file:  Not specified
Model Optimizer version:  2020.1.0-61-gd349c3ba4a
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/klf/models/public/ssd300/FP16/ssd300.xml
[ SUCCESS ] BIN file: /home/klf/models/public/ssd300/FP16/ssd300.bin
[ SUCCESS ] Total execution time: 11.24 seconds.
[ SUCCESS ] Memory consumed: 585 MB.
In this way, the model can be converted to an IR model.
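As a quick check of the result, the generated .xml/.bin pair can be listed and fed to the benchmark tool. This is only a rough sketch: the output paths are taken from the conversion log above, while the benchmark_app.py path assumes a default OpenVINO 2020.1 installation.

# The IR consists of a network description (.xml) and weights (.bin)
ls ~/models/public/ssd300/FP16/
# ssd300.bin  ssd300.xml

# Assumed default install path for the benchmark tool; -m selects the model, -d the target device
python3 /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py \
    -m ~/models/public/ssd300/FP16/ssd300.xml -d CPU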