OpenVINO 2020.4 Release

OpenVINO 2020.4 has been released:
https://software.intel.com/en-us/openvino-toolkit
https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html

The following is reposted from Intel's release notes.

Executive Summary

  • Improves performance while maintaining accuracy close to full precision (for example, FP32 data type) by introducing support for the Bfloat16 data type for inferencing using the 3rd generation Intel® Xeon® Scalable processor (formerly code-named Cooper Lake). (A minimal sketch of enabling this follows the list.)
  • Increases accuracy when layers have varying bit-widths by extending the Post-Training Optimization Tool to support mixed-precision quantization.
  • Allows greater compatibility of models by supporting direct reading of the Open Neural Network Exchange (ONNX*) model format into the Inference Engine. For users looking to take full advantage of Intel® Distribution of OpenVINO™ toolkit, it is recommended to follow the native workflow of using the Intermediate Representation from the Model Optimizer as input to the Inference Engine.
  • For users looking to more easily take a converted model in ONNX model format (for example, PyTorch to ONNX using torch.onnx), they are now able to input the ONNX format directly to the Inference Engine to run models on Intel architecture. (A sketch of this path also follows the list.)
  • Enables initial support for TensorFlow* 2.2.0 for computer vision use cases.
  • Extends the Deep Learning Workbench with a remote profiling capability, enabling users to connect to and profile multiple remote hosts and to collect and store the data in one place for further analysis.
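
As a quick illustration of the Bfloat16 item above: the CPU plugin in this release exposes an ENFORCE_BF16 configuration key. The snippet below is a minimal sketch, assuming a BF16-capable CPU and hypothetical IR files model.xml / model.bin.

```python
from openvino.inference_engine import IECore

# Hypothetical IR files produced by the Model Optimizer.
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

# Ask the CPU plugin to execute FP32 layers in bfloat16 where supported.
# Intended for BF16-capable hardware such as 3rd generation
# Intel Xeon Scalable (Cooper Lake).
ie.set_config({"ENFORCE_BF16": "YES"}, "CPU")

exec_net = ie.load_network(network=net, device_name="CPU")
```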
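And a sketch of the direct ONNX path: a PyTorch model is exported with torch.onnx, and the resulting .onnx file is handed straight to the Inference Engine, skipping the Model Optimizer. The resnet18 model and file name are only placeholders.

```python
import torch
import torchvision
from openvino.inference_engine import IECore

# Export a PyTorch model to ONNX (resnet18 is only an example).
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx")

# New in 2020.4: the .onnx file can be read directly, without first
# converting it to Intermediate Representation with the Model Optimizer.
ie = IECore()
net = ie.read_network(model="resnet18.onnx")
exec_net = ie.load_network(network=net, device_name="CPU")
```

As the release notes point out, the Model Optimizer IR workflow remains the recommended path for full toolkit functionality; direct ONNX reading mainly removes a conversion step for quick experiments.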

So it looks like this release adds compute support for Cooper Lake, direct ONNX model support, and TensorFlow 2.2.0 support.

Here at OpenVINO.jp, we plan to continue with benchmarks and other coverage.