OpenVINO 2022.3 LTS has been released
On the hardware side, it adds support for Intel Arc GPUs.
This is a Long-Term Support (LTS) release. LTS releases are published every year and supported for two years (one year of bug fixes and two years of security patches). See the Intel® Distribution of OpenVINO™ toolkit Long-Term Support (LTS) Policy v.2 for details.
- The 2022.3 LTS release provides functional bug fixes and capability changes on top of the previous 2022.2 release. It empowers developers with new performance enhancements, support for more deep learning models, broader device portability, and higher inference performance with fewer code changes.
- Broader model and hardware support – Optimize & deploy with ease across an expanded range of deep learning models, including NLP models, and access AI acceleration across an expanded range of hardware.
- Full support for 4th Generation Intel® Xeon® Scalable processor family (code name Sapphire Rapids) for deep learning inferencing workloads from edge to cloud.
- Full support for Intel’s discrete graphics cards, such as Intel® Data Center GPU Flex Series and Intel® Arc™ GPU, for DL inferencing workloads in intelligent cloud, edge, and media analytics scenarios.
- Improved performance when leveraging throughput hint on CPU plugin for 12th and 13th Generation Intel® Core™ processor family (code named Alder Lake and Raptor Lake).
- Enhanced AUTO functionality with a new “cumulative throughput” mode and compute-mode selection, enabling multiple accelerators (e.g. multiple GPUs) to be used at once to maximize inferencing performance.
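As an illustration, cumulative throughput can be requested through the AUTO device in the Python API. This is a minimal sketch: the model path and the assumption that two GPUs are enumerated as `GPU.0` and `GPU.1` are placeholders for your own setup.

```python
from openvino.runtime import Core  # OpenVINO 2022.x Python API

core = Core()
# "model.xml" is a placeholder path to an OpenVINO IR model.
model = core.read_model("model.xml")

# AUTO with an explicit device list; CUMULATIVE_THROUGHPUT lets AUTO
# schedule inference requests across all listed accelerators at once.
compiled = core.compile_model(
    model,
    "AUTO:GPU.0,GPU.1",
    {"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"},
)
request = compiled.create_infer_request()
```

Without an explicit device list (plain `"AUTO"`), the plugin selects devices itself.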
- Expanded model coverage – Optimize & deploy with ease across an expanded range of deep learning models.
- Broader support for NLP models and use cases like text-to-speech and voice recognition.
- Continued performance enhancements for computer vision models, including StyleGAN2, Stable Diffusion, PyTorch RAFT and YOLOv7.
- Significant quality and model performance improvements on Intel GPUs compared to the previous OpenVINO toolkit release.
- New Jupyter notebook tutorials for Stable Diffusion text-to-image generation, YOLOv7 optimization and 3D Point Cloud Segmentation.
- Improved API and More Integrations – Easier to adopt and maintain code: requires fewer code changes, aligns better with frameworks, and minimizes conversion.
- Preview of TensorFlow Front End – Load TensorFlow models directly into OpenVINO Runtime and easily export to OpenVINO IR format without offline conversion. The new `--use_new_frontend` flag enables this preview – see further details in the Model Optimizer section of the release notes.
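For example, the preview front end can be enabled when converting a TensorFlow model with Model Optimizer (the model filename below is a placeholder):

```shell
# Convert a frozen TensorFlow graph to OpenVINO IR using the new
# TensorFlow Front End preview instead of the legacy conversion path.
mo --input_model model.pb --use_new_frontend
```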
- NEW: Hugging Face Optimum Intel – Gain the performance benefits of OpenVINO (including NNCF) when using Hugging Face Transformers. Initial release supports PyTorch models.
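A minimal sketch of the Optimum Intel workflow, assuming the `optimum[openvino]` package is installed; the model ID below is just an example checkpoint:

```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
# from_transformers=True exports the PyTorch weights to OpenVINO IR on the fly.
model = OVModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("OpenVINO makes inference fast."))
```

The resulting `model` runs on the OpenVINO Runtime while keeping the familiar Transformers `pipeline` interface.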
- Intel® oneAPI Deep Neural Network Library (oneDNN) has been updated to 2.7 for further refinements and significant performance improvements on the latest Intel CPUs and GPUs.
- Introducing C API 2.0, to support new features introduced in OpenVINO API 2.0, such as dynamic shapes with CPU, pre-processing and post-process API, unified property definition and usage. The new C API 2.0 shares the same library files as the 1.0 API, but with a different header file.
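A minimal C API 2.0 sketch of the load–compile–infer flow (error handling elided; `"model.xml"` is a placeholder path):

```c
#include <openvino/c/openvino.h>

int main(void) {
    ov_core_t* core = NULL;
    ov_model_t* model = NULL;
    ov_compiled_model_t* compiled = NULL;
    ov_infer_request_t* request = NULL;

    ov_core_create(&core);                            /* C API 2.0 entry point */
    ov_core_read_model(core, "model.xml", NULL, &model);
    /* Trailing 0: no extra properties are passed (variadic property list). */
    ov_core_compile_model(core, model, "CPU", 0, &compiled);
    ov_compiled_model_create_infer_request(compiled, &request);

    /* ... set input tensors and run inference here ... */

    ov_infer_request_free(request);
    ov_compiled_model_free(compiled);
    ov_model_free(model);
    ov_core_free(core);
    return 0;
}
```

Note the new header file: the 2.0 functions live alongside the legacy 1.0 API in the same library files.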
- Note: Intel® Movidius™ VPU based products are not supported in this release, but will be added back in a future OpenVINO 2022.3.1 LTS update. In the meantime, for support on those products please use OpenVINO 2022.1.
- Note: Macintosh* computers using the M1* processor can now install OpenVINO and use the OpenVINO ARM* Device Plug-in on OpenVINO 2022.3 LTS and later. This plugin is community supported; no support is provided by Intel and it doesn’t fall under the LTS 2-year support policy. Learn more here: https://docs.openvino.ai/2022.3/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html
- OpenVINO runtime Movidius™ VPU deprecation notice: