OpenVINO 2023.0 Release

OpenVINO 2023.0 has been released.
https://www.intel.com/content/www/us/en/developer/articles/release-notes/openvino/2023-0.html

I was slow to write this article, so the bugfix release, OpenVINO 2023.0.1, is already out as well.

It is a version ending in .0, something you do not see very often among release versions.

Below is a transcription of Intel's release notes.

Support for Python 3.11 and the like seems to be progressing steadily, but the release also includes a rather bold update for an Intel product:
support for Apple M1/M2.
Since OpenVINO is a platform, it is fine for it to run on any environment, but perhaps this is where Intel shows its broad-mindedness.

The supported systems and platforms are also documented quite thoroughly, so there is no confusion about which environment to install on, which is another point in its favor.

New and Changed in 2023.0

  • More integrations, minimizing code changes
    • Now you can load TensorFlow and TensorFlow Lite models directly in OpenVINO Runtime and OpenVINO Model Server. Models are converted automatically. For maximum performance, it is still recommended to convert to OpenVINO Intermediate Representation (IR) format before loading the model. Additionally, we’ve introduced similar functionality for PyTorch models as a preview feature, where you can convert PyTorch models directly without needing to convert to ONNX (see the loading/conversion sketch after this list).
    • Support for Python 3.11  
    • NEW: C++ developers can now install OpenVINO runtime from Conda Forge​  
    • NEW: ARM processors are now supported in CPU plug-in, including dynamic shapes, full processor performance, and broad sample code/notebook coverage. Officially validated for Raspberry Pi 4 and Apple® Mac M1/M2 
    • Preview: A new Python API has been introduced to allow developers to convert and optimize models directly from Python scripts 
  • Broader model support and optimizations
    • Expanded model support for generative AI: CLIP, BLIP, Stable Diffusion 2.0, text processing models, transformer models (e.g. S-BERT, GPT-J, etc.), and others of note: Detectron2, Paddle Slim, RNN-T, Segment Anything Model (SAM), Whisper, and YOLOv8, to name a few.
    • Initial support for dynamic shapes on GPU – you no longer need to change to static shapes when leveraging the GPU which is especially important for NLP models. 
    • Neural Network Compression Framework (NNCF) is now the main quantization solution. You can use it for both post-training optimization and quantization-aware training. Try it out: pip install nncf (a minimal quantization sketch follows this list).
  • Portability and performance​
    • CPU plugin now offers thread scheduling on 12th Gen Intel® Core™ and up. You can choose to run inference on E-cores, P-cores, or both, depending on your application’s configuration. It is now possible to optimize for performance or for power savings as needed.
    • NEW: Default Inference Precision – no matter which device you use, OpenVINO will default to the format that enables its optimal performance, for example FP16 for GPU or BF16 for 4th Generation Intel® Xeon®. You no longer need to convert the model beforehand to a specific IR precision, and you still have the option of running in accuracy mode if needed (see the property sketch after this list).
    • Model caching on GPU is now improved with more efficient model loading/compiling.
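
As a quick illustration of the direct TensorFlow loading and the preview PyTorch conversion mentioned in the list above, here is a minimal sketch. The model path, the toy PyTorch module, and the example_input shape are placeholders of my own, not something from the release notes; the conversion entry point assumed here is openvino.tools.mo.convert_model, the Python conversion API bundled with 2023.0.

```python
import torch
import openvino.runtime as ov
from openvino.tools.mo import convert_model  # conversion API bundled with 2023.0

core = ov.Core()

# TensorFlow / TF Lite models can now be read directly by the runtime,
# without an explicit offline conversion step ("model.pb" is a placeholder path).
tf_model = core.read_model("model.pb")
compiled_tf = core.compile_model(tf_model, "CPU")

# Preview: convert a PyTorch model in memory, skipping the ONNX detour.
pt_model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
ov_model = convert_model(pt_model, example_input=torch.zeros(1, 3, 224, 224))
compiled_pt = core.compile_model(ov_model, "CPU")
```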
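
The NNCF item can likewise be tried in a few lines. A minimal post-training quantization sketch, assuming an existing IR file and a small random calibration set standing in for real data (both placeholders):

```python
import numpy as np
import nncf  # pip install nncf
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# Placeholder calibration data; in practice, use real preprocessed samples.
calibration_items = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(10)]

def transform_fn(item):
    # Map one dataset item to the model input(s).
    return item

calibration_dataset = nncf.Dataset(calibration_items, transform_fn)

# Post-training INT8 quantization in a single call.
quantized_model = nncf.quantize(model, calibration_dataset)
compiled = core.compile_model(quantized_model, "CPU")
```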
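
Finally, the thread-scheduling and default-precision items are exposed as compile-time properties. The sketch below uses string property keys; "PERFORMANCE_HINT" and "INFERENCE_PRECISION_HINT" are documented keys, while "SCHEDULING_CORE_TYPE" and its "PCORE_ONLY" value are my reading of the 2023.0 CPU plugin options, so treat them as an assumption and check the documentation for your build.

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# Ask for latency-oriented scheduling and (assumption) pin inference to
# P-cores on a hybrid 12th Gen+ CPU; "ECORE_ONLY" / "ANY_CORE" would be the
# other expected values.
compiled = core.compile_model(model, "CPU", {
    "PERFORMANCE_HINT": "LATENCY",
    "SCHEDULING_CORE_TYPE": "PCORE_ONLY",
})

# Opt out of the automatic lower-precision default (e.g. BF16 on 4th Gen
# Xeon) and force FP32 "accuracy mode" when needed.
compiled_fp32 = core.compile_model(model, "CPU", {
    "INFERENCE_PRECISION_HINT": "f32",
})
```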

System Requirements

Disclaimer: Certain hardware (including but not limited to GPU and GNA) requires manual installation of specific drivers to work correctly. Drivers might require updates to your operating system, including the Linux kernel; please refer to their documentation. Operating system updates should be handled by the user and are not part of the OpenVINO installation.

Intel CPU processors with corresponding operating systems 

  • Intel Atom® processor with Intel® SSE4.2 support
  • Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
  • 6th – 13th generation Intel® Core™ processors
  • Intel® Xeon® Scalable Processors (code name Skylake)
  • 2nd Generation Intel® Xeon® Scalable Processors (code name Cascade Lake)
  • 3rd Generation Intel® Xeon® Scalable Processors (code name Cooper Lake and Ice Lake)
  • 4th Generation Intel® Xeon® Scalable Processors (code name Sapphire Rapids)

Operating Systems:

  • Ubuntu 22.04 long-term support (LTS), 64-bit (Kernel 5.15+)
  • Ubuntu 20.04 long-term support (LTS), 64-bit (Kernel 5.15+)
  • Ubuntu 18.04 long-term support (LTS) with limitations, 64-bit (Kernel 5.4+)
  • Windows* 10 
  • Windows* 11 
  • macOS* 10.15 and above, 64-bit 
  • Red Hat Enterprise Linux* 8, 64-bit

Intel® Processor Graphics with corresponding operating systems (GEN Graphics) 

  • Intel® HD Graphics
  • Intel® UHD Graphics
  • Intel® Iris® Pro Graphics
  • Intel® Iris® Xe Graphics
  • Intel® Iris® Xe Max Graphics
  • Intel® Arc™ GPU Series
  • Intel® Data Center GPU Flex Series

Operating Systems:

  • Ubuntu* 22.04 long-term support (LTS), 64-bit
  • Ubuntu* 20.04 long-term support (LTS), 64-bit
  • Windows* 10, 64-bit 
  • Windows* 11, 64-bit
  • Red Hat Enterprise Linux* 8, 64-bit

NOTES:

  • This installation requires drivers that are not included in the Intel® Distribution of OpenVINO™ toolkit package. 
  • A chipset that supports processor graphics is required for Intel® Xeon® processors. Processor graphics are not included in all processors. See Product Specifications for information about your processor.
  • Although this release works with Ubuntu 20.04 for discrete graphics cards, Ubuntu 20.04 is not POR (plan of record) for discrete graphics drivers, so OpenVINO support is limited.
  • Recommended OpenCL™ driver versions: 22.43 for Ubuntu* 22.04, 22.41 for Ubuntu* 20.04, and 22.28 for Red Hat Enterprise Linux* 8

Intel® Gaussian & Neural Accelerator 

Operating Systems:

  • Ubuntu* 22.04 long-term support (LTS), 64-bit
  • Ubuntu* 20.04 long-term support (LTS), 64-bit
  • Windows* 10, 64-bit 
  • Windows* 11, 64-bit

Operating system and developer environment requirements:

  • Linux* OS
    • Ubuntu 22.04 with Linux kernel 5.15+
    • Ubuntu 20.04 with Linux kernel 5.15+
    • RHEL 8 with Linux kernel 5.4
    • A Linux OS build environment needs these components:
    • Higher kernel versions might be required for 10th Gen Intel® Core™ Processors, 11th Gen Intel® Core™ Processors, 11th Gen Intel® Core™ S-Series Processors, 12th Gen Intel® Core™ Processors, 13th Gen Intel® Core™ Processors, or 4th Gen Intel® Xeon® Scalable Processors to support CPU, GPU, GNA, or hybrid-core CPU capabilities
  • Windows* 10 and 11
  • macOS* 10.15 and above
  • DL frameworks versions:
    • TensorFlow* 1.15, 2.12
    • MXNet* 1.9
    • ONNX* 1.13
    • PaddlePaddle* 2.4
    • Note: This package can be installed on other versions of these DL frameworks, but only the versions specified above are fully validated.

NOTE: OpenVINO Python binaries and binaries on Windows/CentOS 7/macOS (x86) are built with oneTBB libraries, while those on Ubuntu and Red Hat OS systems are built with legacy TBB, which is released by the OS distribution. OpenVINO can be built by the user with either oneTBB or legacy TBB on all of the above OS systems. System compatibility and performance were improved on hybrid CPUs such as 12th Gen Intel Core and above.