Accelerate Vision-based AI with Intel® Distribution of OpenVINO™ Toolkit

Inspecting parts for defects before they continue through the manufacturing process. Identifying different items being placed into a shopping basket for a checkout-free retail experience.

Computer vision’s broad range of usage models makes it one of the most exciting AI applications today. However, even after a deep learning model is trained, significant challenges remain before that model can be deployed in the real world. To address some of these challenges and shorten the path from trained model to real-world deployment, Intel developed the Intel® Distribution of OpenVINO™ toolkit (Open Visual Inference and Neural Network Optimization), which helps software developers and data scientists fast-track the development of high-performance computer vision and deep learning vision applications. It includes deep learning deployment tools, computer vision libraries, optimized OpenCV* and media encode/decode functions, several code samples, and more than 20 pre-trained models. It’s a free download. (An open source version of the toolkit is also available1.)

Defining an Approach from Model to Solution

After training a deep learning model, additional steps are needed to deliver a production AI solution:

  • Reducing framework footprint to focus on inference. Training frameworks can be unnecessarily compute-intensive due to the inclusion of code not relevant to the specific deep learning application. Running a smaller core of code means a smaller overall application footprint that can run more quickly and robustly on a wider range of hardware.
  • Assessing and improving performance on target hardware. In many cases, the inference model must deliver its results quickly in order for those results to be valuable. Assessment in a training environment often doesn’t provide an adequate measure of performance in the production environment.
  • Localizing for heterogeneous architectures. The diverse environments and applications to which AI is being applied increase the likelihood that inference will run on different hardware than the compute platform used for training. For best performance, the application needs to be optimized for that particular inference hardware.
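The second step above, assessing performance on the target hardware, often starts with a simple latency measurement run on the device itself. The sketch below is illustrative Python, not part of the toolkit; the `run_inference` callable is a stand-in for your deployed model:

```python
import time

def measure_latency(run_inference, inputs, warmup=5, iterations=50):
    """Return average single-inference latency in milliseconds."""
    for _ in range(warmup):                 # warm caches and lazy initialization
        run_inference(inputs)
    start = time.perf_counter()
    for _ in range(iterations):
        run_inference(inputs)
    elapsed = time.perf_counter() - start
    return elapsed / iterations * 1000.0

# Stand-in for a real inference call on the target device.
dummy_model = lambda batch: sum(batch)
latency_ms = measure_latency(dummy_model, [1.0] * 1000)
```

Measured this way on the production device, rather than in the training environment, the number reflects what the deployed application will actually deliver.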

The Intel Distribution of OpenVINO toolkit solves these problems to help accelerate computer vision and deep learning performance across multiple types of Intel® processors (details follow).

Streamlining Computer Vision Development and Deployment

The Intel Distribution of OpenVINO toolkit provides three primary capabilities:

  • Deep Learning for Computer Vision. The toolkit helps you accelerate and deploy convolutional neural networks (CNNs) on Intel® architecture using the Intel® Deep Learning Deployment Toolkit (Intel® DL Deployment Toolkit).
  • Traditional Computer Vision. You can develop classic computer vision applications with the optimized OpenCV library or the OpenVX* API, and accelerate media encode/decode with optimized libraries and functions.
  • Hardware Acceleration. Finally, the toolkit supports a wide variety of Intel® hardware, including general-purpose CPUs, GPUs (Intel® Processor Graphics), Intel® FPGAs, and Intel® Movidius™ Vision Processing Units (VPUs).

The Intel DL Deployment Toolkit incorporates a Model Optimizer that takes as input a trained deep learning model from any of several supported frameworks, including TensorFlow*, Caffe*, MXNet*, and ONNX*. The Model Optimizer streamlines and speeds execution by, for example, eliminating or folding certain model layers or lowering precision to FP16. In the process, the model is distilled down to only what the inference task needs: framework elements used only for training are removed, reducing the size of the deployed model.
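As a sketch of how this conversion is typically invoked: the Model Optimizer ships as a command-line script (historically `mo.py`), and a TensorFlow model might be converted with flags along the following lines. Exact script and flag names vary by toolkit version, and the input filename here is assumed, so treat this as illustrative:

```python
# Hypothetical Model Optimizer invocation (flag names vary by toolkit
# version). The output is an intermediate representation: an .xml topology
# file plus a .bin weights file, ready for the Inference Engine.
mo_command = [
    "python", "mo.py",
    "--input_model", "frozen_model.pb",  # trained TensorFlow graph (assumed filename)
    "--data_type", "FP16",               # lower precision to half-float
    "--output_dir", "ir_model/",
]
print(" ".join(mo_command))
```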

These optimizations produce an intermediate representation file that the toolkit’s Inference Engine runs on the target Intel hardware. The engine supports Intel® Xeon® Scalable, Intel® Core™, and Intel Atom® processors, Intel Movidius VPUs and Neural Compute Sticks, and Intel FPGAs, among other targets. Formerly, these different pieces of hardware required different toolchains; this unified toolkit makes it much easier to adapt your application to different architectures without recoding.
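Because the Inference Engine exposes each hardware target through a common device name (for example "CPU", "GPU", "MYRIAD" for Movidius VPUs, or "FPGA"), switching targets becomes a configuration change rather than a rewrite. A minimal sketch of that idea; the device names follow the toolkit's plugin naming, but the preference order and selection helper are our own illustration, not a toolkit API:

```python
# Device names follow the toolkit's plugin naming ("MYRIAD" covers Movidius
# VPUs, including the Neural Compute Stick). The preference order and the
# selection helper below are illustrative, not part of the toolkit.
PREFERRED_DEVICES = ["FPGA", "MYRIAD", "GPU", "CPU"]

def select_device(available):
    """Pick the most preferred available target, falling back to CPU."""
    for device in PREFERRED_DEVICES:
        if device in available:
            return device
    return "CPU"

device = select_device(["CPU", "GPU"])  # picks "GPU" when both are present
```

The toolkit also supports a heterogeneous mode (a device string such as "HETERO:FPGA,CPU") that splits a network across devices, falling back to the CPU for layers an accelerator cannot run.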

The Intel DL Deployment Toolkit’s Inference Engine then lets you test your model in a production environment and assess its performance against your key performance indicators. If performance falls short of your target, you can make adjustments such as lowering precision to half-float (an easy change within the engine), which may increase performance without the trouble of retraining your model. Together, the Model Optimizer and Inference Engine are the core of deep learning inference in the Intel Distribution of OpenVINO toolkit.
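The appeal of half-float is easy to see in isolation: casting FP32 weights to FP16 halves their memory footprint while introducing only a small per-value rounding error for values in a typical weight range. A quick NumPy illustration (this demonstrates the numerics only, not OpenVINO itself):

```python
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(10_000).astype(np.float32)  # mock model weights
weights_fp16 = weights_fp32.astype(np.float16)                 # the precision drop

size_ratio = weights_fp32.nbytes / weights_fp16.nbytes  # memory footprint halves
max_error = float(np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32))))
```

Whether that rounding error is acceptable depends on the model, which is why the toolkit makes it easy to test the FP16 variant against your KPIs before committing to it.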

Reducing Time to Solution, Increasing Exploration

In addition to the Model Optimizer and Inference Engine, the toolkit includes more than 20 pre-trained models and supports 100+ public and custom models. These Intel-developed models are high-quality ingredients that customers can use to reduce time to solution or for exploration or demonstration. With these models, customers can, in many cases, skip the burdensome task of collecting and annotating their own data and training their own models. Instead, they can begin with a pre-trained model, saving significant time and hassle.

We currently provide pre-trained models for tasks such as object detection, classification, semantic segmentation, and re-identification, with more to be added in the future. Current examples include age and gender recognition, vehicle detection, emotion recognition, and landmarks regression.

These models offer advantages over typical publicly available reference networks, which tend to be large and general. Because these pre-trained models are specific to particular tasks and operating environments, they are often smaller, more portable, and more performant, and they are better suited to the capabilities of the edge devices on which they may be deployed.

Delivering Heterogeneous Tools for AI

In this era of data-centric innovation, computer vision capabilities like object detection, re-identification, and classification are occurring wherever data lives, from edge devices to the cloud, and on a wide variety of hardware. Tools like the Intel Distribution of OpenVINO toolkit and its pre-trained models are essential to helping more people develop and deploy visual AI wherever it will deliver value. We are excited to continue our work to enable the ecosystem to create the next generation of cutting-edge computer vision and deep learning vision solutions.

Learn More:

1The open source version of the toolkit supports Intel® Processors and Intel® Processor Graphics. The Intel Distribution of OpenVINO toolkit supports plugins for CPUs, GPUs, Intel Movidius VPUs and Intel FPGAs and the OpenVX library. For more details visit 01.org/openvinotoolkit. 
Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Intel, the Intel logo, OpenVINO, Movidius, Core, Xeon, and Intel Atom are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© Intel Corporation