

OpenVINO Tutorial
This tutorial is designed to teach you about Intel's OpenVINO toolkit, which works across various hardware platforms to accelerate deep learning inference. It is helpful for beginners as well as experienced developers, and provides step-by-step instructions with practical examples.
What is OpenVINO?
OpenVINO is an open-source software toolkit developed to optimize and deploy AI models. It stands for Open Visual Inference and Neural Network Optimization. It allows developers to build scalable and efficient AI-driven solutions with only a few lines of code.
OpenVINO is a cross-platform toolkit written in C++, first released by Intel Corporation in 2018. Under the Apache License 2.0, the software is free to use. The OpenVINO Toolkit repository on GitHub is github.com/openvinotoolkit/openvino.
OpenVINO has two main components: the Inference Engine and the Model Optimizer. Along with these two main components, OpenVINO also provides a library of pre-trained models.
OpenVINO is an AI toolkit intended to let "Write Once, Deploy Anywhere".
OpenVINO Model Formats
OpenVINO IR is the standard format used for running inference. It is stored as a pair of files, *.bin and *.xml, which contain the weights and topology, respectively. This format is obtained by converting a model from one of the supported frameworks, using either the application's API or a specific converter.
OpenVINO supports the following model formats −
- PaddlePaddle − A deep learning platform developed by Baidu that provides a set of tools and libraries for model training, optimization, and deployment. It is mostly suited to natural language processing tasks.
- ONNX (including formats that may be serialized to ONNX) − ONNX (Open Neural Network Exchange) allows you to convert models from one framework to another easily. It is used for model sharing and deployment across multiple platforms.
- PyTorch − A deep learning framework used for developing and training models. It has a user-friendly, Pythonic interface and supports dynamic computation graphs.
- TensorFlow − An open-source tool for machine learning that provides a wide range of tools and libraries for model development and deployment. It is also used in deep learning tasks.
Key Features of OpenVINO
The following are the key features of OpenVINO −
1. Model Compression
To run inference locally, you can link directly against the OpenVINO Runtime, or you can use OpenVINO Model Server to serve models from a separate server or within a Kubernetes environment.
2. Fast & Scalable Deployment
An OpenVINO application follows "Write Once, Deploy Anywhere": once you build your model, you can deploy it on any supported hardware platform. There is also flexibility in programming language and operating system, as OpenVINO supports Linux, Windows, and macOS and offers Python, C++, and C APIs.
3. Lighter Deployment
OpenVINO is built with few external dependencies, which reduces the application's footprint and eases installation and dependency handling. You can reduce the final binary size even further by custom-compiling for your specific model.
4. Enhanced App Start-Up Time
If you are looking for a fast start-up, OpenVINO can reduce first-inference latency by running the initial inference on the CPU, then switching to the target device once the model has been compiled and loaded into memory. Compiled models are also cached, which improves start-up time even further.
Download and Install OpenVINO
Get the Intel Distribution of OpenVINO Toolkit from the Intel website. If you don't have an Intel account, you will need to create one, log in, and then proceed to the download page. Choose version 2024.3 LTS and enter your verification code.
The following steps are given for Ubuntu 16.04 on a 64-bit OS −
Step 1: Extract the installation Package
- Open a terminal window
- Move to the directory where you downloaded the Intel Distribution of OpenVINO Toolkit for Linux. If you saved the package in your Downloads folder, run: cd ~/Downloads/ The file is typically named l_openvino_toolkit_p_<version>.tgz.
- Extract the .tgz file: tar -xvzf l_openvino_toolkit_p_<version>.tgz
Step 2: Start Installing OpenVINO
Open a terminal, move to the folder created after extraction, and run the installation command.
Choice 1 − Use the GUI Installation Wizard: sudo ./install_GUI.sh
Choice 2 − Follow the Command-Line Instructions: sudo ./install.sh
When installed as root the default installation directory for OpenVINO is /opt/intel/openvino_<version>/
Follow the prompts to complete the installation; a final screen confirms that OpenVINO has been installed.
Step 3: Install External Software Requirements
Note: If you install OpenVINO in a directory other than the default, replace /opt/intel with the path to your installation directory.
The required dependencies are −
- Intel-optimized build of OpenCV library
- Deep Learning Inference Engine
- Deep Learning Model Optimizer tools
1. Change to the install_dependencies directory: cd /opt/intel/openvino/install_dependencies
2. Run the script to download and install the external software dependencies: sudo -E ./install_openvino_dependencies.sh
Step 4: Set Environment Variables
Before you compile and run the OpenVINO application, you need to update multiple environment variables.
Use these instructions to set the environment variables −
- Open the .bashrc file in your home directory: vi ~/.bashrc
- Add these commands at the end of the file: source /opt/intel/openvino/bin/setupvars.sh
- Save and close the file: press Esc, type :wq, and press Enter.
- To test your change, open a new terminal. You will see: [setupvars.sh] OpenVINO environment is ready.
The configuration is successful.
Step 5: Set Up the Model Optimizer
- Move to the prerequisites directory of the Model Optimizer: cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
- Run this script to configure the Model Optimizer for Caffe, TensorFlow, MXNet, Kaldi, and ONNX all at once: sudo ./install_prerequisites.sh
You can also run the framework-specific script as required −
- Kaldi − sudo ./install_prerequisites_kaldi.sh
- MXNet − sudo ./install_prerequisites_mxnet.sh
- TensorFlow − sudo ./install_prerequisites_tf.sh
- ONNX − sudo ./install_prerequisites_onnx.sh
- Caffe − sudo ./install_prerequisites_caffe.sh
Prerequisites to Learn OpenVINO
To learn OpenVINO, you need to know the fundamentals of a programming language, specifically Python or C++. You should also have an intermediate understanding of deep learning concepts, including neural networks, to understand how OpenVINO works.
Who can Learn OpenVINO?
OpenVINO can be learned by anyone, including data scientists, software developers, and AI/ML engineers. Students in fields such as AI, machine learning, deep learning, natural language processing, or computer vision can also benefit, as they gain valuable experience in model optimization.
OpenVINO FAQs
1. What is the OpenVINO toolkit used for?
The OpenVINO toolkit is used to develop and deploy machine learning models. It allows users to write once and deploy anywhere.
2. What is the function of OpenVINO?
OpenVINO is a software toolkit developed by Intel Corporation that helps to enhance and optimize AI models.
3. Where to download OpenVINO?
To download OpenVINO, go to the OpenVINO Toolkit Download page. Select your operating system and download the appropriate version.
4. Does OpenVINO work on AMD?
The OpenVINO toolkit officially supports only Intel hardware.
5. Which operating systems are supported by OpenVINO?
OpenVINO supports Linux, Windows, and macOS. It also allows you to use your preferred programming language.
6. Is OpenVINO an open-source platform?
Yes, OpenVINO is an open-source deep learning software toolkit developed by Intel Corporation in 2018.
7. Does OpenVINO support edge devices?
Yes, OpenVINO supports edge devices, allowing high-performing AI processing on hardware powered by Intel.
8. What types of hardware does OpenVINO support?
OpenVINO supports Intel CPUs, Intel GPUs, Intel VPUs, Intel FPGAs, and the Intel Neural Compute Stick.