[OpenVINO™ Getting Started] Install and Configure OpenVINO from Scratch (English)

LattePanda · 2020-08-31 14:23:47 · 9944 Views · 0 Replies

Hello, fellow panda-ers!

Here, we have a guide on how to install and configure OpenVINO onto your LattePanda (2nd Gen) single board computer, courtesy of DFRobot community user K2dCnC_w. This article serves as an English translation repost of his original article, which can be found here. Please give credit to the original author when reposting this article.


A brief introduction:
I actually didn't know much about accelerating and deploying edge-computing neural networks before. With the help of my teammates, I gradually learned how to use OpenVINO and neural compute sticks. Due to inexperience, I had to overcome many hurdles along the way. Here I will share the procedure for installing OpenVINO and running its built-in examples. This article mainly follows the official OpenVINO installation tutorial [ ... ndows.html]. Written from a beginner's perspective (which describes me well), this guide covers installing OpenVINO on a Windows 10 system!
First of all, OpenVINO requires three dependencies: CMake, Python, and Visual Studio. It is an optimization and deployment toolkit for neural network models. In short, a trained model is often too large to run well on edge-computing devices. This toolkit converts a model from any of several frameworks into an Intermediate Representation (IR) of the network, which describes the model and consists of two parts:
① an .xml file describing the topology of the network
② a .bin file containing the binary data of the network weights and biases
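To make the two-part IR layout concrete, here is a minimal sketch that parses a toy topology with Python's standard library. The tiny document below only imitates the overall `<net>`/`<layers>` shape of an IR .xml file; it is not a real exported model, just an illustration of the idea.

```python
import xml.etree.ElementTree as ET

# A toy stand-in for an IR .xml file: just enough structure to show that
# the topology is a list of <layer> nodes inside <layers> (illustrative only).
ir_xml = """
<net name="demo" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="prob" type="SoftMax"/>
  </layers>
</net>
"""

root = ET.fromstring(ir_xml)
layers = [(l.get("name"), l.get("type")) for l in root.find("layers")]
print(layers)  # [('input', 'Parameter'), ('conv1', 'Convolution'), ('prob', 'SoftMax')]
```

The matching .bin file would hold the raw weight and bias buffers that these layers reference; the .xml only describes the structure.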
The inference engine provides a common API, so the same code can run on CPUs, GPUs, and VPUs. Having listened to OpenVINO's lectures, I consider this one of the highlights of the tool suite: it makes the toolkit much more convenient to use. We only need to select the VPU as the target device at inference time, and the network is automatically deployed to such hardware acceleration units for efficient inference. I have heard that the power consumption of the Movidius Neural Compute Stick is only about 1 W, so its power efficiency is quite impressive.
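The device-selection idea above can be sketched as a small helper. The plugin names "CPU", "GPU", and "MYRIAD" (the Neural Compute Stick) are OpenVINO's real device strings, but the fallback order and the function itself are our own illustration, not part of the toolkit.

```python
def pick_device(available):
    """Prefer the Neural Compute Stick (MYRIAD) for low-power inference,
    then fall back to GPU, then CPU (illustrative priority order)."""
    for candidate in ("MYRIAD", "GPU", "CPU"):
        if candidate in available:
            return candidate
    raise RuntimeError("no supported inference device found")

# On a machine with a Neural Compute Stick plugged in:
print(pick_device(["CPU", "MYRIAD"]))  # MYRIAD
```

The returned string is what you would pass as the device name when loading the network for inference.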
This kind of low-cost edge-computing inference setup is very suitable for programming enthusiasts and makers who want to build small projects that need neural networks.

Now let's go through the installation steps:
1. Download and install the Win10 release of the OpenVINO installation package
The toolkit can be downloaded at [ ... id=6430695]. It is free, but Intel requires users to register with an email address. Follow the prompts: select [Register & Download] on the page, fill in your email on the registration page, and download the Windows 10 version. During installation, keep the default directory wherever possible and check all available components. After installation completes, it may report that some items are missing; this does not matter, so click [Finish] to complete.

2. Download and install a specific version of python [note the version requirements!!]
Download Python from the Windows downloads page of the official Python website [] and install it. OpenVINO requires Python 3.5–3.7, so don't choose too recent a version. I had been using the latest version, 3.8.x, and it always reported an error when I tried to run OpenVINO. The installation itself is a standard Python install; just remember to check the option to add Python to PATH, otherwise it will not be found at runtime.
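A quick way to confirm your interpreter falls in the supported range is a version check like the one below; the helper function is our own illustration of the 3.5–3.7 requirement, not an OpenVINO API.

```python
import sys

def openvino_python_ok(version=sys.version_info):
    """True if the interpreter is in the 3.5-3.7 range that the
    2020-era OpenVINO Windows releases support (illustrative check)."""
    return (3, 5) <= (version[0], version[1]) <= (3, 7)

print(openvino_python_ok((3, 6, 8)))  # True  (supported)
print(openvino_python_ok((3, 8, 0)))  # False (too new, as the author found)
```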

3. Download and install VS2019 and CMake 3.14 [Choose the right workload and version!! Try to install in the default directory!!]
The official requirements are CMake 2.8.12 or higher and VS2017 or VS2019, but if you want to use VS2019, you must use CMake 3.14. Download and install them normally. One point worth noting: when installing VS, make sure these three workloads are checked: [.NET desktop development], [Desktop development with C++], and [Universal Windows Platform development]. I only checked one of them at first, which led to an error at the command line when verifying whether OpenVINO had installed properly.

4. Determine the system hardware requirements [see if some processors do not support it!!]
The supported hardware listed on the official website includes sixth-generation and later Core and Xeon processors and the Neural Compute Stick, but when I tested it on the LattePanda Delta, it reported an error. The explanation I found online is that the AVX512 instruction set is not supported, though I couldn't find a fix while searching the web. However, the device ran normally after switching inference from the CPU to the Neural Compute Stick. This may be because the LattePanda Delta uses the N4100 processor, which is not among the officially supported N4200/5, N3350/5, or N3450/5 models. There is no such problem on my PC (i7-8750H).

5. Set environment variables and configure model optimizer
You must set the temporary environment variables before proceeding to the next step: run C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat. Later, you can set permanent environment variables manually by following the tutorial on the official website.
After that, you need to configure the model optimizer. The model optimizer is a Python script: it takes model files from various frameworks, or the general ONNX model format, as input and outputs an intermediate representation [IR] for further deployment and inference. As I understand it, frameworks such as Caffe and TensorFlow are supported for direct conversion, while others such as PyTorch need to be exported to ONNX first.
During configuration, you can choose to set up the optimizer for all frameworks at once or for a single framework, by running install_prerequisites.bat, install_prerequisites_tf.bat, or a similar file in the C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\install_prerequisites directory.
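The per-framework choice can be summarized as a small lookup; the batch file names below follow the install_prerequisites_*.bat pattern in that directory, but the exact set shipped may vary by OpenVINO release, so treat this mapping as illustrative.

```python
# Illustrative mapping from framework to the prerequisite batch file you run,
# following the install_prerequisites_*.bat naming pattern described above.
PREREQ_BATS = {
    "all": "install_prerequisites.bat",
    "tensorflow": "install_prerequisites_tf.bat",
    "caffe": "install_prerequisites_caffe.bat",
    "onnx": "install_prerequisites_onnx.bat",
}

print(PREREQ_BATS["tensorflow"])  # install_prerequisites_tf.bat
```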
When configuring the optimizer for TensorFlow, the script reported an error that the installed TensorFlow version did not match. After uninstalling TensorFlow and re-running the batch file, it automatically installed a compatible version and worked normally. If downloads are slow, it is recommended to point pip at a domestic mirror to speed them up.

6. Verification steps
The verification step only requires running two batch files from the demo directory C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\demo\. Run demo_squeezenet_download_convert_run.bat to verify image classification, and demo_security_barrier_camera.bat to verify license plate recognition. Both demos download and convert their models on first run, which may take some time.

7. Directory errors in the BAT files
As mentioned above, when downloading and installing the dependencies, try to use the default directories. However, since the LattePanda Delta's onboard storage is only 32 GB, I added a 128 GB SSD and installed the software in a non-default location. When I ran the BAT files, I discovered that the directories inside them were all hard-coded to the default location; the problem was solved after editing the directories in the BAT files themselves.
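One way to fix the hard-coded paths in bulk is a small script like this sketch, which rewrites the default install root inside a .bat file's contents. The paths are examples matching the situation described above (the D: drive location is hypothetical), and you should back up each file before rewriting it.

```python
# Sketch: replace the hard-coded default install root in a demo .bat file
# with the actual (non-default) install location. Paths are illustrative.
DEFAULT_ROOT = r"C:\Program Files (x86)\IntelSWTools"
ACTUAL_ROOT = r"D:\IntelSWTools"   # hypothetical SSD install location

def patch_bat(text, old_root=DEFAULT_ROOT, new_root=ACTUAL_ROOT):
    """Return the .bat file contents with the install root rewritten."""
    return text.replace(old_root, new_root)

line = r"set ROOT_DIR=C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools"
print(patch_bat(line))  # set ROOT_DIR=D:\IntelSWTools\openvino\deployment_tools
```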

Learning Resources:
In addition, I recently discovered that the OpenVINO Chinese community also has an account on bilibili [], with many hands-on installation videos and demonstrations of real projects deployed, and it has been updated steadily of late. You can follow the OV community there for more related content.