Refine repo docs (readme, license, etc) (#43)
* Move license file to root folder.

Signed-off-by: Pan, Yanjie <yanjie.pan@intel.com>

* Remove model folder since model files are usually too large to push. Will upload models in storage.

Signed-off-by: Pan, Yanjie <yanjie.pan@intel.com>

* Rewrite root README and refine other READMEs.

Signed-off-by: Pan, Yanjie <yanjie.pan@intel.com>

* Refine LICENSE.md.

Signed-off-by: Pan, Yanjie <yanjie.pan@intel.com>

* Refine root README again.

Signed-off-by: Pan, Yanjie <yanjie.pan@intel.com>

* Refine contact info in root readme.

* Refine ivsr_gpu_opt/README.md

* Refine subfolder readme.

---------

Signed-off-by: Pan, Yanjie <yanjie.pan@intel.com>
YanjiePa authored Apr 28, 2023
1 parent 30674b5 commit 827c444
Showing 6 changed files with 112 additions and 76 deletions.
4 changes: 3 additions & 1 deletion license/BSD3_license → LICENSE.md
@@ -1,4 +1,4 @@
BSD 3-Clause License
# BSD 3-Clause License

Copyright (c) 2023, Intel Corporation
All rights reserved.
@@ -17,6 +17,8 @@ modification, are permitted provided that the following conditions are met:
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

DISCLAIMER

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
35 changes: 25 additions & 10 deletions README.md
@@ -1,17 +1,32 @@
# Intel Video Super Resolution (iVSR)
# Enhanced BasicVSR (iVSR)

iVSR is Intel's video super resolution (VSR) solution that enables VSR inference on Intel's CPU/GPU
with enhanced quality and optimized performance.
Video super resolution (VSR) is widely used in AI media enhancement domain to
convert low-resolution video to high-resolution.

## iVSR GPU Opt
This folder enables the inference of BasicVSR (an AI-based VSR algorithm) on Intel CPU and Intel GPU Flex series 170, aka. ATS-M1 150W.
with OpenVINO.

Please check [iVSR_GPU_OPT_README](./ivsr_gpu_opt/README.md) for more details.
BasicVSR is a public AI-based VSR algorithm.
For details of public BasicVSR, check out the [paper](https://arxiv.org/pdf/2012.02181.pdf).

## iVSR FFmpeg plugin
This folder enables to do BasicVSR inference using FFmpeg with OpenVINO as backend.
We have enhanced the public model to achieve better visual quality and less computational complexity.
The performance of BasicVSR inference has also been optimized for Intel GPU.
Now, 2x Enhanced BasicVSR can be run on both Intel CPU and Intel Data Center GPU Flex 170 (*aka* ATS-M1 150W) with OpenVINO and FFmpeg.

Please check [iVSR_FFmpeg_plugin_README](./ivsr_ffmpeg_plugin/README.md) for more details.

## How to evaluate

Please expect `pre-production` quality for the current solution.

### Get models
Please [contact us](mailto:yanjie.pan@intel.com) for FP32/INT8 Enhanced BasicVSR models. Download links for the models will be provided soon.

### Run with OpenVINO
Refer to the guide [here](ivsr_gpu_opt/README.md).

### Run with FFmpeg
Refer to the guide [here](ivsr_ffmpeg_plugin/README.md).


## License

Enhanced BasicVSR is licensed under the BSD 3-clause license. See [LICENSE](LICENSE.md) for details.

107 changes: 62 additions & 45 deletions ivsr_ffmpeg_plugin/README.md
@@ -1,50 +1,62 @@
# iVSR FFmpeg plugin
This folder enables to do BasicVSR inference using FFmpeg with OpenVINO as backend. The inference backend is OpenVINO 2022.1, and the FFmpeg version is 5.1. We've implemented some patches on the OpenVINO 2022.1 and FFmpeg 5.1 to make to pipeline able to work.
The patches for FFmpeg are located in folder 'patches'.
The folder `ivsr_ffmpeg_plugin` enables BasicVSR inference using FFmpeg with OpenVINO as the backend. The inference is validated with OpenVINO 2022.1 and FFmpeg 5.1. We've applied some patches to OpenVINO and FFmpeg to enable the pipeline.
The folder `patches` includes our patches for FFmpeg.

## Prerequisites:
## Prerequisites
The FFmpeg plugin is validated on:
- Intel Xeon hardware platform. Intel® Data Center GPU Flex Series is optional (Validated on Flex 170, also known as ATS-M1 150W)
- Host OS: Linux based OS (Tested and validated on Ubuntu 20.04)
- Intel Xeon hardware platform
- (Optional) Intel Data Center GPU Flex 170 (*aka* ATS-M1 150W)
- Host OS: Linux based OS (Ubuntu 20.04)
- Docker OS: Ubuntu 20.04
- kernel: 5.10.54 (Optional to enable Intel® Data Center GPU Flex Series accelerator)
- cmake
- make
- git
- docker

## Build Docker Image:
Before building docker image, you need to have installed git and docker on your system, and make sure you have access to the openvino and ffmpeg public repo to pull their source code.
Below is the sample command line which is used for our validation.
## Build Docker Image
Before building the docker image, make sure you have access to the public repositories of OpenVINO and FFmpeg.
The following are the sample command lines used for building the docker image.
- Set up docker service
```
sudo mkdir -p /etc/systemd/system/docker.service.d
printf "[Service]\nEnvironment=\"HTTPS_PROXY=$https_proxy\" \"NO_PROXY=$no_proxy\"\n" | sudo tee /etc/systemd/system/docker.service.d/proxy.conf
sudo systemctl daemon-reload
sudo systemctl restart docker
```
- Pull the source code of OpenVINO and FFmpeg with git
```
cd <ivsr-local-folder>
git submodule init
git submodule update --remote --recursive
```
- Build docker image
```
cd ivsr_ffmpeg_plugin
sudo docker build -f Dockerfile -t ffmpeg-ov1 ..
```
If the docker building process is successful, you can find a docker image name `ffmpeg-ov1:latest` with command `docker images`.
If the image is built successfully, you can find a docker image named `ffmpeg-ov1:latest` with the command `docker images`.

## Start docker container and set up BasicVSR OpenVINO environment
When the docker image is built successfully, you can start a container and set up the OpenVINO environment as below. Please be noted the `--shm-size=128g` is necessary because big amount of share memory will be requested from the ffmpeg inference filter.

docker run -itd --name ffmpeg-ov1 --privileged -e MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:9000000000,muzzy_decay_ms:9000000000" --shm-size=128g ffmpeg-ov1-new:latest bash
docker exec -it ffmpeg-ov1 /bin/bash
source /workspace/ivsr/ivsr_gpu_opt/based_on_openvino_2022.1/openvino/install/setupvars.sh
ldconfig
The above command lines will start a docker container named `ffmpeg-ov1`
## Start docker container and set up BasicVSR inference environment
When the docker image is built successfully, you can start a container and set up the OpenVINO environment for BasicVSR inference as below. Please note that `--shm-size=128g` is necessary because a large amount of shared memory will be requested by the FFmpeg inference filter.
```
docker run -itd --name ffmpeg-ov1 --privileged -e MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:9000000000,muzzy_decay_ms:9000000000" --shm-size=128g ffmpeg-ov1:latest bash
docker exec -it ffmpeg-ov1 /bin/bash
source /workspace/ivsr/ivsr_gpu_opt/based_on_openvino_2022.1/openvino/install/setupvars.sh
ldconfig
```
The above command lines will start a docker container named `ffmpeg-ov1`.
## How to run BasicVSR inference with FFmpeg-plugin
- Use FFmpeg-plugin to run BasicVSR inference
```
cd /workspace/ivsr/ivsr_ffmpeg_plugin/ffmpeg
./ffmpeg -i <your test video> -vf dnn_processing=dnn_backend=openvino:model=<your model.xml>:input=input:output=output:nif=3:backend_configs='nireq=1&device=CPU' test_out.mp4
```

## Use FFmpeg-plugin to run pipeline with BasicVSR
cd /workspace/ivsr/ivsr_ffmpeg_plugin/ffmpeg
./ffmpeg -i <your test video> -vf dnn_processing=dnn_backend=openvino:model=<your model.xml>:input=input:output=output:nif=3:backend_configs='nireq=1&device=CPU' test_out.mp4
- Work modes of FFmpeg-plugin for BasicVSR

## Work modes of FFmpeg-plugin for BasicVSR
The FFmpeg decoder and encoder work modes are similar to its based 5.1 version. Options for decoder and encoder, including command line format, are not changed.
Only the options of video filter 'dnn_processing' are introduced here.
The decoder and encoder work similarly to those in FFmpeg 5.1. Options and command line formats are not changed.
Only the options of the video filter `dnn_processing` are introduced here.

|AVOption name|Description|Default value|Recommended value(s)|
|:--|:--|:--|:--|
@@ -55,29 +67,34 @@ Only the options of video filter 'dnn_processing' are introduced here.
|input|input name of the model|NULL|input|
|output|output name of the model|NULL|output|
|backend_configs:nireq|number of inference requests|2|1 or 2 or 4|
|backend_configs:device|Device for inference task|CPU|CPU or GPU|
|backend_configs:device|device for inference task|CPU|CPU or GPU|

Apart from the common AVOptions which can be set in the normal FFmpeg command line with format `AVOption=value`, there are two options `nireq` and `device` to be set with the `backend_configs` option. The command format is `backend_configs='nireq=value&device=value'` when you want to set both of them.

Apart from the common AVOptions which can be set in the normal FFmpeg command line with *AVOption=value* format, there are two options *nireq* and *device* to be set with the *backend_configs* option. The command format is *backend_configs='nireq=value&device=value` when you want to set both of them.
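For example, a command that sets both `nireq` and `device` might look like the sketch below. This is illustrative only: `basicvsr_2x.xml` and `input.mp4` are placeholder names, and the GPU device assumes the optional Flex Series driver setup described later.
```
cd /workspace/ivsr/ivsr_ffmpeg_plugin/ffmpeg
# Illustrative sketch: run BasicVSR inference on GPU with 2 inference requests (placeholder file names)
./ffmpeg -i input.mp4 -vf dnn_processing=dnn_backend=openvino:model=basicvsr_2x.xml:input=input:output=output:nif=3:backend_configs='nireq=2&device=GPU' output.mp4
```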
- Inference model precision and custom operation support

## Inference model precision and custom operation supportive
OpenVINO models with different precisions can all be supported by the FFmpeg-plugin with OpenVINO as backend. In this release, we have validated FP32 and INT8 models which are converted and quantized by the model optimizer from OpenVINO 2022.1.
The custom operation is a great feature of OpenVINO which allows user to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This FFmpeg-plugin can support a BasicVSR model which utilizes custom operation. There is no additional option required for the command line in the case you have set the right path of the model which has custom op. Please be noted the dependent files are located in the docker container folder `/workspace/ivsr/ivsr_gpu_opt/based_on_openvino_2022.1/openvino/flow_warp_custom_op` and `/workspace/ivsr/ivsr_gpu_opt/based_on_openvino_2022.1/openvino/bin/intel64/Release/lib`, and don't make any changes for these two folder.
OpenVINO models with different precisions can be supported by the FFmpeg-plugin. In this release, we have validated FP32 and INT8 models which are converted and quantized by the model optimizer of OpenVINO 2022.1.
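As an illustration only (model conversion is not a step documented in this repo, and `basicvsr.onnx` is a hypothetical file name), an FP32 IR could in principle be produced with the OpenVINO 2022.1 Model Optimizer along these lines:
```
# Hypothetical sketch: convert a source model to an FP32 OpenVINO IR with Model Optimizer (OpenVINO 2022.1)
mo --input_model basicvsr.onnx --data_type FP32 --output_dir ./ir_fp32
```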

## Optional: Intel® Data Center GPU Flex Series Supportive
The default docker image doesn't include Intel® Data Center GPU Flex Series driver installed, so you may not be able to accelerate inference with GPU in the ffmpeg pipeline. You may start a docker container by the above steps, and install the driver by yourself in the docker container.
Below is the selected components during our installation. Some indications may be different, but you can reference this component selection.
[Custom OpenVINO™ Operations](https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_add_openvino_ops.html) is a great feature of OpenVINO that allows users to support models with operations that OpenVINO does not support out-of-the-box.
This FFmpeg-plugin supports the BasicVSR model, which utilizes custom operations. No additional command line options are required as long as you have set the right path to the model. Please note that the dependent files are located in the docker container folders `/workspace/ivsr/ivsr_gpu_opt/based_on_openvino_2022.1/openvino/flow_warp_custom_op` and `/workspace/ivsr/ivsr_gpu_opt/based_on_openvino_2022.1/openvino/bin/intel64/Release/lib`; do not make any changes to them.

Do you want to update kernel xx.xx.xx ?
'y/n' default is y:n
Do you want to install mesa ?
'y/n' default is y:n
Do you want to install media ?
'y/n' default is y:y
Do you want to install opencl ?
'y/n' default is n:y
Do you want to install level zero ?
'y/n' default is n:n
Do you want to install ffmpeg ?
'y/n' default is n:n
Do you want to install tools ?
'y/n' default is n:n
## (Optional) Intel® Data Center GPU Flex Series Support
The default docker image doesn't include the Intel® Data Center GPU Flex Series driver, so you may not be able to accelerate inference with the GPU in the FFmpeg pipeline. You can start a docker container following the above steps and install the driver inside the container.

Below are the components we selected during the driver installation. Some prompts may differ, but you can use this component selection as a reference.
```
Do you want to update kernel xx.xx.xx ?
'y/n' default is y:n
Do you want to install mesa ?
'y/n' default is y:n
Do you want to install media ?
'y/n' default is y:y
Do you want to install opencl ?
'y/n' default is n:y
Do you want to install level zero ?
'y/n' default is n:n
Do you want to install ffmpeg ?
'y/n' default is n:n
Do you want to install tools ?
'y/n' default is n:n
```
9 changes: 5 additions & 4 deletions ivsr_gpu_opt/README.md
@@ -1,7 +1,8 @@
# iVSR GPU Opt
This folder enables the inference of BasicVSR (an AI-based VSR algorithm) on Intel CPU and Intel GPU Flex series 170 (aka. ATS-M1 150W)
In this folder, optimization patches are provided to run the inference of Enhanced BasicVSR on Intel CPU and
Intel Data Center GPU Flex 170 (*aka* ATS-M1 150W)
with OpenVINO.

| Subfolder | Description |
|---------| ----------- |
| based_on_openvino_2022.1 | Optimization patches to OpenVINO 2022.1.0. Check [this](./based_on_openvino_2022.1/README.md) for more details on how to use. |
| Subfolder | Description | How to use |
|---------| ----------- | ---- |
| based_on_openvino_2022.1 | Optimization patches to OpenVINO 2022.1.0. | Check [this](./based_on_openvino_2022.1/README.md) for more details. <br/> Please [contact us](mailto:yanjie.pan@intel.com) for FP32/INT8 models. <br/> Download links for the models will be provided soon. |
33 changes: 17 additions & 16 deletions ivsr_gpu_opt/based_on_openvino_2022.1/README.md
@@ -1,46 +1,47 @@
# iVSR BasicVSR Sample
# iVSR GPU Optimization for OpenVINO 2022.1

This folder enables BasicVSR inference on Intel CPU/GPU using OpenVINO as backends.
This folder `based_on_openvino_2022.1` enables BasicVSR inference on Intel CPU/GPU using OpenVINO 2022.1 as the backend.

## Prerequisites:
- Intel Xeon hardware platform. Intel® Data Center GPU Flex Series is optional (Validated on Flex 170, also known as ATS-M1 150W)
- Host OS: Linux based OS (Tested and validated on Ubuntu 20.04)
- kernel: 5.10.54 (Optional to enable Intel® Data Center GPU Flex Series accelerator)
- Intel Xeon hardware platform
- (Optional) Intel Data Center GPU Flex 170 (*aka* ATS-M1 150W)
- OS: Ubuntu 20.04
- kernel: 5.10.54
- cmake: 3.16.3
- make: 4.2.1
- git: 2.25.1
- gcc: 9.4.0

If they are not available on your system, please install them.

## Set up BasicVSR OpenVINO Environment
You can quickly set up the OpenVINO environment for BasicVSR inference with `build.sh`. Below is a command line example:
```bash
./build.sh <none or Release or Debug >
./build.sh <none or Release or Debug>
```
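For instance, a Release build could be triggered as sketched below (assuming the script is run from this folder):
```bash
# Example: build with the Release configuration
./build.sh Release
```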

## BasicVSR Inference Sample
There is a C++ sample to perform BasicVSR inference on OpenVINO backend. You can reach the sample code and the executable file after you set up the envirnonment successfully.
There is a C++ sample to perform BasicVSR inference on the OpenVINO backend. You can find the sample code and the executable file after you set up the environment successfully.

You can run `sh <PATH_TO_OPENVINO_PROJECT>/bin/intel64/Release/basicvsr_sample -h` to get help messages and see the default settings of parameters.
You can run `<PATH_TO_OPENVINO_PROJECT>/bin/intel64/Release/basicvsr_sample -h` to get help messages and see the default settings of parameters.

|Option name|Description|Default value|Recommended value(s)|
|:--|:--|:--|:--|
|h|Print Help Message|||
|cldnn_config|Optional. Path of CLDNN config for Intel GPU.|None|< Path to OpenVINO >/flow_warp_custom_op/flow_warp.xml|
|cldnn_config|Need. Path of CLDNN config.|None|< Path to OpenVINO >/flow_warp_custom_op/flow_warp.xml|
|data_path|Need. Input data path for inference.|None||
|device|Optional. Device to perform inference.|CPU|CPU or GPU|
|extension|Optional. Extension (.so or .dll) path of custom operation.|None|< Path to basicvsr_sample>/lib/libcustom_extension.so|
|model_path|Need. Path of BasicVSR OpenVINO IR model(.xml).|None||
|model_path|Need. Path of BasicVSR OpenVINO IR model (.xml).|None||
|nif|Need. Number of input frames for each inference.|3|3|
|patch_evalution|Optional. Whether to crop the original frames to smaller patches for evaluation.|false|false|
|save_path|Optional. Path to save predictions.|./outputs||
|save_predictions|Optional. Whether to save the results to save_path.|true||
|save_predictions|Optional. Whether to save the results to save_path.||If this option exists, results will be saved.|

Please note that all the pathes specified options should exist and do not end up with '/'. Below is an example to run BasicVSR inference:
Please note that all the paths specified in options should exist and must not end with '/'. Here are some examples to run BasicVSR inference:
```bash
# Run the inference evaluation on CPU
<PATH_TO_OPENVINO_PROJECT>/bin/intel64/basicvsr_sample --model_path=<IR model path(.xml)> --extension=<PATH_TO_OPENVINO_PROJECT>/bin/intel64/lib/libcustom_extension.so --data_path=<Directory path including input frames> --nif=<Number of input frames> --device=CPU --save_predictions --save_path=<Directory path to save results> --cldnn_config=<PATH_TO_OPENVINO_PROJECT>/flow_warp_custom_op/flow_warp.xml
<PATH_TO_OPENVINO_PROJECT>/bin/intel64/basicvsr_sample -model_path=<IR model path(.xml)> -extension=<PATH_TO_OPENVINO_PROJECT>/bin/intel64/lib/libcustom_extension.so -data_path=<Directory path including input frames> -nif=<Number of input frames> -device=CPU -save_predictions -save_path=<Directory path to save results> -cldnn_config=<PATH_TO_OPENVINO_PROJECT>/flow_warp_custom_op/flow_warp.xml

# Run the inference evaluation on GPU
<PATH_TO_OPENVINO_PROJECT>/bin/intel64/basicvsr_sample -model_path=<IR model path(.xml)> -extension=<PATH_TO_OPENVINO_PROJECT>/bin/intel64/lib/libcustom_extension.so -data_path=<Directory path including input frames> -nif=<Number of input frames> -device=GPU -save_predictions -save_path=<Directory path to save results> -cldnn_config=<PATH_TO_OPENVINO_PROJECT>/flow_warp_custom_op/flow_warp.xml
```


Empty file.
