
MMDeploy ONNX Runtime Tutorial

Based on the official tutorial.

A From-scratch Example

Below is an example of how to deploy and run inference on a Faster R-CNN model from MMDetection, from scratch.

step1: Create a virtual environment and install MMDetection

Run the following commands in an Anaconda environment to install MMDetection.

conda create -n openmmlab python=3.7 -y

conda activate openmmlab

conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y

# install mmcv-full
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html

# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -e .
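
As a quick sanity check (a minimal sketch; run it inside the openmmlab environment), you can confirm the installed versions match the ones above:

# print the installed versions; expect torch 1.8.0 and mmcv 1.4.0
import torch, mmcv, mmdet
print(torch.__version__, mmcv.__version__, mmdet.__version__)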

step2: Download the Faster R-CNN checkpoint trained with MMDetection

Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
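
If you prefer to script the download, here is a minimal Python sketch. The URL is an assumption based on the usual MMDetection model-zoo naming for this checkpoint, so verify it against the model zoo page before relying on it:

# fetch the Faster R-CNN checkpoint into {MMDET_ROOT}/checkpoints
import os
import urllib.request

url = ("https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/"
       "faster_rcnn_r50_fpn_1x_coco/"
       "faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth")  # assumed model-zoo URL
os.makedirs("checkpoints", exist_ok=True)  # run this from {MMDET_ROOT}
urllib.request.urlretrieve(url, os.path.join("checkpoints", url.rsplit("/", 1)[-1]))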

step3: Install MMDeploy and ONNX Runtime

step3-1: Install MMDeploy

Run the following commands in an Anaconda environment to install MMDeploy.

conda activate openmmlab

git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive

pip install -e .  # install MMDeploy
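
To confirm the editable install is importable, a one-line sanity check:

import mmdeploy
print(mmdeploy.__version__)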

step3-2a: Install onnxruntime

Once MMDeploy is installed, we need to choose an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to install ONNX Runtime:

pip install onnxruntime==1.8.1
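
A quick way to confirm the package works and to see which execution providers are available (on a CPU-only machine you should at least get CPUExecutionProvider):

import onnxruntime as ort
print(ort.__version__)                # expect 1.8.1
print(ort.get_available_providers())  # e.g. ['CPUExecutionProvider']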

Then download the ONNX Runtime library to build the MMDeploy plugins for ONNX Runtime:

step3-2b: Build the onnxruntime plugins (needed for model conversion)
wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH  # these two exports can also go into ~/.bashrc
cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build
# build ONNXRuntime custom ops
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)
step3-2c: Build the MMDeploy SDK (needed when using the C API)
# build MMDeploy SDK
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install
# A concrete example of building the MMDeploy SDK
# (the OpenCV and spdlog below were installed via apt-get)
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
      -Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..
# ${MMDEPLOY_DIR}, ${MMDET_DIR} and ${ONNXRUNTIME_DIR} can all be set in
# ~/.bashrc; run `source ~/.bashrc` afterwards to apply them
Extra: verify that the backend and the plugins were installed successfully
python ${MMDEPLOY_DIR}/tools/check_env.py

step4: Model Conversion

Once we have installed MMDetection, MMDeploy and ONNX Runtime, and built the plugins for ONNX Runtime, we can convert the Faster R-CNN checkpoint to a .onnx model file that ONNX Runtime can load. Run the following commands to use the deploy tools:

# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}
# If you do not know the paths, just type `pip show mmdeploy` and `pip show mmdet` in your console.
python ${MMDEPLOY_DIR}/tools/deploy.py \
    ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
    ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    ${MMDET_DIR}/demo/demo.jpg \
    --work-dir work_dirs \
    --device cpu \
    --show \
    --dump-info
# --work-dir: directory where the converted model is saved
# --show: display the backend inference result next to the original PyTorch result
# --dump-info: dump the meta files needed by the SDK
# Notes:
# ${MMDEPLOY_DIR} and ${MMDET_DIR} have already been set in ~/.bashrc
# Once converted, the model can also be run through the Python API

Example: Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.
from mmdeploy.apis import inference_model
# model_cfg / deploy_cfg are the two config files used during conversion;
# backend_files is the converted model, e.g. ['work_dirs/end2end.onnx']
result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
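
For instance, filled in with the files produced by the steps above (a sketch; the absolute paths are placeholders you need to adapt to your layout):

from mmdeploy.apis import inference_model

result = inference_model(
    model_cfg='/path/to/mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py',
    deploy_cfg='/path/to/mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    backend_files=['work_dirs/end2end.onnx'],
    img='/path/to/mmdetection/demo/demo.jpg',
    device='cpu')
print(result)  # detections in MMDetection's result format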

If the script runs successfully, two images will be displayed on the screen one by one. The first is the inference result of ONNX Runtime and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx and three JSON files (SDK config files) will be generated in the work directory work_dirs.
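
If you want to inspect end2end.onnx with ONNX Runtime directly, note that the exported graph may contain MMDeploy custom ops, so the plugin library built in step3-2b has to be registered first. A minimal sketch, where the library path is an assumption based on the default build layout:

import onnxruntime as ort

so = ort.SessionOptions()
# register the custom-op plugin from step3-2b (path assumed; check your build/lib)
so.register_custom_ops_library("/path/to/mmdeploy/build/lib/libmmdeploy_onnxruntime_ops.so")
sess = ort.InferenceSession("work_dirs/end2end.onnx", so)
print([i.name for i in sess.get_inputs()])   # input tensor names
print([o.name for o in sess.get_outputs()])  # output tensor names, e.g. dets / labels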

step5: Run MMDeploy SDK demo

After model conversion, the SDK model is saved in the directory ${work_dir}.
Here is a recipe for building and running the object detection demo.

cd build/install/example
# path to the onnxruntime libraries
export LD_LIBRARY_PATH=/path/to/onnxruntime/lib
# Example: export LD_LIBRARY_PATH=/home/zranguai/Deploy/Backend/ONNXRuntime/onnxruntime-linux-x64-1.8.1/lib
mkdir -p build && cd build
cmake -DOpenCV_DIR=path/to/OpenCV/lib/cmake/OpenCV \
      -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection
# Example:
# cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
#       -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
# suppress verbose logs
export SPDLOG_LEVEL=warn
# running the object detection example
./object_detection cpu ${work_dirs} ${path/to/an/image}
# Example: ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg

If the demo runs successfully, an image named "output_detection.png" should be generated, showing the detected objects.
