If you deploy with Caffe2 directly, you can simply export Detectron2 as a Caffe2 model and load it as-is. My purpose in exporting an ONNX model here is to prepare for the next step, deployment with TensorRT, and to verify that the upcoming modifications to the ONNX model are correct.

The problem with running inference on the ONNX model through the Caffe2 backend is that, by default, all operators are placed on either the CPU or the GPU uniformly; there is no distinction between operators that must run on the CPU and operators that must run on the GPU. At runtime, the following error appears:

RuntimeError: [enforce fail at operator.cc:274] op. Cannot create operator of type 'CollectRpnProposals' on the device 'CUDA'. Verify that implementation for the corresponding device exist. It might also happen if the binary is not linked with the operator implementation code. If Python frontend is used it might happen if dyndep.InitOpsLibrary call is missing. Operator def: input: "527" input: "528" input: "529" input: "530" input: "531" input: "532" input: "533" input: "534" input: "535" input: "536" output: "rpn_rois.1" name: "" type: "CollectRpnProposals" arg { name: "rpn_max_level" i: 6 } arg { name: "rpn_min_level" i: 2 } arg { name: "rpn_post_nms_topN" i: 1000 } device_option { device_type: 1 device_id: 0 }

Most of the operators in the model run on the GPU; only a few must run on the CPU. We can therefore iterate over all operators and set the device individually for those that have to run on the CPU.

The code is as follows:

import caffe2.python.onnx.backend as backend
import caffe2.python.onnx.backend_rep as backend_rep
import caffe2.proto.caffe2_pb2 as caffe2_pb2
import cv2
import numpy as np
import onnx


def reset_cpu_operator_node_device(net: caffe2_pb2.NetDef):
    """Force operators that only have CPU implementations onto the CPU."""
    cpu_device_option = backend.get_device_option(backend.Device("CPU"))
    for op in net.op:  # type: caffe2_pb2.OperatorDef
        # These Detectron/Caffe2 ops have no CUDA implementation.
        if op.type in ("CollectRpnProposals", "DistributeFpnProposals", "BBoxTransform", "BoxWithNMSLimit"):
            op.device_option.CopyFrom(cpu_device_option)


def inference(onnx_filename: str, image: np.ndarray) -> np.ndarray:
    model = onnx.load(onnx_filename)
    prepared_backend: backend_rep.Caffe2Rep = backend.prepare(model, device="CUDA:0")

    # After prepare(), every op defaults to CUDA; patch both nets.
    reset_cpu_operator_node_device(prepared_backend.init_net)
    reset_cpu_operator_node_device(prepared_backend.predict_net)

    # im_info = (height, width, scale); image is laid out as NCHW.
    im_info = np.array([image.shape[2], image.shape[3], 1], dtype=np.float32).reshape((1, 3))
    return prepared_backend.run({model.graph.input[0].name: image, model.graph.input[1].name: im_info})


def main():
    onnx_filename = "model.onnx"
    image_filename = "1000009.jpg"
    image = cv2.imread(image_filename, cv2.IMREAD_COLOR)  # type: np.ndarray
    # HWC (BGR) -> NCHW; assumes the image is already 1024x1024.
    image = image.transpose((2, 0, 1)).reshape((1, 3, 1024, 1024))
    data = inference(onnx_filename, image)
    print(data)


if __name__ == "__main__":
    main()
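The patching idea itself is independent of Caffe2. As a minimal sketch that runs without Caffe2 installed, here is the same traversal pattern applied to hypothetical stand-in classes (Op/Net below are illustrative, not the real caffe2_pb2 protobuf messages): walk the op list and override the device only for op types on the CPU-only list.

```python
from dataclasses import dataclass

# Stand-in device constants (the real values live in caffe2_pb2.DeviceTypeProto).
CPU, CUDA = 0, 1

# The four Detectron ops that must stay on the CPU.
CPU_ONLY_OPS = {"CollectRpnProposals", "DistributeFpnProposals",
                "BBoxTransform", "BoxWithNMSLimit"}


@dataclass
class Op:
    """Hypothetical stand-in for caffe2_pb2.OperatorDef."""
    type: str
    device_type: int = CUDA  # the exported net defaults every op to CUDA


@dataclass
class Net:
    """Hypothetical stand-in for caffe2_pb2.NetDef."""
    op: list


def patch_cpu_ops(net: Net) -> None:
    # Same traversal as reset_cpu_operator_node_device above.
    for op in net.op:
        if op.type in CPU_ONLY_OPS:
            op.device_type = CPU


net = Net(op=[Op("Conv"), Op("CollectRpnProposals"), Op("Relu"), Op("BoxWithNMSLimit")])
patch_cpu_ops(net)
print([(o.type, o.device_type) for o in net.op])
# -> [('Conv', 1), ('CollectRpnProposals', 0), ('Relu', 1), ('BoxWithNMSLimit', 0)]
```

The design point is that only the per-op device_option is touched, so the rest of the graph keeps running on the GPU and only the handful of CPU-only ops pay the device-transfer cost.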

