Model conversion

1. Model conversion
   1.1. CLI: pt → onnx
   1.2. Python: pt → onnx
2. Model prediction
   CLI usage
References
According to the benchmark results for the different export formats published by the Ultralytics team, inference performance is best with TensorRT!
The first time you run YOLO11's export mode, some dependencies are installed automatically; just wait for the installation to finish.
Convert the PyTorch format model to ONNX and NCNN:
cd /ultralytics/ultralytics
yolo export model=yolo11n.pt format=onnx dynamic=True  # dynamic=True enables dynamic axes, so the exported model accepts inputs of varying shape (e.g. different batch sizes) and avoids static-shape inference errors
# yolo export model=yolo11n-seg.pt format=onnx dynamic=True
# yolo export model=yolo11n-pose.pt format=onnx dynamic=True
# yolo export model=yolo11n-cls.pt format=onnx dynamic=True
# yolo export model=yolo11n-obb.pt format=onnx dynamic=True
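The five export commands above differ only in the model file, so they can be scripted. A minimal sketch, assuming the `yolo` CLI is on PATH; the helper name `build_export_cmd` and the model list are illustrative, not part of the Ultralytics API:

```python
# Sketch: build the `yolo export` command line for each YOLO11 task variant.
MODELS = [
    "yolo11n.pt",      # detection
    "yolo11n-seg.pt",  # segmentation
    "yolo11n-pose.pt", # pose estimation
    "yolo11n-cls.pt",  # classification
    "yolo11n-obb.pt",  # oriented bounding boxes
]

def build_export_cmd(model: str, fmt: str = "onnx", dynamic: bool = True) -> list[str]:
    """Return the argv list for one export, ready for subprocess.run()."""
    return ["yolo", "export", f"model={model}", f"format={fmt}", f"dynamic={dynamic}"]

if __name__ == "__main__":
    import subprocess
    for m in MODELS:
        cmd = build_export_cmd(m)
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually run each export
```

Keeping the command as an argv list (instead of one shell string) avoids quoting issues when a model path contains spaces.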
Alternatively, convert the pt model to ONNX from Python:
cd /ultralytics/ultralytics/yahboom_demo
python3 model_pt_onnx.py
from ultralytics import YOLO
# Load a YOLO11n PyTorch model
model = YOLO("/ultralytics/ultralytics/yolo11n.pt")
# model = YOLO("/ultralytics/ultralytics/yolo11n-seg.pt")
# model = YOLO("/ultralytics/ultralytics/yolo11n-pose.pt")
# model = YOLO("/ultralytics/ultralytics/yolo11n-cls.pt")
# model = YOLO("/ultralytics/ultralytics/yolo11n-obb.pt")
# Export the model to ONNX format
model.export(format="onnx")  # This will create 'yolo11n.onnx' in the same directory
Note: The exported model file is saved in the same directory as the source .pt model.
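As a sanity check, the expected output location can be derived from the weights path. A minimal sketch; `expected_onnx_path` is an illustrative helper, not part of the Ultralytics API (`model.export()` itself should also return the exported file's path):

```python
from pathlib import Path

def expected_onnx_path(pt_path: str) -> Path:
    # The exported file keeps the weight file's name, with the suffix
    # changed to .onnx, in the same directory as the .pt file.
    return Path(pt_path).with_suffix(".onnx")

print(expected_onnx_path("/ultralytics/ultralytics/yolo11n.pt"))
# → /ultralytics/ultralytics/yolo11n.onnx
```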
cd /ultralytics/ultralytics
yolo predict model=yolo11n.onnx source=0 save=False show=True  # source=0 is the first camera; show=True opens a preview window
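The `yolo` CLI takes its settings as key=value overrides, and bare boolean flags such as `show` are treated as True. The sketch below mimics that parsing to show how the command line above maps to option values; it is an illustrative toy, not the actual Ultralytics parser, and `BOOL_KEYS` is an assumed subset of the boolean options:

```python
# Illustrative sketch of yolo-CLI-style key=value override parsing.
BOOL_KEYS = {"save", "show", "dynamic"}  # assumed subset of boolean options

def parse_overrides(args: list[str]) -> dict:
    """Map ['model=x.onnx', 'source=0', 'save=False', 'show'] to a dict."""
    out = {}
    for a in args:
        if "=" in a:
            key, value = a.split("=", 1)
            if value in ("True", "False"):
                out[key] = (value == "True")
            elif value.isdigit():
                out[key] = int(value)
            else:
                out[key] = value
        elif a in BOOL_KEYS:
            out[a] = True  # a bare boolean flag means True
        else:
            out[a] = a  # task/mode words like 'predict' pass through
    return out

print(parse_overrides(["model=yolo11n.onnx", "source=0", "save=False", "show"]))
# → {'model': 'yolo11n.onnx', 'source': 0, 'save': False, 'show': True}
```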