Object Detection

1. Model Introduction
2. Start
2.1. Enter docker
2.2. Target prediction: image
    Effect preview
2.3. Target prediction: Video
    Effect preview
2.4. Target prediction: Real-time Detection
    Effect preview
References
This tutorial uses Python to demonstrate Ultralytics object detection on images, video, and a real-time camera feed.
1. Model Introduction

Object detection is a task that involves identifying the location and category of objects in an image or video stream.
The output of an object detector is a set of bounding boxes around the objects in the image, together with a class label and a confidence score for each box. Object detection is a good choice when you need to identify objects of interest in a scene but do not need to know their exact location or shape.
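To make that concrete, here is a minimal sketch of how those outputs are read with the Ultralytics Python API (the model and image paths are taken from the sample code later in this tutorial):

from ultralytics import YOLO

# Load the model used throughout this tutorial
model = YOLO("/ultralytics/ultralytics/yolo11n.pt")

# Run inference on a sample image
results = model("/ultralytics/ultralytics/assets/bus.jpg")

# Each detection is a bounding box plus a class label and a confidence score
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # box corners in pixels
    label = model.names[int(box.cls)]      # class label, e.g. "person"
    conf = float(box.conf)                 # confidence score between 0 and 1
    print(f"{label} ({conf:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")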
2. Start

2.1. Enter docker

Run YOLOv11's docker script:
sh ~/yolov11_dcoker.sh
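The exact contents of yolov11_dcoker.sh depend on your Yahboom image, but a launcher of this kind typically passes the host display and the USB camera through to the container. A rough, illustrative sketch only (the container image name and device paths here are assumptions, not the actual script):

# Illustrative sketch only -- the real yolov11_dcoker.sh may differ
xhost +local:root
docker run -it --rm \
    --runtime nvidia \
    --network host \
    --device /dev/video0 \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ultralytics/ultralytics:latest-jetson /bin/bash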
2.2. Target prediction: image

Use yolo11n.pt to predict the pictures that come with the ultralytics project.
Enter the code folder:
cd /ultralytics/ultralytics/yahboom_demo
Run the code:
python3 01.detection_image.py
Effect preview

The annotated image is saved to: /ultralytics/ultralytics/output/
1. View using jupyter lab
Open another terminal, enter the docker container, and use jupyter lab to view the image:
docker ps -a

docker exec -it be79bf10e970 /bin/bash   # Container ID needs to be modified according to the actual one you find
cd /ultralytics
jupyter lab --allow-root
Access directly through http://localhost:8080/ in the system browser:
http://localhost:8080/

2. Copy the file to the host machine for viewing
Enter the following command in the host terminal:
docker cp be79bf10e970:/ultralytics/ultralytics/output/ /home/jetson/ultralytics/ultralytics/   # Container ID needs to be modified according to the actual one you find

Sample code:
from ultralytics import YOLO

# Load a model
model = YOLO("/ultralytics/ultralytics/yolo11n.pt")

# Run batched inference on a list of images
results = model("/ultralytics/ultralytics/assets/bus.jpg")  # returns a list of Results objects

# Process results list
for result in results:
    boxes = result.boxes              # Boxes object for bounding box outputs
    # masks = result.masks            # Masks object for segmentation masks outputs
    # keypoints = result.keypoints    # Keypoints object for pose outputs
    # probs = result.probs            # Probs object for classification outputs
    # obb = result.obb                # Oriented boxes object for OBB outputs
    result.show()                     # display to screen
    result.save(filename="/ultralytics/ultralytics/output/bus_output.jpg")  # save to disk
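If you prefer a one-liner, the same prediction can also be run with the Ultralytics command-line interface (same model and image paths as above). Note that the CLI saves its annotated output under runs/detect/predict/ by default rather than the output/ folder used in this tutorial:

yolo detect predict model=/ultralytics/ultralytics/yolo11n.pt source=/ultralytics/ultralytics/assets/bus.jpg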
2.3. Target prediction: Video

Use yolo11n.pt to predict the video under the ultralytics project (not the video that comes with ultralytics).

Enter the code folder:
cd /ultralytics/ultralytics/yahboom_demo
Run the code:
python3 01.detection_video.py
Effect preview

The annotated video is saved to: /ultralytics/ultralytics/output/
The annotated video is displayed in real time while the code runs. To view the saved video later, refer to [2. Copy the file to the host machine for viewing] above.

Sample code:
import cv2
from ultralytics import YOLO

# Load the YOLO model
model = YOLO("/ultralytics/ultralytics/yolo11n.pt")

# Open the video file
video_path = "/ultralytics/ultralytics/videos/people_animals.mp4"
cap = cv2.VideoCapture(video_path)

# Get the video frame size and frame rate
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Define the codec and create a VideoWriter object to output the processed video
output_path = "/ultralytics/ultralytics/output/01.people_animals_output.mp4"
fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # You can use 'XVID' or 'mp4v' depending on your platform
out = cv2.VideoWriter(output_path, fourcc, fps, (frame_width, frame_height))

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Run YOLO inference on the frame
        results = model(frame)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Write the annotated frame to the output video file
        out.write(annotated_frame)

        # Display the annotated frame
        cv2.imshow("YOLO Inference", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture and writer objects, and close the display window
cap.release()
out.release()
cv2.destroyAllWindows()
2.4. Target prediction: Real-time Detection

Use yolo11n.pt to run prediction on the USB camera feed.

Enter the code folder:
cd /ultralytics/ultralytics/yahboom_demo
Run the code (click the preview window, then press the q key to terminate the program):
python3 01.detection_camera_usb.py
Effect preview

The annotated video is saved to: /ultralytics/ultralytics/output/
The camera feed with detections is displayed in real time while the code runs. To view the saved video later, refer to [2. Copy the file to the host machine for viewing] above.

Sample code:
import cv2
from ultralytics import YOLO

# Load the YOLO model
model = YOLO("/ultralytics/ultralytics/yolo11n.pt")

# Open the camera
cap = cv2.VideoCapture(0)

# Get the video frame size and frame rate
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30  # some USB cameras report 0 FPS; fall back to 30

# Define the codec and create a VideoWriter object to output the processed video
output_path = "/ultralytics/ultralytics/output/01.detection_camera_usb.mp4"
fourcc = cv2.VideoWriter_fourcc(*'mp4v')  # You can use 'XVID' or 'mp4v' depending on your platform
out = cv2.VideoWriter(output_path, fourcc, fps, (frame_width, frame_height))

# Loop through the camera frames
while cap.isOpened():
    # Read a frame from the camera
    success, frame = cap.read()
    if success:
        # Run YOLO inference on the frame
        results = model(frame)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Write the annotated frame to the output video file
        out.write(annotated_frame)

        # Display the annotated frame
        cv2.imshow("YOLO Inference", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the frame could not be read
        break

# Release the video capture and writer objects, and close the display window
cap.release()
out.release()
cv2.destroyAllWindows()
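For quick experiments, Ultralytics also provides a streaming predict API that drives the camera loop and the display window internally. A minimal sketch (unlike the script above, it does not save the output video; stop it with Ctrl+C in the terminal):

from ultralytics import YOLO

model = YOLO("/ultralytics/ultralytics/yolo11n.pt")

# stream=True returns a generator yielding one Results object per camera frame;
# show=True opens a window with the annotated frames
for result in model.predict(source=0, stream=True, show=True):
    print(f"{len(result.boxes)} objects detected in this frame")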