Object Counting - Ultralytics YOLOv8 Docs #6738
Replies: 33 comments 136 replies
-
Even after installing ultralytics with "pip install ultralytics" I get the error ModuleNotFoundError: No module named 'ultralytics.solutions'. Can you help? Thanks
-
How do you save the output as an mp4 file?
-
How do I select a particular object class (e.g. person) to count?
-
How do you set the parameters of the tracker? E.g. the threshold level for ByteTrack.
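Ultralytics lets you point `model.track` at a custom tracker YAML. The sketch below writes one using the keys found in the bundled bytetrack.yaml; the values shown are defaults from recent releases, so verify them against the copy shipped with your version.

```python
from pathlib import Path

# Keys mirror the bytetrack.yaml bundled with ultralytics; tweak the
# thresholds to taste, then pass the file via the tracker= argument.
custom_cfg = """\
tracker_type: bytetrack
track_high_thresh: 0.5
track_low_thresh: 0.1
new_track_thresh: 0.6
track_buffer: 30
match_thresh: 0.8
"""
Path("custom_bytetrack.yaml").write_text(custom_cfg)

# Then: model.track(im0, persist=True, tracker="custom_bytetrack.yaml")
```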
-
What do INCOUNT and OUTCOUNT mean? I have fish swimming in one direction, from right to left, but I still see figures in both INCOUNT and OUTCOUNT.
-
Somehow InCount and OutCount both have values even though the fish were all swimming in one direction.
-
import cv2
from ultralytics import YOLO
from ultralytics.solutions import object_counter

# Load the YOLO model
model = YOLO("yolov8m.pt")

# Path to the video file
video_file = "video.mp4"

# Open the video file
cap = cv2.VideoCapture(video_file)

# Check if the video file is opened successfully
if not cap.isOpened():
    print("Error: Couldn't open the video file")
    exit(1)

# Define the region of interest (ROI)
roi_start_point = (820, 800)
roi_end_point = (1090, 1000)
region_points = [(820, 900), (1090, 800), (1090, 800), (820, 900)]

# Initialize video writer for output
video_writer = cv2.VideoWriter("object_counting.mp4",
                               cv2.VideoWriter_fourcc(*'mp4v'),
                               int(cap.get(cv2.CAP_PROP_FPS)),
                               (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))))

# Initialize the object counter
counter = object_counter.ObjectCounter()
counter.set_args(view_img=True,
                 reg_pts=region_points,
                 classes_names=model.names,
                 draw_tracks=True)

while True:
    # Read a frame from the video
    success, im0 = cap.read()
    if not success:
        print("Finished reading the video")
        break

    # Perform object tracking with YOLO
    tracks = model.track(im0, persist=True, show=False)

    # Start counting objects in the frame
    im0 = counter.start_counting(im0, tracks)

    # Draw ROI rectangle on the frame
    cv2.rectangle(im0, roi_start_point, roi_end_point, (0, 255, 0), 2)

    # Check if the frame is not None before processing
    if im0 is not None:
        roi = im0[roi_start_point[1]:roi_end_point[1], roi_start_point[0]:roi_end_point[0]]
        gray_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray_roi, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        # Count objects within the defined area
        for contour in contours:
            area = cv2.contourArea(contour)
            if area > 100:
                if roi_end_point[1] < contour[0][0][1] < roi_start_point[1]:
                    counter.count(contour)
                    print("object count number:", counter.get_counts())

    # Draw line on the frame
    cv2.line(im0, (820, region_points[0][1]), (1090, region_points[1][1]), (0, 0, 255), 2)
    cv2.imshow("Video", im0)

    # Exit loop if 'q' is pressed
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release video capture and destroy windows
cap.release()
cv2.destroyAllWindows()

This Python code does not perform counting! Will you add options like line_position to set_args? Can you help me?
-
How do I access in_count and out_count values? Thanks.
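A hedged sketch: in the ultralytics releases current at the time of this thread, the running totals live on the counter object itself as `in_counts` and `out_counts`. Attribute names vary across versions, hence the defensive `getattr`; inspect `vars(counter)` if yours differ.

```python
def read_counts(counter):
    """Return (in, out) totals from an ObjectCounter-like object.

    The attribute names in_counts / out_counts match older ultralytics
    releases; adjust them if your version exposes different names.
    """
    return getattr(counter, "in_counts", 0), getattr(counter, "out_counts", 0)
```

Call `read_counts(counter)` after (or inside) the processing loop.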
-
Hi, can I implement it on Android? The use case is counting palm fruit in a palm plantation where it is hard to find a signal. I have previously succeeded in creating an object detection model using yolov8s and exporting it to ONNX and then to NCNN.
-
Hi, even though I have cloned the ultralytics repository and downloaded the setup.py file, I couldn't import ultralytics.solutions. When I write 'from ultralytics.solutions' it says there is no module named solutions; however, the other modules (assets, cfg, data, engine, hub, models, nn, trackers and utils) are shown.
-
@glenn-jocher and @RizwanMunawar.
-
How can I get an image of the frame whenever a person is going in or out? I want the frame saved in one folder when a person is entering, and in another folder when they are leaving. @RizwanMunawar
-
Issue: The current implementation uses object tracking, but I want object counting without tracking.
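If per-frame counts are enough, you can skip the tracker entirely: run plain inference and count the boxes in each Results object. Note that without track identities, totals accumulated across frames will double-count the same object.

```python
def count_detections(result):
    """Number of detections in one frame's Results object.

    Works on anything exposing a .boxes sequence, e.g. the first element
    of model("frame.jpg", verbose=False) from ultralytics.
    """
    return len(result.boxes)
```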
-
Can we add more than one line for object counting?
-
Hello, as you said, an object passing the region from left to right or top to bottom will be counted as OUT. Can I restrict it so that only top-to-bottom movement counts as OUT?
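The built-in counter counts both directions, so restricting it means filtering crossings yourself. A minimal sketch of the test you would apply to each track's centroid, where `line_y` is the y coordinate of your counting line:

```python
def crossed_downward(prev_y, curr_y, line_y):
    """True only when a centroid moves from above the counting line to
    below it, so upward (bottom-to-top) crossings are ignored."""
    return prev_y < line_y <= curr_y
```

Keep the previous centroid per track id across frames and increment your own OUT counter only when this returns True.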
-
When I use a region I get an in-count and an out-count; however, I would like it to count the number of detections currently inside the region (e.g. 3 detections in one frame, 1 detection in the next frame), as shown in the Conveyor Belt Packets Counting Using Ultralytics YOLOv8 gif above. Could I have some advice regarding this?
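For a per-frame occupancy count like the conveyor-belt gif (rather than cumulative in/out totals), one sketch is to count how many detection-box centers fall inside the region on each frame:

```python
def in_region(boxes, region):
    """Count boxes whose center lies inside an axis-aligned region.

    boxes: iterable of (x1, y1, x2, y2); region: (x1, y1, x2, y2).
    """
    rx1, ry1, rx2, ry2 = region
    count = 0
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            count += 1
    return count
```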
-
I have established a zone at the bottom of the frame and implemented object counting to track interactions between fish and soil. Currently, the counting is based solely on the center of the detection box crossing the zone. I need assistance in modifying the counting mechanism to include instances where the fish's tail or mouth touches the soil. Can you provide guidance on adjusting the counting logic to account for these specific interactions?
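One way to broaden the trigger beyond the box center is to count any overlap between the detection box and the soil zone. This is a sketch: at bounding-box level, "tail or mouth" can only be approximated by the box edges; for the actual body parts you would need a pose or segmentation model.

```python
def box_touches_zone(box, zone):
    """True when the bounding box overlaps the zone at all, so a fish
    counts as interacting as soon as any part of its box reaches the
    soil, rather than waiting for the box center to cross."""
    bx1, by1, bx2, by2 = box
    zx1, zy1, zx2, zy2 = zone
    return bx1 < zx2 and bx2 > zx1 and by1 < zy2 and by2 > zy1
```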
-
Hello, I am trying to use this to count vehicles by class whenever they cross the line. Rather than a single in_count and out_count, is there a way to break the counts down by vehicle class and save them to a CSV file, or display the per-class counts on screen the way the in and out counts are shown? Thank you.
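Pending built-in support, one sketch is to accumulate a per-class dict yourself while processing and dump it to CSV at the end. The nested `{"IN": …, "OUT": …}` shape is an assumption about how you collect the counts, not a documented ultralytics structure.

```python
import csv

def dump_class_counts(counts, path="class_counts.csv"):
    """Write per-class crossing totals to a CSV file.

    counts: e.g. {"car": {"IN": 4, "OUT": 2}, "truck": {"IN": 1, "OUT": 0}}
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["class", "in", "out"])
        for name, c in sorted(counts.items()):
            writer.writerow([name, c.get("IN", 0), c.get("OUT", 0)])
```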
-
Hi, I'm getting the error ModuleNotFoundError: No module named 'ultralytics.solutions' when I run my file. Any help here, please?
-
@RizwanMunawar can you please give an estimate of how much time the team needs to implement multi-class counting? Or are there any alternatives for that?
-
I can't install "ultralytics.solutions" in VS Code. Please help.
-
Hey! I have the issue that as soon as I import ultralytics, cv2.imshow no longer works: it just blocks without anything happening. Under the hood, the check_imshow check cannot be passed, though it does not fail either.
-
How can we detect, track and count only persons and bicycles using YOLOv8?
-
Thank you Rizwan. This worked for me. If at the end I want to add the total persons and bicycles crossing the region of interest, I'll just add the in count and out count. How do I access those through im0?
…On Tue, Jan 23, 2024 at 2:36 AM Muhammad Rizwan Munawar < ***@***.***> wrote:
@Moomal22 <https://github.com/Moomal22> you can use the classes argument, i.e. the class number for a person is 0 and for a bicycle it's 1. The sample code is mentioned below.
from ultralytics import YOLO
from ultralytics.solutions import object_counter
import cv2

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
assert cap.isOpened(), "Error reading video file"
w, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH, cv2.CAP_PROP_FRAME_HEIGHT, cv2.CAP_PROP_FPS))

line_points = [(20, 400), (1080, 400)]  # line or region points
classes_to_count = [0, 1]  # person and bicycle classes for count

# Video writer
video_writer = cv2.VideoWriter("object_counting_output.avi",
                               cv2.VideoWriter_fourcc(*'mp4v'),
                               fps, (w, h))

# Init Object Counter
counter = object_counter.ObjectCounter()
counter.set_args(view_img=True,
                 reg_pts=line_points,
                 classes_names=model.names,
                 draw_tracks=True)

while cap.isOpened():
    success, im0 = cap.read()
    if not success:
        print("Video frame is empty or video processing has been successfully completed.")
        break
    tracks = model.track(im0, persist=True, show=False, classes=classes_to_count)
    im0 = counter.start_counting(im0, tracks)
    video_writer.write(im0)

cap.release()
video_writer.release()
cv2.destroyAllWindows()
I hope this helps! Feel free to let me know if you have any additional
questions.
Thanks and Regards
Ultralytics Team!
-
from ultralytics import YOLO
def video_detection(path_x):
cv2.destroyAllWindows()

Hey guys, I am using the code above to run my YOLOv8 model, but I am failing to implement it.
-
"0: 480x640 1 human being, 261.1ms" — those are the outputs printed when running a YOLOv8 model. Is there a way to extract them?
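Those "0: 480x640 1 person, 261.1ms" lines are just logging; the same information lives on the Results objects the call returns. A sketch of pulling it out, assuming the standard ultralytics Results attributes (`boxes`, `names`, and per-box `cls`/`conf`):

```python
def summarize(result):
    """Collect (class_name, confidence) pairs from a YOLOv8 Results object,
    i.e. the data behind log lines like '0: 480x640 1 person, 261.1ms'."""
    return [(result.names[int(box.cls)], float(box.conf)) for box in result.boxes]

# Usage, assuming ultralytics is installed:
# model = YOLO("yolov8n.pt")
# print(summarize(model("image.jpg", verbose=False)[0]))
```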
-
Hi,
-
Hi @glenn-jocher, suppose I select a set of hyperparameters to tune, such as activation function, momentum, learning rate, batch size, optimizer and num_layers. After selecting the best hyperparameters, I need to store them in a separate YAML file, e.g. best_hyperparameter.yaml, and pass it to model.train via the cfg attribute. But when I pass it, I get an error for some hyperparameters, like the activation function and num_layers. I saw some comments saying the activation function should be set in the yolov8n.yaml model configuration, e.g. activation: relu, and then the modified yolov8n.yaml should be loaded for further training. Can we pass it without that method? Also, how should I set the num_layers hyperparameter for model training?
-
Hey, generally, without saving the output to a file I could get around 6-7 fps more. The live preview cost another few fps.
-
Object Counting - Ultralytics YOLOv8 Docs
Object Counting Using Ultralytics YOLOv8
https://docs.ultralytics.com/guides/object-counting/