Comments:
Let me know what additions you would like added to the code and if you run into any issues. Cheers :)
Sorry sir, the yolov4.weights file gives an error.
What about when the same person or car comes back, will it be counted as new?
Hello, the Google Drive link doesn't work.
Hey, I would like to thank you very much for this video. If you see this comment, could you let me know if there is any way to run this directly from a Python script on live video?
@The AI Guy Thank you for your work. I followed your instructions and everything works as in your video, but I get neither detections nor tracking. The output .avi equals the input .avi, and when I add --info on the command line I can see only FPS, no detection & tracking info.🤔
Question: when I run your notebook it runs fine, but no boxes are added to the output video! The output is the same as the input (no boxes or labels :/).
ОтветитьYou have no idea how this video saved me, I cannot thank you enough
That's very cool. How can I get this to track soccer players in my video? And could it export the tracking data? Thx
Hey, I am unable to download the pre-trained yolov4.weights file. Please help me out, the link is not working.
Hey, the Google Drive link for the weights has expired. Is there an updated link anywhere?
ОтветитьHello
When I open yolo4.weight google drive link, l faced with 404 error, could you fix the link
Thanks!
I followed your instructions but the output video has no bounding boxes. Can you tell me why?
Thank you for an awesome, clear explanation of object tracking. If I convert the tiny YOLOv4 model to TensorFlow Lite, can I use the Coral USB Accelerator to speed up the inference? I just need a way to make a post-training quantized Edge TPU .tflite model.
Hey, great work with this. Can we use this for LiDAR data as well?
Hi, first of all thank you so much for putting up such a nice video. Could you tell me: if we are interested in tracking only a single person instead of everyone, can we do it?
Nice video!
I want to add a congestion measurement to this, do you know how?
I am not getting any bounding boxes
I am getting errors when I try to run the code (conda env create -f conda-gpu.yml):
ERROR conda.core.link:_execute(730): An error occurred while installing package 'defaults::qt-5.9.7-vc14h73c81de_0'.
Rolling back transaction: done
LinkError: post-link script failed for package defaults::qt-5.9.7-vc14h73c81de_0
location of failed script: C:\Users\User\anaconda3\envs\yolov4-gpu\Scripts\.qt-post-link.bat
Does anyone know how to solve this issue? Thank you
How can I do this with YOLOv5?
Let's focus on optimizing it to reach real time.
Thanks for your extremely helpful content. I have a question: is there a way to detect only one class using Colab?
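One common approach is to filter the detections by class name before handing them to the tracker; the repo shown in the video keeps an allowed_classes list in object_tracker.py for exactly this. Below is a minimal, framework-free sketch of that filtering step; the function name and the parallel-lists layout are my own assumptions, not the repo's exact API:

```python
def filter_detections(boxes, scores, class_names, allowed_classes):
    # Keep only detections whose class name is in allowed_classes,
    # preserving the pairing between box, score, and class.
    kept = [(b, s, c) for b, s, c in zip(boxes, scores, class_names)
            if c in allowed_classes]
    if not kept:
        return [], [], []
    fb, fs, fc = zip(*kept)
    return list(fb), list(fs), list(fc)
```

With allowed_classes set to {"person"}, every car/truck/etc. detection is dropped before tracking, so only people receive track IDs.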
Can I execute this code in Colab?
Great video!
Do you know if there's a way to get the mAP of this model? I can't find the code to save the detections to a file in order to run any mAP algorithms.
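mAP evaluation isn't built into the tracker script, but you can dump each frame's detections to text files and feed them to an external mAP tool. A sketch of just the formatting step, assuming the widely used "class confidence xmin ymin xmax ymax" one-line-per-detection layout (the function name and input tuple shape are hypothetical):

```python
def detections_to_lines(detections):
    # detections: list of (class_name, confidence, (xmin, ymin, xmax, ymax)).
    # Emit one "<class> <conf> <xmin> <ymin> <xmax> <ymax>" line per detection,
    # the per-image layout many open-source mAP evaluation scripts expect.
    return ["{} {:.4f} {} {} {} {}".format(cls, conf, *(int(v) for v in bbox))
            for cls, conf, bbox in detections]
```

Writing one such file per frame (named after the frame) gives you the "detection results" half; the ground-truth half comes from your annotations.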
Can we do this in Google Colab? My computer doesn't have a GPU, so I am facing difficulty.
How can we store the bounding box coordinates to use them as trajectory data? Please explain this as well.
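One way to build trajectory data is to append each track's box centre to a per-ID history every frame, then dump everything to CSV at the end. A sketch under the assumption that you can pull (track_id, bbox) pairs out of the tracker loop (DeepSORT tracks expose track.track_id and track.to_tlbr()); the helper names here are hypothetical:

```python
import csv
from collections import defaultdict

def update_trajectories(trajectories, frame_num, tracks):
    # tracks: list of (track_id, (xmin, ymin, xmax, ymax)) tuples.
    # Append the box centre to that track's trajectory for this frame.
    for track_id, (xmin, ymin, xmax, ymax) in tracks:
        cx = (xmin + xmax) / 2.0
        cy = (ymin + ymax) / 2.0
        trajectories[track_id].append((frame_num, cx, cy))

def save_trajectories(trajectories, path):
    # One CSV row per (track, frame) observation.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["track_id", "frame", "cx", "cy"])
        for track_id in sorted(trajectories):
            for frame_num, cx, cy in trajectories[track_id]:
                writer.writerow([track_id, frame_num, cx, cy])
```

Using a defaultdict(list) for trajectories means new track IDs need no special-casing.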
How do I create my own custom file? I need to do some downstream tasks with the tracking IDs and bounding boxes.
Hoping to see an additional feature that counts the number of cars in the video passing through the lane successfully.
Not getting any detections, bro... I tried the latest TensorFlow GPU version, i.e. 2.7, and also a few others like 2.5 and 2.3, but I'm getting no bounding boxes... can you guess what's wrong?
I'm trying to run the object_tracker script, but it gets stuck on frame #1 and doesn't move. It seems like a problem with the model conversion to TF.
Can someone point me in the right direction?
Can I implement it with YOLOv5 with custom classes?
That was very helpful! I know it prints the FPS for each frame. Is there an easy way to print the actual frame number from the original video?
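Yes: the script already loops over the capture once per frame, so you can keep an explicit counter alongside that loop (OpenCV also exposes the position directly via vid.get(cv2.CAP_PROP_POS_FRAMES)). A framework-free sketch of the counting pattern; read_frame and process are hypothetical stand-ins for cv2.VideoCapture.read() and the per-frame detection/tracking work:

```python
def run_with_frame_numbers(read_frame, process):
    # read_frame mimics cv2.VideoCapture.read(): returns (ok, frame).
    # process is called as process(frame_num, frame); frames number from 1.
    frame_num = 0
    while True:
        ok, frame = read_frame()
        if not ok:
            break
        frame_num += 1
        process(frame_num, frame)
    return frame_num
```

Printing frame_num next to the FPS line then ties each log entry back to the original video.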
If we can track objects, then we can also measure speed and such, right? How would you go about getting the (x, y) coordinates?
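The (x, y) coordinates fall out of the tracked bounding box: take the box centre each frame, and the centre's displacement between consecutive frames times the FPS gives a speed in pixels per second. A minimal sketch (converting pixels to real-world units like km/h would additionally need camera calibration or a known reference distance, which this deliberately skips):

```python
import math

def centroid(bbox):
    # Centre point of an (xmin, ymin, xmax, ymax) box.
    xmin, ymin, xmax, ymax = bbox
    return ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)

def pixel_speed(prev_bbox, curr_bbox, fps):
    # Displacement of the box centre between two consecutive frames,
    # scaled by FPS -> rough speed in pixels per second.
    x0, y0 = centroid(prev_bbox)
    x1, y1 = centroid(curr_bbox)
    return math.hypot(x1 - x0, y1 - y0) * fps
```

Because single-frame displacements are noisy, averaging the speed over a short window of frames usually gives a steadier estimate.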
ОтветитьWhen I try convert the model:
conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 4554552 into shape (1024,512,3,3)
What should I do?
Hello, I containerized the project. There is one error; I created an issue on GitHub.
Please do one for YOLOR, thank you.
To those for whom no bounding boxes appear in the video: create a new environment as he says at the beginning of the video and use that, rather than one you might have created earlier for some other purpose. This worked for me.
How can I use the code without CUDA, or does it specifically require CUDA to run?
Hey, I actually tried this out and the FPS turned out to be too low while rendering. I want to implement this on a real-time moving camera to detect other vehicles on the road. What might be the solution to the very low FPS I'm getting? (I tried the tiny weights too; they were very inaccurate, yet still not fast enough to be real-time.)
Great video, well-documented code and a sturdy implementation. 10/10
Does it work on only LiDAR data?
Hi, very nice tutorial. I was wondering if you could count the vehicles after tracking them.
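Counting after tracking is a common extension: pick a virtual counting line in the frame and count each track ID the first time its centroid crosses that line, which avoids double-counting the same vehicle. A minimal sketch with a horizontal line; LineCounter is a hypothetical helper, not part of the repo:

```python
def side_of_line(point, line_y):
    # -1 above the horizontal counting line, +1 on/below it.
    return -1 if point[1] < line_y else 1

class LineCounter:
    # Feed it (track_id, centroid) pairs each frame; a track is
    # counted once, the first time its centroid changes sides.
    def __init__(self, line_y):
        self.line_y = line_y
        self.last_side = {}
        self.counted = set()
        self.count = 0

    def update(self, track_id, point):
        side = side_of_line(point, self.line_y)
        prev = self.last_side.get(track_id)
        if prev is not None and prev != side and track_id not in self.counted:
            self.counted.add(track_id)
            self.count += 1
        self.last_side[track_id] = side
        return self.count
```

Tracking IDs are what make this reliable: raw per-frame detections near the line would be counted many times, but a DeepSORT track ID crosses only once.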
Thank you so much for this video. By the way, can I run this code on a Raspberry Pi with a camera attached?
Can you do a Google Colab version of this?
EDIT: Guess what, you clearly used AI to predict my question and answered it. Thank you!
Hello all, I need help: how may I run this object tracking on a webcam in Colab? Regards