Comments:
Hi, thanks for the insightful tutorial once again. I have two questions. First, I tried to replicate the same training with the alpaca dataset, but this time I used 100, 20, and 10 images for the train, test, and validation sets respectively. When I tried predicting on a short video containing alpacas, I got the message "0: 384x640 (no detections), 188.8ms
Speed: 3.3ms preprocess, 188.8ms inference, 3.5ms postprocess per image at shape (1, 3, 640, 640)" over multiple iterations. No alpaca was detected in the one-minute video clip. I would like to know the likely cause of this.
Secondly, I am working on a traffic violation detection project that requires me to annotate road stop-lines. I tried using the rectangle tool in CVAT, but it wasn't capturing the stop-lines effectively, because their shape can't easily be captured with a rectangle. So I opted for the polygon tool. Unfortunately, when I finished my annotation and exported in YOLO format, there wasn't any annotation data present in the export. I noticed that I could export in "CVAT for video" format, but not in YOLO format. Could you please suggest ways to fix these issues? Thanks in anticipation of your response.
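(As far as I know, CVAT's plain YOLO export only covers rectangle shapes, which is why the polygon annotations come out empty. One workaround is to export in a format that keeps polygons, such as "CVAT for images" XML or COCO, and convert each polygon to a YOLO detection line yourself. A rough sketch of that conversion, with all names hypothetical:)

```python
def polygon_to_yolo(points, img_w, img_h, class_id=0):
    """Convert polygon vertices (pixel coordinates) to a YOLO-format
    detection line: 'class x_center y_center width height', normalized
    to [0, 1] by the image size. The polygon is reduced to its
    axis-aligned bounding box, which is what YOLO detection expects."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# e.g. a stop-line annotated as a thin quadrilateral in a 640x480 image
print(polygon_to_yolo([(100, 300), (540, 310), (540, 330), (100, 320)], 640, 480))
# -> 0 0.500000 0.656250 0.687500 0.062500
```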
Good day, can this be used with a user interface? How do I integrate it with tkinter?
You just gave me a comprehensive insight into image annotation. Thanks
I have followed all the steps, but when I run the code to identify an object, it does not work. There are no green boxes identifying the objects. The code just gives the same video as the output. Where am I going wrong?
Been working on something similar for a hackathon in my freshman year. My first thought was to build a CNN from scratch (with no prior knowledge of computer vision), then I came across this video. You are a lifesaver, man.
I don't have the config.yaml file. Do we create it ourselves?
The video is good and simple EXCEPT FOR the fact that you used the same data for train and validation. Don't do that! That is arguably misleading. It wouldn't be that hard to teach your audience what the point of the train, test, and val splits is, and using the same data for train and val is very misleading and confusing, since it doesn't really give accurate results.
I am having an issue where the video doesn't open for me to test. In my output I just see "no detection at ......." etc. Not sure what I'm doing wrong.
I don't understand which IDE you use to work on your Python code. Where can I download it?
Thank you for the detailed video!
ОтветитьI’m having troubles compiling since the syntax error says “config.yaml train: key missing” I made config look exactly like yours and use the absolute path and relative paths like you said.
I’ve also noticed that when I deliberately made different config.yaml incorrect, the same error pops up so it might not just be the “train” key that’s incorrect. What is your thoughts on this?
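(For reference, a minimal config.yaml in the layout Ultralytics expects; the paths and class name below are placeholders, not the ones from the video. A "train: key missing"-style error for an otherwise correct file is often a YAML syntax problem, e.g. tabs instead of spaces, since YAML forbids tabs for indentation.)

```yaml
# hypothetical paths -- adjust to your own dataset layout
path: /absolute/path/to/data   # dataset root
train: images/train            # relative to 'path'
val: images/val

names:
  0: alpaca
```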
Can I know the source you downloaded the image dataset from, other than the one you mentioned in the video? I want to train the model to detect ash trees, so any suggestions would be of great help.
Thank you
Where can I get the full source code for Visual Studio?
Awesome video, thanks for taking the time to make it! Would you be able to suggest an interesting way to extend the YOLO algorithm? I am researching ideas for a Masters project in AI…
Bro, can you do one using YOLO-NAS?
So I've followed the tutorial and it's really great, but I'm a beginner, so this is probably a stupid question. I only used 10 images of alpacas to train YOLO, and the confusion matrix shows no blue at all for the alpaca class; it only shows blue and 1.00 on the background. Does that mean YOLO still cannot detect alpacas? If so, is the only way to fix it to use more alpaca images for training?
Thank you
Thank you for this amazing video! You're gonna be gaining a lot of subs and likes at this rate :) <3 !!! By the way, is there a tutorial for the last part of this video, i.e. the Python script that loads the trained YOLO model for testing on a specific mp4 video?
My question is: is there a way to store the information? Like, whatever it detects in the picture, putting it in an array so it can be used afterwards (in this case something like array1 = [alpaca, squirrel], and then using this array to process some information). Also, a second question: how do I save the model from this app if I am using Anaconda, so that it can be used in an app that is yet to be created?
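(For the first question: the detected class names can be pulled out of the prediction results into a plain Python list. A rough sketch; `results` and `model` are assumed to come from the usual ultralytics predict call, and the small helper below is my own, not from the video:)

```python
def collect_labels(class_ids, names):
    """Map numeric class ids to their string labels using the
    model's id -> name dict (model.names in ultralytics)."""
    return [names[int(i)] for i in class_ids]

# with ultralytics this would look roughly like:
#   results = model(frame)
#   labels = collect_labels((b.cls for b in results[0].boxes), model.names)
print(collect_labels([0, 1], {0: 'alpaca', 1: 'squirrel'}))  # ['alpaca', 'squirrel']
```

(For the second question: by default the trained weights are written under runs/detect/train/weights/best.pt, and that file can be reloaded in any environment, Anaconda included, with YOLO('path/to/best.pt').)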
Complete waste of time
Hi! A question: how can I download only the alpaca class with its annotations, and not all the classes? Regards!
Can I use 3D models for education and then work with photo recognition?
I mean, for multiple objects it is much simpler to find a 3D model than lots of images.
Very good, thank you very much
Thank you for the great video! Would love for you to dive into the fine-tuning and optimisation of this model. Do you have anything like that? For example, improving results using statistics of the ground-truth images, hyperparameter tuning, etc...
Thanks for this comprehensive tutorial. What if the goal is to add an additional class to the ones already available in YOLOv8? In particular, the annotation tool starts labeling from class number 0, but class 0 is already taken in YOLOv8's class list. Does CVAT allow labeling to start from the last index of the COCO dataset, so that a new class can be added to the existing YOLOv8 ones?
You didn't mention how to configure config.yaml, or whether there is a data.yaml.
I need your help.
Sir, we want to get that .yaml file.
Delicious!
How can we evaluate this model?
Thank you for the amazing help. I just have one question involving the car and license plate detection model you used in your video "Automatic number plate recognition with Python...". I am using the same dataset you used in that video, but the dataset directory contains "test", "train", and "valid". Does this mean I still have to annotate the images myself, or has that step already been completed? Thank you!
Hello sir, I want to ask about the dataset. If I have a large dataset, for example about 14 GB, is there another way besides uploading it to Google Drive? My Google Drive is almost full. Thank you.
Thanks!
ОтветитьThis is a great tutorial, thank you!! :)
I followed all the steps in the video, but I did not see any object detected, even with 100 epochs. Am I missing something here? Or are there any rough assumptions about how much labeled data we need in order to train the model? My case is object detection of cars.
At what minute do you talk about how to export the model so that it's usable, as in the llamas video?
Thanks !!!!!
Is there a way to write some code, like at the end of the video, to say: if the model finds the alpaca, do this; if it doesn't find it, do that, but in real time? Is there a way to retrieve a boolean value for whether the object was found or not?
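(Each prediction result carries the boxes found in that frame, so a per-frame boolean is just a length check. A minimal sketch; the helper name is mine, and `results` is assumed to come from the usual ultralytics call on each frame:)

```python
def object_found(boxes):
    """True when a frame's result contains at least one detection.
    'boxes' would be results[0].boxes in the ultralytics loop."""
    return len(boxes) > 0

# real-time use would look roughly like:
#   results = model(frame)
#   if object_found(results[0].boxes):
#       ...  # alpaca found: do this
#   else:
#       ...  # not found: do that
print(object_found([]))        # False
print(object_found(['box']))   # True
```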
Hi Felipe, I have run into a problem: my code compiled successfully, but unfortunately it isn't creating the runs folder that leads to the results. Please help me out.
How do I overcome the "IndexError: list index out of range" during training? Please tell me as soon as possible 🙏
Thank you very much for your tutorial. I have one question: how do I test YOLOv8 on Google Colab?
Hi Felipe, I am following your steps to train a model with my own dataset, but to get decent results I need many epochs and to train with at least yolov8m, and there are a lot of images, so it takes a very long time. I have read that I can use the GPU (with PyTorch and CUDA) to run this training, but I get this error:
RuntimeError:
Attempt to start a new process before the current process
has finished its bootstrapping phase.
This probably means that you are on Windows and you have
forgotten to use the proper idiom in the main module:
if __name__ == '__main__':
    freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce a Windows executable.
Could you add the lines of code needed to run the code you used on the GPU, so this error doesn't appear?
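(That RuntimeError is Windows-specific: the training spawns worker processes, and on Windows the script's entry point must be protected by the standard `__main__` guard, exactly as the error message says. A minimal sketch of the shape the script needs; the training call is shown in comments so the sketch stands on its own:)

```python
import multiprocessing

def main():
    # On Windows, any code that spawns worker processes must be reached
    # only from inside the guard below. The GPU training call would go here:
    #   from ultralytics import YOLO
    #   model = YOLO('yolov8n.pt')
    #   model.train(data='config.yaml', epochs=100, device=0)  # device=0 -> first GPU
    pass

if __name__ == '__main__':
    multiprocessing.freeze_support()  # only matters for frozen .exe builds
    main()
```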
I don't know how he got the videos and how he's able to apply the bounding boxes/predictions on them.
Hi Felipe, thank you very much for your tutorial. May I ask you a little question, please? I also trained my model for 100 epochs, but when I want to test the video, I get the following error:
AttributeError:
'Results' object has no attribute 'names'. Valid 'Results' object attributes and properties are:
Attributes:
boxes (Boxes, optional): A Boxes object containing the detection bounding boxes.
masks (Masks, optional): A Masks object containing the detection masks.
probs (torch.Tensor, optional): A tensor containing the detection class probabilities.
orig_shape (tuple, optional): Original image size.
Could you please help me with this? Thank you.
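(This usually means the installed ultralytics version predates the `names` attribute on `Results`; the id-to-label dict still exists, but on the model object as `model.names`. A small defensive helper, my own sketch rather than code from the video, that works either way:)

```python
def resolve_names(result, model):
    """Return the class-id -> label dict, preferring result.names
    (newer ultralytics Results) and falling back to model.names
    (older versions, where Results has no such attribute)."""
    names = getattr(result, 'names', None)
    return names if names else model.names

# minimal stand-ins to show the fallback behaviour
class OldResult: pass            # no .names attribute, like the error above
class Model: names = {0: 'alpaca'}

print(resolve_names(OldResult(), Model))  # {0: 'alpaca'}
```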
Sir, can you suggest where I can download medical images with reports? Thank you.
Thanks man
Hi Felipe! Great video! You've earned a new subscriber.
By the way, I was trying to run your video detection code and got a little confused about which file I should use if I want to use the custom-trained model. I've tried to run the code, but it gets no detections. During the training phase it showed great results, with 84% accuracy. Could you help me, please?