Comments:
Haven't finished the video yet, so I apologize if you already fixed this / went over it. But I noticed around the 9-minute mark we're told to use "writer.add_graph(model, example_data.reshape(-1, 28*28))", which works, but only if you're using the CPU, since example_data is on the CPU at that point (unless I did something wrong, which is very possible). I'm using a GPU, and all I needed to fix it was to change that to "writer.add_graph(model, example_data.reshape(-1, 28*28).to(device))" and boom, problem solved. Anyways, awesome tutorials!!!
This helped me a lot. Thanks for your kind explanation!
Hi Patrick, I followed this tutorial, but when I run the code and refresh the URL, TensorBoard is not showing any images for me. Nothing happens; it just shows "No dashboards are active for the current data set."
NOTE: I am running the program in Jupyter Notebook.
For the running accuracy computation, the formula should be correct_count / (100 * batch_size).
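To make the suggestion above concrete, here is a small plain-Python sketch of the difference (the batch size, logging interval, and per-batch counts are made up for illustration, not taken from the video):

```python
# Sketch of the running-accuracy fix: divide by the number of *examples*
# seen since the last log, not just the number of batches.
batch_size = 64
log_interval = 100  # how many batches between logging points

running_correct = 0
for _ in range(log_interval):
    correct_in_batch = 50  # pretend 50 of 64 predictions are right each batch
    running_correct += correct_in_batch

# dividing only by the batch count gives a meaningless "accuracy" of 50
wrong_acc = running_correct / log_interval
# dividing by the number of examples seen gives a proper ratio
running_acc = running_correct / (log_interval * batch_size)
print(running_acc)  # 0.78125
```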
German ingenuity again.
For the life of me I could not get this to work. I had 10 problems right out of the gate; fixing one problem caused another to get worse. TensorBoard sometimes worked and then would stop. I've never been this frustrated with computer issues before. I'm giving up.
Maybe a little late to the party. Nice tutorial. The code for deriving a PR curve is particularly helpful. However, there is something wrong with the results: a "perfect" PR curve (a step function at 1) makes no sense; the curve should fall off as it approaches 1. Secondly, if you had built in a global step over several evaluation runs, you could also have stepped through the different PR curves in TensorBoard, which is a nice way to see how the model learns.
Thank you so much for this helpful tutorial. 🍀🙏
Hello, why do you append the predicted data when the documentation says it needs to be the ground truth? I find that a bit confusing :(
Running tensorboard --logdir run is giving a syntax error. What to do??
Another great tutorial, thanks a lot! I have a small question: how can I clear TensorBoard?
At 10:22, should running_loss be running_loss += loss.item() * inputs.size(0), as in the transfer learning tutorial?
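For readers wondering why that weighting matters: loss.item() is the mean loss over the current batch, so multiplying by inputs.size(0) accumulates a per-example sum, which averages correctly even when the last batch is smaller. A minimal sketch with made-up numbers (not from the video):

```python
# Weighted vs. unweighted running loss when the last batch is smaller.
batch_losses = [1.0, 1.0, 2.0]   # per-batch *mean* losses (illustrative)
batch_sizes  = [64, 64, 32]      # last batch has fewer examples

# unweighted: a plain average of per-batch means over-weights the small batch
unweighted = sum(batch_losses) / len(batch_losses)   # 4/3 ≈ 1.333

# weighted, as in: running_loss += loss.item() * inputs.size(0)
running_loss = sum(l * n for l, n in zip(batch_losses, batch_sizes))
weighted = running_loss / sum(batch_sizes)           # 192 / 160 = 1.2
print(weighted)  # 1.2
```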
Very helpful! Thanks. Please keep uploading more PyTorch tutorials.
This was really helpful, thank you!
Sir, could you upload videos on working with audio datasets using PyTorch?
May I ask what your VS Code theme is? Thank you.
Nice tutorial, I just have one concern. Suppose your batch_size is 64; in that case you would have a total of 938 batches, with the first 937 batches having 64 examples and the last batch having 32 examples. If we specify (i+1) % 100 == 0, then we are computing the average loss and accuracy over 100 steps. But once the value of i exceeds 900, you accumulate the loss and correct predictions for the remaining 38 batches and then add them in the next epoch, when the step count becomes a multiple of 100 again (in this case 100). So, essentially, you would be computing the loss as [loss (38 steps from the last epoch) + loss (100 steps from the current epoch)] / 100, which inflates both the loss and the accuracy. Just wanted to highlight this. A good idea would be to add another variable called steps_seen, which is incremented every time a batch/step is processed and reset to 0 along with the running loss and correct predictions. That way, even when you compute the loss at a step that isn't an exact multiple of 100, you would still compute the loss and accuracy as [loss (38 steps from the previous epoch) + loss (100 steps from the current epoch)] / (38 + 100).
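The carry-over effect described above, and the proposed steps_seen fix, can be sketched in plain Python. The loss values are made up: every batch is assigned a loss of 1.0, so the true average is always 1.0, and any deviation is purely the accounting bug:

```python
# Compare dividing by a fixed interval of 100 vs. dividing by the actual
# number of accumulated batches (steps_seen), across an epoch boundary.
def first_log_of_second_epoch(batches_per_epoch=938, log_interval=100,
                              use_steps_seen=True):
    running_loss, steps_seen = 0.0, 0
    for epoch in range(2):
        for i in range(batches_per_epoch):
            running_loss += 1.0          # every batch has loss 1.0
            steps_seen += 1
            if (i + 1) % log_interval == 0:
                avg = running_loss / (steps_seen if use_steps_seen
                                      else log_interval)
                if epoch == 1:
                    # 38 leftover batches from epoch 0 + 100 new ones
                    return avg
                running_loss, steps_seen = 0.0, 0

print(first_log_of_second_epoch(use_steps_seen=False))  # 1.38 (inflated)
print(first_log_of_second_epoch(use_steps_seen=True))   # 1.0 (correct)
```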
Are you from Germany? :)
Very clear. Thanks
Great vid! I think you shouldn't have appended predicted to your labels, because that's not the ground truth (the correct label), it's the estimated/predicted label. That's why you get a perfect PR curve.
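A minimal pure-Python sketch of why this happens (the probabilities below are made up, and the thresholding here just mimics what TensorBoard's add_pr_curve does internally): if the "labels" you pass are the model's own thresholded predictions, the curve trivially looks perfect.

```python
# Precision/recall computed against predicted labels vs. true labels.
probs = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]            # predicted probabilities
true_labels = [1, 0, 1, 0, 1, 0]                   # actual ground truth
predicted = [1 if p >= 0.5 else 0 for p in probs]  # model's own predictions

def precision_recall(labels, probs, threshold):
    tp = sum(1 for l, p in zip(labels, probs) if p >= threshold and l == 1)
    fp = sum(1 for l, p in zip(labels, probs) if p >= threshold and l == 0)
    fn = sum(1 for l, p in zip(labels, probs) if p < threshold and l == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# against the model's *own* predictions, the 0.5 threshold looks perfect
print(precision_recall(predicted, probs, 0.5))     # (1.0, 1.0)
# against the *true* labels, precision and recall drop below 1, as expected
print(precision_recall(true_labels, probs, 0.5))
```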
Line 82 of your GitHub code: you probably should call .to(device) on the reshaped data?
Ответитьhello python engineer! i am facing an issue, I have successfully installed tensorboard and it is running fine as well, whenever I try to run my code file and then refresh the tensorboard browser but it doesn't show any thing like images and graphs etc, one thing is very strange it is not showing any error too. but why I am not getting all images and graphs on the tensorboard browser..
please give me any solution,
I am using your given code to practice
Excellent explanation! Extremely useful, thanks.
Hmmm, shouldn't the writer.close() on line 157 of the code be outside the for loop? What does writer.close() do, basically?
Hey Patrick, thanks for the great PyTorch series. I hope you can keep making it, and please send my special hugs to your single (and handsome) German friends... haha <3<3 ;)
Greetings from a funny Brazilian woman... :D
Hi sir, I have a problem with TensorBoard and PyTorch. The writer object can't be detected by TensorBoard; it shows "No dashboards are active for the current data set."
Thank you so much!! This is really amazing.
Very well done!
I am watching your videos to revise my knowledge. :D
Please make more videos on PyTorch concepts.
Hello Python Engineer, thank you for this video, I really found it helpful. I am having one challenge though: how can I run the visualization on a GPU server (NVIDIA GPU) that I want to use for my training?
Just finished the PyTorch playlist. Loved your content. Will you be making tutorials on RNNs and LSTMs with PyTorch?
Hello, there is one issue in the writer.add_graph line: example_data should have .to(device) applied. And I have a question about the use of torch.stack: is the aim of this operation to transform each batch from a list into a tensor? I'm a little confused.
Can you please make a tutorial on how to use Weights & Biases?