Comments:
Hey Josh!
Amazing videos, thanks a lot.
Would be great if you could cover time series data and algorithms like ARIMA and Holt-Winters.
Thanks😊
YOU ARE THE BEST TEACHER EVER, JOSH!! I wish you could feel the raw feeling we get when we watch your videos.
When I run the code it says: " 'BasicNN' object has no attribute 'w00' ". I installed the packages correctly and I also used your code from the description :(
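Hard to say without seeing your copy, but that error usually means the attribute never got created in __init__ (a skipped line or a typo when retyping). A minimal sketch of the pattern, keeping only w00 (the 1.7 value is from the video; everything else is trimmed away):

import torch
import torch.nn as nn

class BasicNN(nn.Module):
    def __init__(self):
        super().__init__()
        # every weight/bias used in forward() must be assigned here;
        # a missing or misspelled assignment is exactly what produces
        # "'BasicNN' object has no attribute 'w00'"
        self.w00 = nn.Parameter(torch.tensor(1.7), requires_grad=False)

model = BasicNN()
print(model.w00)  # should print the parameter instead of raising AttributeError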
I have lived long enough to watch videos and understand nothing about ML stuff, until I saw your videos. I truly wish you well <3
That Was Nice! Thank You
Another excellent video. One humble request: please make a video on Stable Diffusion models.
Just as a benchmark, if you wanted to optimize all the parameters again (instead of just the final bias) by making requires_grad=True for all parameters, how long would this take to solve? That would still be a trivial model to optimize right?
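In case it helps anyone trying this experiment, a minimal sketch of the setup (model and the learning rate here are assumptions, not the video's exact code):

# flip every parameter to trainable, not just final_bias
for p in model.parameters():
    p.requires_grad = True

# rebuild the optimizer so it tracks all of the now-trainable parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

With only a handful of parameters it should still be trivial: each epoch is dominated by the Python loop rather than the gradient math.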
Thank you :)
That's a really cool explanation! Please continue this PyTorch series, we really need it. BAM!
If anyone is interested, I coded this from the ground up using just NumPy. Would be happy to share my code with whoever wants it!
Amazing job! I plan to donate on your Patreon page. You were puzzled that we could use .backward on loss (or at least I was confused by this). I guess one explanation is that loss is defined in terms of output_i, and output_i is an instance of the model class, so it may make sense that we can access the backward attribute of loss. But I was, for the same reason, a bit surprised that we can subtract a scalar from output_i. One other question: wouldn't it be better to take the average of the total loss? Otherwise the condition that uses 0.0001 depends on the number of examples in the training set.
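On the averaging question: agreed. A small sketch of the idea, with the variable names assumed from the video's training loop (this goes inside the epoch loop):

# compare the mean loss, not the raw sum, so the 0.0001 stopping
# threshold no longer depends on the number of training examples
mean_loss = total_loss / len(inputs)
if mean_loss < 0.0001:
    print("Num steps: " + str(epoch))
    break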
Thanks for teaching in such a simple way!
But I'm a little bit confused about the code below:
total_loss += float(loss)
Since you accumulate the loss (and the implementation is inside the model),
won't this step double-count the loss? (Or is it just like a prefix sum...?)
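For anyone else wondering: I don't think anything is double-counted. Each pass through the inner loop recomputes loss from scratch for one example, and float(loss) just pulls out the number, so total_loss is a plain running sum (prefix-sum style). A sketch, with model, inputs, and labels assumed from the video:

total_loss = 0
for iteration in range(len(inputs)):
    output_i = model(inputs[iteration])        # fresh forward pass each time
    loss = (output_i - labels[iteration])**2   # loss for this one example only
    loss.backward()            # gradients accumulate across examples (intended)
    total_loss += float(loss)  # float() grabs just the value; nothing is counted twice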
I think there is a bug/flaw in this program:
I cannot get this code to backpropagate the entire neural network properly unless I move optimizer.step() and optimizer.zero_grad() inside the <for iteration in range(len(inputs)):> loop; as written, they sit only inside the <for epoch in range(100):> loop (sketched below). (Note: with SGD it takes ~500-1100 steps at a learning rate of 0.035 to consistently work with randomly generated weight values.)
With my fix, and using Adam instead of SGD with a learning rate of 0.02, I can do it in fewer steps (150), but also sometimes many more (2500). Not sure why that is.
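A sketch of that fix, with model, inputs, and labels assumed from the video. Moving the two optimizer calls into the inner loop turns this into per-example (stochastic) updates instead of one update per epoch:

for epoch in range(100):
    total_loss = 0
    for iteration in range(len(inputs)):
        output_i = model(inputs[iteration])
        loss = (output_i - labels[iteration])**2
        loss.backward()
        total_loss += float(loss)
        optimizer.step()       # update after every single example...
        optimizer.zero_grad()  # ...and clear the gradients before the next one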
Using your code, my last graph didn't match the first one; it matched the second one... good video, but yeah.
Please make an entire tutorial about the ins and outs of PyTorch!
Wow 😮 I didn't know I had to watch Neural Networks Part 2 before I can watch The StatQuest Introduction To PyTorch before I can watch the Introduction to coding neural networks with PyTorch and Lightning 🌩️ (it's something related to the cloud, I understand)
I am genuinely so happy to learn about this stuff with you, Josh ❤ I will go watch the other videos first and then I will backpropagate to this video...
loss.backward() is not working for me :-( Any ideas?
When I change w00 to -1.70 (i.e., put a minus in front of the number the video suggested) and set `requires_grad=True`, I thought it would optimize the value back to around +1.70, but it fails. Any ideas why that is?
What if we wanted to optimize all of the weights and biases?
I am no expert in PyTorch, but I think the optimizer object is tracking the gradient rather than the loss object each time you call .backward(). Just guessing.
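Close, I think, but the gradients actually live on the parameter tensors themselves rather than in the optimizer. A tiny sketch of the mechanics (final_bias assumed from the video's model):

loss.backward()               # writes d(loss)/d(parameter) into each parameter's .grad
print(model.final_bias.grad)  # the gradient is stored on the tensor itself
optimizer.step()              # the optimizer reads each .grad and updates the parameter
optimizer.zero_grad()         # clears the .grad fields so the next backward() starts fresh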
Thank you so much for explaining it so clearly. If I didn't click the thumbs-up button, I would feel guilty.
Thanks for the great video. Does this apply directly to GNNs? Can I apply it there?
This series about neural networks and deep learning is very well explained. Thank you soooooooo much.
Another charming, fully informative masterpiece.
Absolutely brilliant!
Hi Josh. I am a big fan of your videos. I have a question regarding this quest. In this video, we optimized only one parameter. How can we optimize all the parameters? Thanks in advance.
Hello! Thank you so much for this video. What is the difference between Keras and PyTorch?
Really amazing for those starting PyTorch; however, it would be perfect if you taught how to make layers.
EEEpochs
AMAZING video. This is exactly what beginners need to start the PyTorch journey with semi-solid footing instead of mindless copying.
You must have spent so much time on your AWESOME videos.
GREATLY appreciate your effort. Keep up the good work.
Thank you, Josh, for all the help you have been providing me, you have no idea!
I was curious about how to optimize all the biases and weights, just like we saw in the Quest; however, the "optimizing function" seems to only be able to optimize the final_bias variable, and I could not understand why. Is it something connected to the PyTorch library and the optimizer variable?
Have a great week!
Soooooooo thankful!
Really Awesome
I love you Josh. God bless you. You're my favorite teacher.
Great presentation!! Thanks again for simplifying this topic! Are you planning to post more on NN implementation? Computer vision, maybe, or object detection?
Thanks Josh, you really make understanding neural network concepts a great process!
Hello, how should I follow your videos? I find that there's a lot of information and I seem to forget it. Like, how do you commit this stuff to memory?
Great series.
Looking forward to seeing your upcoming videos! Excellent explanation!
It's great that you are making videos on coding as well.
I'm still waiting for more videos to learn more about PyTorch
Best tutorial, as usual! Would be nice to see more advanced examples in PyTorch, like a CNN for image classification :)
Thanks for this amazing walkthrough.
Thank you very much! I am new to deep learning. I can say that in just one week I learned a lot from your tutorials!
Thank you, very informative for me... it works great for the final_bias variable, but when I try to optimize other variables like w00 or b00, the code does not optimize them. Can you please help me with how to optimize the other variables? What am I missing?
We need a StatQuest that features StatSquatch
Your videos are so cringe, but I like it. xD