Comments:
This video is amazing; it really helped me understand the math behind CNNs. Thank you!
I'm an undergrad student studying CS at Georgia Tech. This video explained backprop in CNNs better than my professors. A true gem.
Man, I am mind-blown by your content. Such a great video 👌
Thank you very much for this masterclass!
One day I will understand this!
Thanks a lot for this video. Couldn't be more grateful!
Amazing video as always! I have a question that applies to both this and the previous neural network video. One perk of using matrices is that you can perform batch input calculations, i.e. instead of giving a single input, we give a batch of inputs. I figured out a way to perform batch computations for your previous video with help from the Sentdex channel, but how to do it for this one is beyond me, so I'm hoping you could implement batch input computations for the CNN too. Thanks!
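A minimal sketch of the batched forward pass this comment asks about, assuming NumPy, the video's per-sample shapes ((depth, height, width)) and its untied biases; `conv_forward_batch` and its argument names are hypothetical, not from the video's code:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv_forward_batch(x, kernels, biases):
    """Batched 'valid' cross-correlation.
    x:       (batch, in_depth, H, W)
    kernels: (out_depth, in_depth, kH, kW)
    biases:  (out_depth, H - kH + 1, W - kW + 1)  # untied, as in the video
    returns: (batch, out_depth, H - kH + 1, W - kW + 1)
    """
    # All kH x kW windows: (batch, in_depth, H-kH+1, W-kW+1, kH, kW)
    windows = sliding_window_view(x, kernels.shape[-2:], axis=(-2, -1))
    # Contract the input-depth and kernel spatial dims for every sample at once
    out = np.einsum("bdijkl,odkl->boij", windows, kernels)
    return out + biases  # the bias broadcasts over the batch axis
```

The same windows-plus-`einsum` trick also batches the backward pass, with the extra rule that parameter gradients are summed (or averaged) over the batch axis.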
Awesome, thank you for the tutorial. It really helped me understand CNNs easily.
Can somebody please explain why the depth of the convolutional layer's input is 3? What does it represent? Thank you in advance.
I started my AI journey a month back and had lots of confusion about how these CNNs get their parameters, how data passes through the layers, why we reshape, and many more queries. I give this video full stars for clearing all those doubts. It is a saviour for me in my AI journey.
I think if you were to include batch size in the code, we would be able to use the generalized method to train. Because what you are doing is stochastic gradient descent, no?
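A hedged sketch of the distinction this comment points at: stochastic gradient descent steps after each sample, while a mini-batch update averages the per-sample gradients before stepping. `minibatch_step` and `grad_fn` are hypothetical names, not from the video:

```python
import numpy as np

def minibatch_step(params, grad_fn, batch, lr=0.1):
    """One mini-batch update: average per-sample gradients, then step.
    grad_fn(params, x, y) stands in for whatever computes a parameter
    gradient for a single sample (e.g. one full backward pass)."""
    grads = [grad_fn(params, x, y) for x, y in batch]
    return params - lr * np.mean(grads, axis=0)
```

With a batch of size 1 this reduces exactly to the per-sample updates used in the video.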
I know it's a bit late, but I thought I should mention how well this video is paced and structured. Listing and then crossing out the topics to be covered makes the video clear, concise and easy to follow.
I have a question beyond math and software. By convolving a picture, we blur it. Why are we doing this? What is the added value of the convolution process?
Great video!
Thank you so much.
Fantastic, brother! I really appreciate what you are doing. Thanks 🎉🎉
Best explanation... love your teaching style and animations.
Thank you for your hard work, best explanation of CNNs. By the way, what about 1D CNNs for time-series data? Is the input shape just depth and width?
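For the 1D question above, a hedged sketch assuming the input shape is indeed (depth, width); `conv1d_valid` is a hypothetical helper, not from the video:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv1d_valid(x, kernels, biases):
    """1D 'valid' cross-correlation for time series.
    x: (in_depth, W); kernels: (out_depth, in_depth, kW);
    biases: (out_depth, W - kW + 1); returns (out_depth, W - kW + 1)."""
    # Every length-kW window of each channel: (in_depth, W - kW + 1, kW)
    windows = sliding_window_view(x, kernels.shape[-1], axis=-1)
    return np.einsum("diw,odw->oi", windows, kernels) + biases
```

This is the 2D layer from the video with the height dimension dropped; a multivariate time series maps channels (e.g. sensors) to depth and time steps to width.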
I had so many aha moments here! This is awesome.
Thanks for the great video. Do you mind sharing how you make such animations? Is it Adobe Illustrator or something easier to use?
I actually do have a question. What? Also, how?
I am a little unsure about the bias. From all the blog posts I've read, the bias is apparently supposed to be a vector, with one scalar value per filter, but here it is shown as a matrix. I am unsure which is the correct convention.
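Both conventions this comment mentions produce outputs of the same shape: a tied bias (one scalar per filter) broadcasts across the spatial positions that an untied bias (the matrix in the video) covers explicitly. A small NumPy sketch with made-up shapes:

```python
import numpy as np

out = np.zeros((2, 4, 4))          # pretend conv output: (out_depth, H, W)
tied = np.random.randn(2, 1, 1)    # vector convention: one scalar per filter
untied = np.random.randn(2, 4, 4)  # matrix convention, as shown in the video

# The tied bias broadcasts over H and W, so both conventions are valid;
# they just differ in how many bias parameters the layer learns.
assert (out + tied).shape == (2, 4, 4)
assert (out + untied).shape == (2, 4, 4)
```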
Why are you writing X1, X2, ..., Xd? Isn't it the same input X?
I am making a CNN from scratch and was a little stuck on how to find the gradients of the convolutional layers, but that little digression about how the equation of a convolutional layer is really just a more general version of the dense layer's output equation made it clear for me. This video is gold!
BEST video! Thanks a ton!
This is a very good tutorial to learn about CNNs. Thank you so much.
The best-ever tutorial. Thank you.
How do you get 8? How do you get that 8?
This is for real one of the best videos related to any type of NN I've ever seen. Most videos just scratch the surface of how these NNs work, but you went deeper and in an understandable way. Congratulations, and keep up the good work!
I feel like I don't deserve to get such content for free. Amazing job!
Hey, thanks for sharing your knowledge. Please share more ❤
Very high-quality video and an amazing explanation!
Exceptional explanation, thank you for sharing this.
You are really good at what you are doing, sir. Thanks a lot for sharing such insightful videos publicly on the internet. ❤️
Does the convolutional layer always have a stride of 1?
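The layer in the video does use stride 1; a larger stride simply keeps every stride-th position of the stride-1 valid cross-correlation. A hedged NumPy sketch (`correlate2d_strided` is a hypothetical helper, not from the video):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def correlate2d_strided(x, kernel, stride=1):
    """'Valid' cross-correlation of a 2D input with a 2D kernel,
    subsampled to implement a stride larger than 1."""
    windows = sliding_window_view(x, kernel.shape)  # every stride-1 window
    windows = windows[::stride, ::stride]           # keep every stride-th one
    return np.einsum("ijkl,kl->ij", windows, kernel)
```

With `stride=1` this matches the video's valid cross-correlation exactly.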
True that. After reading so many blogs on Medium, none could clear all my doubts. You did it. Kudos to you.
Thanks!
How would you backpropagate through a tied bias (one bias per kernel) instead of an untied bias (which you show in the video)? Would it simply be the sum of the output_gradient per kernel (a sum over the 2nd and 3rd axes)? Amazing video, btw!
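A hedged sketch of what this comment suggests: with one tied bias per kernel, each bias contributes to every position of its output map, so by the chain rule its gradient is the output gradient summed over the two spatial axes. `tied_bias_gradient` is a hypothetical name, not from the video:

```python
import numpy as np

def tied_bias_gradient(output_gradient):
    """output_gradient: (out_depth, H, W) -> per-filter gradient: (out_depth,)
    Sum over the spatial axes (axes 1 and 2), one scalar per kernel."""
    return output_gradient.sum(axis=(1, 2))
```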
Amazing stuff, and beautifully explained! I just had one doubt regarding the backpropagation for the problem at the end (the MNIST problem), where the input is an image: do we still perform backpropagation with respect to the input (w.r.t. X)? Isn't the input constant (since the input image is fixed) and hence non-trainable, unlike the kernel and bias? I guess it might be useful in generative neural networks. Please correct me if I am wrong, though.
This is amazing, man. Very informative!
You could implement the hypermatrix operation you talked about at the beginning of the video to simplify the forward and backward functions of the convolutional layer, removing all the loops.
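One hedged way to remove the loops, assuming NumPy: build all sliding windows of the input once and contract them with a single `einsum`. Shown here for the kernel gradient of the backward pass; `kernels_gradient` is a hypothetical helper, not the video's code:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def kernels_gradient(x, output_gradient, kernel_hw):
    """Loop-free dE/dK: cross-correlate each input channel with each
    output-gradient map in one contraction.
    x: (in_depth, H, W); output_gradient: (out_depth, Ho, Wo);
    kernel_hw: (kH, kW); returns (out_depth, in_depth, kH, kW)."""
    # windows[d, i, j, k, l] == x[d, i + k, j + l]
    windows = sliding_window_view(x, kernel_hw, axis=(-2, -1))
    return np.einsum("dijkl,oij->odkl", windows, output_gradient)
```

The forward pass and the input gradient can be vectorized the same way, each with its own index pattern.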
This is the best and calmest explanation of NNs that I have ever seen on the internet! Amazing work. I'm definitely sharing it with my colleagues.
The "from scratch" series you made is pure gold!!
Hi, great video! I was just wondering if you have the paper/book references for the maths used in this video?
After going through many blogs, this one finally helped me fully understand these networks. Such a great teacher you are!!
This is amazing work, thank you so much :)
Great video, the first I've seen that really shows how to think about the implementation.