Comments:
Clear as blue sky! Thanks
Does each neuron in a specific hidden layer have the same activation function?
Can we take bias values other than 0 and 1?
Very clearly explained! Thanks to you!
Sir, how can we change the activation function via Backward Propagation?
Hi, can you tell us what the range of the weights is?
Do all the neurons in a convolution layer have different weights and biases? Please, can anyone reply? When I searched on Google and ChatGPT I got different answers. Are the biases the same or different?
One of the clearest explanations on this I've ever seen! You have a real talent for simplifying complexities and communicating them.
I am sorry to say this, but this is not what I have studied. How can you add the weights between your dataset attributes and the input layer? The first connections (which carry weights, i.e. the first weights) sit between the neurons of the input layer and the first hidden layer.
This is not what you have explained.
Can we say a weight is the same as a slope?
Thank you for making this video.
Best explanation I've ever seen, sir.
Excellent presentation. Good job.
Get an even smaller board, brother.
Great!!!
Trying to get into the basics, but I didn't get any of it until I found you, Aman. Thank you so much.
I have been watching videos on the 3Blue1Brown channel for the last couple of days, trying to understand how a neural network works, but so much jargon clouded my understanding that I failed to catch this simple operation. You have nailed it, man. May Allah bless you for making my day. So grateful, Sir.
you are the best man
You did not explain Bias
thank you very much!
Best😭❤️❤️❤️❤️
Hey, this was great. Thank you!
Hi Aman, in your videos you showed the example of one neuron with random weights and a bias. Would the weights be the same for the other neurons in the hidden layer?
Hi Aman, great job!! :)
Can you also cover the deep learning architectures as well, such as RNN, CNN, LSTM, and Conv2D?
I saw some models using weights and biases between 0 and 1, and some between -1 and 1. Is there any difference or advantage of using one over the other?
Awesome sir
Awesome explanation🥰😍
Thanks
May I have the link to the video about how to assign weights?
I've been looking for a love button amongst the like buttons 🥰. Your explanations are so simple and understandable! Thank you
okeh
Thank you for the explanation!
Damn you are so good
very well explained
If he had been born with an American accent, he would be giving lessons at MIT.
Nice explanation
very good video
How do we calculate the bias value?
That's why we need to perform training: to find the appropriate weights and biases. Training a neural network basically means showing the neuron the given dataset many times and then doing gradient backpropagation to find the values of the weights and biases. If you explore what's inside a pre-trained neural network model, for example, what you will see there is just a bunch of those weight and bias values, plus the activation functions (e.g. sigmoid, ReLU, GELU). By the way, there is a tool for opening such pre-trained models, called Netron.
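The training loop described in that comment can be sketched for a single neuron. This is a minimal illustrative example (not code from the video): one sigmoid neuron learning the OR function via gradient descent on a squared-error loss, with the weight, bias, and learning-rate values chosen arbitrarily for the demo.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny dataset: the logical OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights start random
b = 0.0                                             # bias starts at zero
lr = 0.5                                            # learning rate

for epoch in range(5000):                # the neuron "sees" the dataset many times
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)      # forward pass
        # Backward pass: gradient of squared error through the sigmoid
        grad = (y - target) * y * (1 - y)
        w[0] -= lr * grad * x1                      # update each weight...
        w[1] -= lr * grad * x2
        b -= lr * grad                              # ...and the bias

# After training, the learned weights and bias reproduce OR
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1]
```

After training, all that remains is exactly what the comment describes: a handful of numbers (the weights and bias) plus the choice of activation function.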
Is there any range of values used for the weights and biases? For example, from -1 to 1?
Thanks for this. However, can I ask: how do you plot those weights and identify the input with the highest weight?
Beautifully explained, thank you very much.
Thank you very much sir, very nicely explained
Very nice, sir, but one thing I want to know: on what basis are the weights assigned? Or is it just supposition?
Beautifully explained. A couple of quick questions: 1) How do you decide on the number of neurons in a hidden layer? 2) How do you decide on the number of hidden layers? Thanks.
Finally I watched the right lecture.
Very bad. What is a weight and what is its purpose? What is a bias and what is its purpose? That is why I watched this video, and you don't explain any of them.
Hi. Thanks for the video. Can each input be linked to only a subset of the nodes in the hidden layer?
Thank you.
Thank you for this explanation. Concepts with examples really help with easy learning.
Great session. In my opinion, using a bigger board would be better, no?