Comments:
This is one of the best explanations, thank you so much sir.
Thank you a lot 🙏😊
What do you mean by feature?
@krishNaik, I like your videos very much, as they are quick reference guides for me to quickly understand something required for interview prep or for any project.
Just noticed here that you mentioned entropy is a measure of purity. But it is a measure of impurity, which makes more sense: the higher the value of entropy, the more heterogeneity in the variable.
You don't explain the intuition though.
Dear Krish Naik sir,
Could you please recheck the calculation? As per my calculation, the entropy for the f2 node where the split is 3|2 is 0.97 and not 0.78.
Kindly correct me if I am wrong.
Hi, there might be a calculation mistake in the entropy part; it's not 0.78. Can you please mention that in a caption in the video or in the description, so that people don't get it wrong in the future? Great video!!
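[Editor's note] The commenters' correction checks out: the Shannon entropy of a 3|2 split is about 0.971 bits, not 0.78. A quick sketch to verify (the 3 "yes" / 2 "no" counts are taken from the comments above):

```python
import math

def entropy(counts):
    """Shannon entropy in bits for a list of class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# A 3|2 split, e.g. 3 "yes" and 2 "no" samples at the node:
h = entropy([3, 2])
print(round(h, 3))  # 0.971 — not 0.78
```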
Not helpful.
Awesome video.
Why is this entropy in bits? Normally it's about 0.97. And how can I convert my entropy into bits?
I have one question: at the root node, is the Gini or entropy high or low?
Your teaching curriculum is very understandable.
Thank you
I SAID I LOVE YOU
Can we use the same feature for a multi-level split in the decision tree?
Thank you. We all need teachers like you. God bless you. You're a blessing for us college students who are struggling with offline colleges after the reopening.
Hi Krish, can you please explain the process of calculating the probability of a class in a decision tree, and whether we can arrive at the probability from feature importance?
Good explanation.
Krish, I love you so much, more than my girlfriend; zillions of likes from my side. You always make knotty problems so simple.
Thank you, Krish sir.
How is it 0.79 bits when you compute it? Someone please explain.
Thanks Krish
Explained in a great way... Thank you Krish.
Entropy is a thermodynamics concept that measures energy; why is it used in machine learning?
Nice explanation. But actually we don't use this formula while modelling; we just set the parameter of the decision tree to either entropy or Gini. So when does this formula of entropy really help?
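[Editor's note] The commenter is right that libraries hide the formula behind a parameter; understanding it still helps interpret why the tree splits where it does. A minimal sketch of that parameter in scikit-learn (assumes scikit-learn is installed; the four-row XOR-free toy dataset is made up for illustration):

```python
# In scikit-learn the impurity measure is chosen via `criterion`
# ("entropy" or "gini"); the library evaluates the formula internally.
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [1, 0], [0, 1], [1, 1]]  # two binary features
y = [0, 0, 1, 1]                      # label equals the second feature

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)
print(clf.predict([[0, 1]]))  # the tree learns to split on feature 2
```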
Sir, here you didn't mention how f3 ends up in the right-side node and f2 in the left-side node. As you said, the attribute having less entropy is selected for the split; this is understood, but why is f2 on the left and f3 on the right?
Can you mathematically explain how you obtained entropy = 1 for a completely impure split (yes = 3, no = 3)?
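[Editor's note] For a perfectly balanced split the result follows directly from the entropy formula: both class probabilities are 1/2, and log2(1/2) = -1. A short check of the 3|3 case asked about above:

```python
import math

# A 3|3 split: p(yes) = p(no) = 3/6 = 0.5
p_yes = p_no = 3 / 6
h = -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))
print(h)  # -(0.5 * -1 + 0.5 * -1) = 1.0, the maximum for two classes
```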
Best channel for Data Science Beginners
In the formula of entropy, what is the significance of log base 2? Why not a simple log with base 10?
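[Editor's note] Base 2 measures information in bits, which is natural for yes/no outcomes; any other base only rescales entropy by a constant factor (change of base), so the ranking of splits never changes. A quick illustration with made-up probabilities:

```python
import math

p = [0.6, 0.4]  # hypothetical class probabilities
h_bits = -sum(pi * math.log2(pi) for pi in p)  # base 2 -> bits
h_nats = -sum(pi * math.log(pi) for pi in p)   # base e -> nats

# Change of base: h_nats = h_bits * ln(2); only the units differ.
print(abs(h_bits * math.log(2) - h_nats) < 1e-12)  # True
```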
Definitely subscribing, and I'll tell my fellow programmers to watch and subscribe to your channel; you are the best explainer I've ever seen!
GOOD ONE
You clearly explain the mathematics of machine learning algorithms! Thank you for your effort.
Great introduction to the topic, thank you
Excellent explanation man, thanks.
Hi Krish,
Have you explained how a decision tree works? Because I'm not finding it.
Nice explanation. Cheers =]
Thank you for a great tutorial. The entropy value is actually 0.97 and not 0.78.
Thank you, this was very helpful!
Super awesome!
Very helpful, sir, thank you; you are the best :)
If we have very high-dimensional data, how do we apply a decision tree?
One of the great teachers in the machine learning field. You are my best teacher in ML. Thank you so much, sir, for spreading your knowledge.
I tried to purchase through the link pasted above, but it's showing as unavailable now. Could you please tell me how to get your book? I really need it. I follow your channel frequently whenever I have trouble understanding any data science concept, and after watching your videos it gets cleared up, so please let me know how to purchase your book.
In my opinion, calculating entropy is sufficient and we don't require information gain, as information gain simply subtracts the entropy of the attribute from the entropy of the dataset, and the entropy of the dataset is always constant for a particular dataset.
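[Editor's note] There is something to this: when comparing candidate splits of the *same* node, the parent entropy is a constant, so maximizing information gain is equivalent to minimizing the weighted child entropy. A sketch with hypothetical class counts (all numbers invented for illustration):

```python
import math

def entropy(counts):
    """Shannon entropy in bits for a list of class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def info_gain(parent, children):
    """parent: class counts at the node; children: class counts per branch."""
    n = sum(parent)
    weighted = sum(sum(ch) / n * entropy(ch) for ch in children)
    return entropy(parent) - weighted

# Two hypothetical splits of the same 9|5 parent node:
gain_a = info_gain([9, 5], [[6, 2], [3, 3]])
gain_b = info_gain([9, 5], [[9, 1], [0, 4]])
print(gain_b > gain_a)  # True: split B leaves lower weighted child entropy
```

Since `entropy([9, 5])` appears in both gains, ranking by gain and ranking by weighted child entropy pick the same split.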
As the entropy of a pure node is zero, I think entropy is a measure of impurity: the lesser the entropy, the purer the node.
Please upload a video for regression trees also, and discuss it in a detailed manner.
Your videos are very nice, but you really need to improve the quality of your microphone.