Comments:
Thank you 🙏
Why do we do a weighted sum of the entropies? What is the intuition behind weighting them rather than simply adding the entropies of the splits?
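Regarding the weighted-sum question above: a quick sketch may help. The children's entropies are weighted by the fraction of points that fall into each child, so a tiny "pure" node cannot dominate the score of a split that leaves most of the data impure. This is an illustrative example, not code from the lecture; the function names are mine.

```python
import math

def entropy(labels):
    """Shannon entropy (base 2) of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return sum(-p * math.log2(p) for p in probs if p > 0)

def weighted_split_entropy(left, right):
    """Average the child entropies, weighted by child size.

    An unweighted sum would treat a child holding 1 point the same as a
    child holding 1000 points; weighting by size reflects how much of the
    data each child actually explains.
    """
    n = len(left) + len(right)
    return (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)

# A split that peels off a single point into a pure node:
parent = [0, 0, 1, 1, 1]
left, right = [0], [0, 1, 1, 1]
print(weighted_split_entropy(left, right))  # lower than entropy(parent), so the split helps
```

The weighted average is never larger than the parent's entropy, which is exactly why greedily minimizing it makes sense as a splitting criterion.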
[Vietnamese, roughly:] Lecturing but sneaking jokes in all over the place :D:D:D tired of playing hide-and-seek with them the whole time :D:D:D professor, you're so mischievous!
Hi, can anyone please explain why equally likely events are a problem in decision trees? What I understood is that the model would need to be very comprehensive to handle such cases, but I am unsure of my interpretation.
I'm watching this at the end of December 2021. I found the demos at the end, starting roughly 45 minutes into the video, very informative about the capabilities and limitations of a decision tree. Thanks.
Thank you for sharing your content.
It is very interesting, especially the discussion of why we do this (computational problems, NP-hardness, people tried many splitting criteria and found this one worked best in practice), the interactive examples at the end (very useful for learning), and all your work on making it clear and simple. I like the point of view of deriving entropy minimization from maximizing the KL divergence between two probability distributions.
In fact, it is also easy to see the Gini impurity loss function as an optimization problem in 3D: you get a concave/convex function by computing the Hessian matrix in two parameters (the third is just 1 - p_1 - p_2), and you optimize it over a constrained space (the conditions on the p_i), which you can actually draw. You get the maximum at p_1 = p_2 = p_3 = 1/3 (what we don't want), and the impurity decreases as we move away from this point, with the best case being one p_k equal to 1 and the others 0.
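The Gini picture described above is easy to check numerically: the impurity peaks at the uniform distribution and vanishes at a pure node. A minimal sketch (the function name is mine, not from the lecture):

```python
def gini(probs):
    """Gini impurity: sum_k p_k * (1 - p_k) = 1 - sum_k p_k^2."""
    return 1.0 - sum(p * p for p in probs)

# Maximum at the uniform distribution over 3 classes (the case we want to avoid):
print(gini([1/3, 1/3, 1/3]))  # ~ 2/3
# Minimum at a pure node:
print(gini([1.0, 0.0, 0.0]))  # 0.0
```

Moving any probability mass away from the uniform point strictly lowers the impurity, matching the concavity argument from the Hessian.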
This is perfect. I am coming from a non-technical, non-math background, and this presentation really made me understand decision trees easily. Thank you very much.
Forgive me if I'm wrong, but for a pure leaf node with 3 classes where P1 = 1, P2 = 0, P3 = 0, the sum of Pk*log(Pk) would be 0, so the idea is to minimize the (positive) entropy -sum Pk*log(Pk)?
If Pk is zero, then log(Pk) goes to negative infinity; how is that possible? Question about the point at 39:00.
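On the log(0) question above: the standard convention is that a term p*log(p) is taken to be 0 when p = 0, since that is its limit as p approaches 0, so empty classes simply contribute nothing. A minimal sketch of this convention (illustrative code, not from the lecture):

```python
import math

def entropy(probs):
    """Entropy with the convention 0 * log(0) = 0.

    Terms with p == 0 are skipped, since p * log(p) -> 0 as p -> 0,
    so classes that never occur contribute nothing.
    """
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0, 0.0, 0.0]))  # 0.0: a pure node has zero entropy
print(entropy([0.5, 0.5]))       # 1.0: a fair coin carries one bit
```

This is why the pure-leaf case P1 = 1, P2 = 0, P3 = 0 gives entropy exactly 0 rather than something undefined.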
I have some questions about regression. I wonder if I can discuss them with you.
Machine Learning ~ Compression 💡
Thanks, Professor Kilian Weinberger. The examples at the end were really helpful for visualizing what trees can actually look like.
Thank you very much, Prof. Weinberger. I was reading The Elements of Statistical Learning for my reading course when I found your channel. I truly appreciate your lectures and your notes; I printed all of your notes and have watched almost all of your videos, and they are extremely helpful. Thank you, I really appreciate that you let us have access to your wonderful lectures.
This was amazing. Thank you very much.