Comments:
East or west, Naik sir is super duper best.
You are one of the best teachers any student can have.. ❤
I don't understand: after finding the clusters we draw the dendrogram in hierarchical clustering, yet you are showing that we need to draw a horizontal line, and the number of vertical lines it intersects will be the number of clusters?? I mean, we are already drawing the dendrogram based on the clusters. What you told doesn't make sense to me.
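On the question above: the diagram is a dendrogram (not a histogram), and it records the merge history, not a fixed clustering; the horizontal cut is what *decides* the number of clusters. A minimal sketch with SciPy on made-up 2-D points (the data and the cut height 3.0 are my own illustration, not from the video):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Made-up 2-D points: two tight pairs plus one far-away outlier
X = np.array([[1.0, 1.0], [1.2, 0.9],
              [5.0, 5.1], [5.2, 4.9],
              [9.0, 0.5]])

Z = linkage(X, method="ward")  # merge history; this is what the dendrogram draws
# "Drawing a horizontal line" at height 3.0 == cutting the dendrogram there
labels = fcluster(Z, t=3.0, criterion="distance")
n_clusters = len(set(labels))
print(n_clusters)  # number of vertical lines the cut crosses
```

With this data, only the two tight pairs merge below height 3.0, so the cut yields three clusters; moving the line up or down changes the count, which is exactly why it is drawn.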
K-means clustering is not mathematically clear to me. The line you're drawing connecting the two centroids is OK, but how is that perpendicular line drawn? I mean, how is that perpendicular line decided? Also, for any new point, will that line be used to classify it, or is k-nearest neighbours to be used?
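On the perpendicular-line question above: k-means never draws that line explicitly. It assigns every point to its *nearest centroid*, and the set of points equidistant from two centroids is, geometrically, exactly the perpendicular bisector of the segment joining them, so the line is implied by the assignment rule. A tiny sketch with made-up centroids (my own numbers, no k-NN involved):

```python
import numpy as np

# Made-up centroids from a k=2 run, and a made-up new point
c1, c2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
p = np.array([1.0, 2.0])

# k-means rule: assign the new point to the nearest centroid
nearest = 1 if np.linalg.norm(p - c1) <= np.linalg.norm(p - c2) else 2

# The perpendicular bisector is where ||p - c1|| == ||p - c2||.
# The sign of this projection says which side of the bisector p falls on.
side = float(np.dot(p - (c1 + c2) / 2, c2 - c1))  # negative => closer to c1
print(nearest, side)
```

Both tests agree by construction: a negative projection means the point lies on c1's side of the bisector, which is the same as being nearer to c1.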
Hello sir, take care of your health.
Silhouette score
Andrew Ng of India == Krish Naik Sir
You are the best teacher I have had in this domain; thanks a lot for sharing this kind of knowledge...
Beautiful, sir....
Silhouette score
Depends on the data points
Thanks! I really want to know the exact definitions of bias & variance.
Great teaching.
Thanks for this great tutorial.
Can I know the material link?
Krish Naik Sir is awesome.
Hi Krish sir, it's great learning from you.
Can you please make a detailed video on Principal Component Analysis?
Please make some videos on soft clustering algorithms (e.g. Fuzzy C-Means).
Thank you, sir Krish.
10/10 rating
Sir, if low bias / high variance is overfitting and high bias / high variance is underfitting, then what is high bias / low variance?
Sir, the silhouette code is damn tough to understand 😞
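For the comment above: the silhouette formula is small once written out by hand. For a point i, a(i) is the mean distance to the other points in its own cluster, b(i) is the mean distance to the points in the nearest other cluster, and s(i) = (b − a) / max(a, b); a library silhouette score averages s(i) over all points. A hand-rolled sketch for one point on made-up data (my own numbers, not the video's code):

```python
import numpy as np

# Four made-up points in two clusters of two
X = np.array([[1.0, 1.0], [1.0, 1.4],
              [5.0, 5.0], [5.0, 5.4]])
labels = np.array([0, 0, 1, 1])

i = 0  # compute the silhouette of the first point
same = [j for j in range(len(X)) if labels[j] == labels[i] and j != i]
other = [j for j in range(len(X)) if labels[j] != labels[i]]

a = np.mean([np.linalg.norm(X[i] - X[j]) for j in same])   # cohesion
b = np.mean([np.linalg.norm(X[i] - X[j]) for j in other])  # separation
s = (b - a) / max(a, b)
print(round(s, 3))  # close to 1: the point sits deep inside its own cluster
```

Here a ≈ 0.4 and b ≈ 5.8, so s ≈ 0.93; a value near 1 means the point is much closer to its own cluster than to any other.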
Sir, please make a video on PCA.
@KrishNaik sir, kindly provide the DBSCAN video link.
This video is incredible and very well explained. But if we have more than one feature in our dataset, should we do feature selection first and then perform the elbow test?
Top-class session. In easy terms: BIAS is the inability of an ML algorithm to capture the exact, 100 percent relationship. To understand bias, one must think about why we need ML in the first place. In mathematics or physics we have an absolute relationship or formula between dependent and independent variables, like s = ut + ½at² (class 7 physics) or SI = P*R*T, so for cases where we have an absolute formula we don't need any ML algorithm. ML tries to do the same thing, i.e. estimate a formula. Say I want to calculate purchasing power (P), so I train a model on variables like income, age, and family income, and my model fetches a formula P = w0 + b1*income + b2*age + b3*family income. This formula is not absolute or universal, since it is derived by a specific ML algorithm from specific data. But say that by some miracle we derive a formula that calculates purchasing power with 100 percent accuracy; for that model the bias is 0, as the model accurately captures the relationship.

VARIANCE: in short, the difference in fits between data sets is called variance. Imagine we used that same miracle formula on test data and it fit 100 percent, i.e. we got 100 percent accuracy on a different test set; then we can say the variance is 0, which means the ML formula is perfect. Or say the same miracle formula gives 50% accuracy on the test set; the bias was low, but the variance is high, as the formula didn't work well on unseen (test) data.

So in an imaginary world, if bias is 0 and variance is also 0, then, my friend, you have discovered a formula, not an estimation. In the practical world we aim for a model with low bias and low variance. Subscribe to Krish's channel if this helped.
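The explanation above can be checked numerically. A rough sketch on made-up data (my own setup, not from the video): a degree-1 polynomial is too rigid for a sine curve, so high bias shows up as a poor score even on the training set, while a degree-15 polynomial fits the training set almost perfectly (low bias), and any train/test gap it opens is the variance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3, 3, 60)).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.1, 60)  # true relation: sin(x) + noise
x_tr, y_tr = x[::2], y[::2]    # "seen" data
x_te, y_te = x[1::2], y[1::2]  # "unseen" data

scores = {}
for degree in (1, 15):  # degree 1: too rigid; degree 15: very flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_tr, y_tr)
    scores[degree] = (model.score(x_tr, y_tr), model.score(x_te, y_te))
    print(degree, scores[degree])  # (train R², test R²) per degree
```

The degree-1 model scores poorly even on its own training data (bias), while the degree-15 model scores near 1 on training data; comparing its train and test scores is one way to read off variance.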
1.75x speed is the best way to watch; a lot of information covered in less time.
I didn't find the GitHub link, sir.
Found it, brother: a channel for learning ML. Thoroughly enjoyed it, sir.
In k-means clustering, is there an assumption about the number of observations and variables? Would having more variables than observations affect the clustering results and make them less accurate?
Hello sir. Do you, by any chance, know the assumptions of k-means cluster analysis in the case of large variance?
First things first!
Great session 👏 👌 👍
Good morning Krish. You have really made my foundation very strong; before this I knew nothing about statistics and machine learning, since I am from a non-technical background. Now I can read very high-level books and really understand them. You are a great value addition to my learning path.
10 out of 10
superb.....!!
Thanks
What types of biases can there be in a dataset? How should this question be answered?
Hello sir, I start every morning with a new machine learning session, and the last 6 days have taught me a lot about machine learning algorithms. Thank you very much for this playlist.
Excellent, knowledge-gaining session; every second spent was a gain. Thanks a lot 😊 Keep helping and sharing the knowledge & concepts 💐💐💐
Finished watching.
A humble request to you @Krish: make the next live sessions on machine learning practice and practicals.
Where is the GitHub link for this?
Thank you so much sir ❤️
Is the silhouette score applicable to hierarchical clustering, given that some clusters are within other clusters? How do we differentiate a(i) from b(i) then?
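On the question above: the silhouette score only needs a flat set of cluster labels and pairwise distances, so it applies to any cut of a hierarchy; once the dendrogram is cut into k flat clusters, a(i) and b(i) are computed from those final labels, and the nesting plays no role. A sketch on made-up data (my own numbers):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

# Made-up data: two tight, well-separated groups
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1],
              [5.0, 5.0], [5.1, 4.8], [4.9, 5.2]])

# Cut the hierarchy into 2 flat clusters; silhouette only sees these labels
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
print(silhouette_score(X, labels))  # near 1 for well-separated clusters
```

The same call works for labels from k-means, DBSCAN, or any other flat clustering, which is what makes silhouette handy for comparing them.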
Hello @Krish, thank you for the explanations. Please do extensive, in-depth EDA sessions next. I appreciate your efforts very much; thanks again.
10/10