Comments:
Interesting video. Thanks for sharing.
That huge popped blister on his hand is lowkey distracting
You're missing a part of your skin, Sir
The voice sounds like Sebastian Thrun. Great guy :)
1. Train/Validation 2. Either 3. K-Fold CV
I've seen a lot of answers I disagree with in the comments, so I'll explain. First, the terminology is Train/Validation when used to train the model. The Test set should be taken out prior to doing the Train/Validation split and remain separate throughout training. The Test set will then be used to test the trained model. Second, the answers. 1. Obviously training will take longer doing it 10 times. 2. While training did take longer, you are actually running the same size model in production. All other things being equal the run times of both already trained models should also be equal. 3. The improved accuracy is why you would want to use K-Fold CV.
If I'm wrong, please explain. I'll probably never see your comments, but you could help someone else.
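The workflow this commenter describes (hold the test set out first, then run K-fold over the remaining train/validation data) can be sketched in plain Python. The fold helper, the 100-sample dataset, and the 90/10 split below are illustrative assumptions, not details from the video:

```python
# Sketch of the workflow described above: hold the test set out once,
# then run K-fold cross-validation only over the remaining
# train/validation data. Pure index bookkeeping; plug in any model.

def kfold_indices(n, k):
    """Split indices 0..n-1 into k folds whose sizes differ by at most 1."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

n_samples = 100                          # hypothetical dataset size
test_idx = list(range(90, n_samples))    # last 10% held out, used only at the very end
trainval_idx = list(range(90))           # the other 90% feeds the K-fold loop

for fold in kfold_indices(len(trainval_idx), 10):
    fold_set = set(fold)
    val_idx = [trainval_idx[i] for i in fold]
    train_idx = [trainval_idx[i] for i in range(len(trainval_idx))
                 if i not in fold_set]
    # fit the model on train_idx, score it on val_idx;
    # test_idx is never touched inside this loop
```

Each pass through the loop uses a different fold as the validation set, and the held-out test indices never enter the loop, which is exactly the separation the comment argues for.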
First of all, cover your disgusting-looking wound. Second, explain to people why cross-validation is needed in the first place and who needs it.
Won't this make the model specialized to the data that we have?
Can't watch it... the blister is too annoying
Can anyone share the answers to those questions, please?
Liked because the blister was made with a barbell.
Your pen is a disaster
ihhhhh
This is incorrect. You should correct this video, as you're encouraging people to mix their train and test sets, which is a cardinal sin of machine learning. Every time you say "test set," you should be saying "validation set." The test set can only be used once, and cannot be used to inform hyperparameters.
So is this a supervised, unsupervised, or semi-supervised algorithm?
Why is it so hard to find a simple, concrete, by-hand example of k-fold cross-validation? All the documentation I can find is very generalized information, with no practical examples anywhere.
Here in K-fold CV, the models across the folds produce an average result. So the entire 10-fold CV is an average of averages? What is meant by "5 times 10-fold CV"? How is it different from normal 10-fold CV? Can someone help me understand this?
Do a simple practical example by hand, not just theory. People understand better when there are actual numbers and you go through the entire procedure, even if it's a trivial example.
Simple and beautiful
Can't stop looking at the blister on his hand
Can anybody give me the link to the video where Mrs. Katie describes the training and test sets?
But this doesn't solve the issue of choosing the bin size, i.e. the trade-off between training set and test set size (although you are now using all the data for both tasks at some point).
Hi bro, thanks for sharing this lesson. Just a question: which application do you use to make this tutorial? It's amazing that your text appears on and above your hand.
Thanks, clear
I have a small dataset of 48 samples. If I apply an MLP using 6-fold CV, do I still need a validation set to avoid biased results on the small dataset? Please advise.
Thanks for the video! Quick (silly) question: in any of these validation methods, do you re-fit the model every time you change the training data? If so, each validation step is with respect to a different model fit. Then how do you determine your final model?
Thank you very much, this video is helpful.
Hey guys from ECON704
So, what is the difference from train_test_split with test_size=0.1?
The hand is so annoying
What are you even saying? I can't understand anything!
I know it's a very old video, but it's still not necessary to show your hand while writing
Nice!
Interesting to see your video presented like this. Would you mind sharing how you present your drawing this way?
Great explanation, thanks!
What I don't get is: say you've picked the 1st bin as your test set for the first run and the rest as your training set. Hasn't the model already learned everything in the training set for the rest of the runs? What's the point of using all k folds when they've already been used before?
Do all 10 folds have to be the same size? What is the effect if they are of different sizes?
K-fold cross-validation runs k learning experiments, so at the end you get k different models... Which one do you choose?
Ответитьthis video is really usefull, thank you very much.
it help me a lot.
The test bin is different every time, so how do you average the results? Can you please provide a detailed explanation of this?
It's obvious that the answers are: train/test, train/test, and then cross-validation.
Cross-validation runs the program k times, so it's k times slower, but on the other hand it's more accurate.
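To make the averaging step concrete: each of the k runs produces one validation score, and the single number reported for the whole cross-validation is their mean. A toy sketch (the per-fold scores below are made up for illustration, not taken from the video):

```python
# Each of the k folds yields one validation score; the number reported
# for the whole K-fold CV run is simply their mean. Scores are invented.
fold_scores = [0.82, 0.79, 0.85, 0.80, 0.83, 0.81, 0.84, 0.78, 0.82, 0.86]
cv_score = sum(fold_scores) / len(fold_scores)
print(round(cv_score, 4))  # prints 0.82
```

This is also why it is "k times slower but more accurate": you pay for k fits, but the averaged score is a far less noisy estimate of generalization than any single train/test split.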
I think the answers are train/test, train/test, and then 10-fold CV. Also, please don't make a video with a nasty open sore on your hand. Wear a glove or something.
What do you mean by data points? Do you mean instances?