K-Fold Cross Validation - Intro to Machine Learning


Udacity

9 years ago

445,864 views

Comments:

ijyoyo - 15.10.2022 22:19

Interesting video. Thanks for sharing.

Patrick Doherty - 09.05.2022 03:00

That huge popped blister on his hand is lowkey distracting

Steve - 19.04.2022 23:26

You miss a part of your skin Sir

Kia S - 11.01.2022 14:32

The voice sounds like Sebastian Thrun. Great guy :)

Gary Butler - 06.09.2021 12:09

1. Train/Validation 2. Either 3. K-Fold CV
I've seen a lot of answers I disagree with in the comments, so I'll explain. First, the terminology is Train/Validation when used to train the model. The Test set should be taken out prior to doing the Train/Validation split and remain separate throughout training. The Test set will then be used to test the trained model. Second, the answers. 1. Obviously training will take longer doing it 10 times. 2. While training did take longer, you are actually running the same size model in production. All other things being equal the run times of both already trained models should also be equal. 3. The improved accuracy is why you would want to use K-Fold CV.

If I'm wrong, please explain. I'll probably never see your comments, but you could help someone else.
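
The workflow this comment describes (hold the test set out first, run K-fold CV on the remainder, touch the test set only once at the end) can be sketched as follows; this is a minimal illustration with made-up toy data, not code from the video:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical toy data: 100 points, 4 features, binary labels.
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# Hold out the test set FIRST; it stays untouched during training.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression()

# 10-fold CV on the train/validation portion only.
scores = cross_val_score(model, X_trainval, y_trainval, cv=10)
print("mean CV accuracy:", scores.mean())

# One final evaluation on the held-out test set.
model.fit(X_trainval, y_trainval)
print("test accuracy:", model.score(X_test, y_test))
```

The CV loop trains 10 times on 9/10 of the train/validation data, which is why training is about 10x slower, while the deployed model is the same size either way.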

JvN - 18.07.2021 04:38

First of all, cover your disgusting-looking wound. Second, explain to people why cross-validation is needed in the first place and who needs it.

Yogeshwar Shendye - 15.04.2021 18:49

won't this make the model specialized for the data that we have??

Fellipe Alcantara - 08.04.2021 21:27

can't watch it... the blister is too annoying

Ahmed Gamal - 24.01.2021 12:39

Can anyone share the answers for those questions please

Sergio Gaitan - 01.01.2021 23:14

Liked because the blister was made by a barbell.

Moath Budget - 26.10.2020 17:20

your pen is a disaster

Julian Beck - 26.04.2020 10:10

ihhhhh

Reed Sutton - 12.04.2020 21:28

This is incorrect. You should correct this video, as you're encouraging people to mix their train and test sets, which is a cardinal sin of machine learning. Every time you say test set, you should be saying validation set. Test set can only be tested one time, and cannot be used to inform hyperparameters.
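
The train/validation/test distinction this comment insists on can be illustrated with a short sketch (hypothetical data and split ratios; the variable names are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Made-up data: 100 points, 3 features, binary labels.
rng = np.random.RandomState(42)
X = rng.rand(100, 3)
y = rng.randint(0, 2, size=100)

# Split off the test set first; it is evaluated exactly once, at the very end.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# The validation set (or K-fold CV over X_rest) guides hyperparameter tuning;
# the test set never influences those choices.
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20
```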

MOHA Shobak - 09.03.2020 15:07

So is this supervised, unsupervised or semi-supervised algorithm?

DoughBoY - 26.11.2019 17:39

Why is it so hard to find a simple, concrete, and by hand example of simple k cross validation? All the documentation I can find is very generalized information, but no practical examples anywhere.

Shweta Redkar - 20.11.2019 11:33

In k-fold CV, each fold's model produces a score, and those scores are averaged. So is the entire 10-fold CV an average of averages? And what does "5 times 10-fold CV" mean? How is it different from normal 10-fold CV? Can someone help me understand this?

Lion Heart - 12.11.2019 16:19

Do a simple practical example by hand, not just theory. People understand better when there are actual numbers and you go through the entire procedure, even if it's a trivial example.

Whaling With Ishmael - 09.11.2019 04:35

Simple and beautiful

B Biss - 22.07.2019 22:32

can't stop looking at the blister on his hand

sumit dam - 18.06.2019 10:05

Can anybody provide me the video link which describes the training and test sets by Mrs. Katie ?

Samuel Abächerli - 18.06.2019 08:57

But this doesn't solve the issue of choosing the bin size, i.e. trade-off between training set and test set (although you are now using all the data for both tasks at some point).

omid asadi - 26.05.2019 11:23

Hi, thanks for sharing this lesson. Just a question: which application do you use to make this tutorial? It's amazing how the text appears on top of and above your hand.

B Arslani - 19.04.2019 16:11

Thanks, clear

Gautam Kishore Shahi - 14.04.2019 02:03

I have a small dataset of 48 samples. If I apply an MLP using 6-fold CV, do I still need a validation set to avoid biased results on the small dataset? Please advise.

Tony Zh - 10.02.2019 17:57

Thanks for the video! Quick (silly) question: in any of those validation methods, do you re-fit the model every time you change the training data? If so, each validation step corresponds to a different model fit. How do you then determine your final model?

باسل بن عبد الله - 28.01.2019 00:28

Thank you very much, that video is helpful.

Weize Yin - 02.12.2018 09:11

hey guys from ECON704

R Lalduhsaka - 01.12.2018 15:00

So, what is the difference from train_test_split with test_size=0.1?

Andrew Tseng - 12.11.2018 22:58

the hand is so annoying

Paul Issac - 06.11.2018 08:44

what are you even saying? can't understand anything!

ytber - 24.10.2018 12:48

I know it's a very old video, but it's still not necessary to show your hand while writing.

DrCafeine - 10.08.2018 15:45

Nice !

K - 08.02.2018 20:15

Interesting to see your video presented like this. Would you mind sharing how you present your drawing this way?

Ryan Mccauley - 03.01.2018 16:49

Great explanation thanks!

The Siberian - 03.01.2018 10:20

What I don’t get is: say you’ve picked the 1st bin as your test set for the first run and the rest as your training set. Hasn’t the model learned everything in the training set for the rest of the runs? What’s the point of using all the k’s when they’ve already been used before?

Randa - 30.10.2017 17:24

do all the 10 folds have to be of the same size? what is the effect if they are of different sizes?

Guillaume ROLLAND - 18.10.2017 13:27

K-fold cross validation runs k learning experiments, so at the end you get k different models... Which one do you choose?
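
One common answer to this question (not the only one, and not from the video itself): the k fold models exist only to estimate performance; the final model is refit once on all the training data. A minimal sketch with made-up data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data: 120 points, 5 features, binary labels.
rng = np.random.RandomState(1)
X = rng.rand(120, 5)
y = (X[:, 0] > 0.5).astype(int)

clf = DecisionTreeClassifier(random_state=1)

# The k models trained during CV are discarded; only their scores are kept
# as an estimate of how well this model class generalizes.
scores = cross_val_score(clf, X, y, cv=10)
print("estimated accuracy:", scores.mean())

# The model you actually ship is refit once on all of the training data.
final_model = clf.fit(X, y)
```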

MeiFaa Roleplay - 02.10.2017 18:12

This video is really useful, thank you very much.
It helped me a lot.

snk2288 - 16.07.2017 04:24

The test bin is different every time, so how do you average the results? Can you please provide a detailed explanation on this?

Dor Solomon - 17.05.2017 12:32

It's obvious that the answers are: train/test, train/test, and then cross validation.
Cross validation runs the training k times, so it's k times slower, but on the other hand it is more accurate.

Alex Chow - 25.02.2017 04:22

I think the answers are train/test, train/test, and then 10-fold C.V. Also, don't make a video with some nasty open sore on your hand please. Wear a glove or something.

Oliver Young - 07.09.2015 02:12

What do you mean by data points? Do you mean instances?
