Neural Networks Part 6: Cross Entropy

StatQuest with Josh Starmer

3 years ago

231,464 views


Comments:

@ferminbereciartua6432 - 01.12.2023 16:52

you rock josh!!

@jacobcrowley8207 - 28.11.2023 18:57

Thank you, this makes sense now.

@xxxiu13 - 20.11.2023 21:47

Great explanation in an entertaining way. Bam!

@Ram-oj4gn - 12.11.2023 11:35

As you said in the video, if the cross-entropy function helps the gradient descent process more than the sum of the squared residuals does, why don't we also use cross entropy to optimize linear models such as linear regression? Why do we use SSR there and not cross entropy? Thank you for the wonderful videos that help us understand the math and the functions.
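
A minimal numerical sketch of the gradient behavior the video describes, using assumed example probabilities (not numbers from the video): for an observed probability of 1, the squared-error gradient with respect to the predicted probability p stays bounded, while the cross-entropy gradient -1/p grows without bound as p approaches 0. For regression, the outputs are unbounded real numbers rather than probabilities, so -log(p) has no natural reading there.

```python
import numpy as np

# Observed probability for the true class is 1; p is the predicted probability.
# Assumed example values, not numbers from the video.
p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])

# Squared error (1 - p)**2 has gradient -2 * (1 - p), bounded between -2 and 0.
ssr_grad = -2 * (1 - p)

# Cross entropy -log(p) has gradient -1/p, which blows up as p -> 0,
# so a confidently wrong prediction produces a much larger step.
ce_grad = -1 / p

for pi, g_ssr, g_ce in zip(p, ssr_grad, ce_grad):
    print(f"p={pi:.2f}  d(SSR)/dp={g_ssr:+7.2f}  d(CE)/dp={g_ce:+7.2f}")
```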

@shishi1826 - 01.11.2023 19:48

I'm in a stats PhD program and we had a guest speaker last week. During lunch, the speaker asked us which course at our school we liked most, and one of my classmates said, actually, he likes the StatQuest "course" the most. And I was nodding my head 100 times per minute. We discussed why US universities hire professors who are good at research but not professors who are good at teaching, and why there are no tenure-track teaching positions... The US education system really needs to change.

@pabloagullomarti9817 - 11.10.2023 17:08

I am currently starting my bachelor's thesis on particle physics, and I was told that a big part of it consists of running a neural network with PyTorch. Your videos are really, really useful, and thanks to you I at least have a vague idea of how a NN works. Looking forward to watching the rest of your Neural Networks videos!! TRIPLE BAM!!

@supersnowva6717 - 18.09.2023 18:50

I would not have been able to understand how neural networks fundamentally work without this series. Thank you so much Josh! Amazing and clear explanations!

@satyamgupta4808 - 10.09.2023 03:32

Very intuitive and a really great explanation.

@meeseeks1489 - 03.09.2023 17:49

What is this guy made of??? What does he eat??? Are you a God?? An alien?? You are so smart and dope, man!!! How do you do all this? He should be a lecturer at MIT! SO underrated content💞💞💞💞💞💞

@miki77YT - 30.08.2023 01:35

bam

@onebyzero-4608 - 24.08.2023 01:37

Why is the observed probability 1 in the case of setosa? Still kind of confusing.
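
As a minimal sketch of what the one-hot "observed" probabilities mean (the predicted values below are assumed for illustration): when a training example is known to be setosa, the observed distribution puts probability 1 on setosa and 0 on the other species, so the cross-entropy sum collapses to -log(predicted probability for setosa).

```python
import numpy as np

# The example is known to be setosa, so the observed distribution is one-hot:
# probability 1 for setosa and 0 for the other two species.
observed = np.array([1.0, 0.0, 0.0])    # [setosa, versicolor, virginica]
predicted = np.array([0.7, 0.2, 0.1])   # assumed SoftMax output values

# The zero entries remove every term except setosa's, so the sum
# reduces to -log(predicted probability for the true class).
ce = -np.sum(observed * np.log(predicted))
print(ce, -np.log(predicted[0]))        # both print 0.35667...
```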

@oldcowbb - 12.08.2023 04:35

Still trying to wrap my head around how this is related to entropy, if entropy is the expected surprise. It's like we are using a different distribution for the surprise than for the expected value.
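
That reading is correct: entropy is the expected surprise when the surprise and the expectation both come from the same distribution, while cross entropy keeps the expectation under the observed distribution p but measures the surprise under the predicted distribution q. A small sketch with assumed values:

```python
import numpy as np

p = np.array([1.0, 0.0, 0.0])   # observed (one-hot) distribution
q = np.array([0.7, 0.2, 0.1])   # predicted distribution (assumed values)

def entropy(dist):
    nz = dist > 0               # treat 0 * log(0) as 0
    return -np.sum(dist[nz] * np.log(dist[nz]))

def cross_entropy(obs, pred):
    nz = obs > 0                # expectation under obs, surprise -log(pred) under pred
    return -np.sum(obs[nz] * np.log(pred[nz]))

print(entropy(p))           # 0.0: a one-hot label has no uncertainty
print(cross_entropy(p, q))  # 0.35667...; never less than entropy(p) (Gibbs' inequality)
```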

@josyulaprashanth2976 - 10.08.2023 20:53

I just can’t believe how you opened my eyes. How can you be so awesome 👌👌. Sharing this knowledge for free is amazing.

@Piccadilly_ - 03.08.2023 00:34

Thank you for this video! It and others helped me pass my exam! :D

@phattailam9814 - 16.06.2023 08:23

Thank you so much for your explanation!

@amnont8724 - 09.06.2023 23:55

Hey Josh, I saw one of your videos about entropy in general, which is a way to measure uncertainty or surprise. Regarding cross entropy, is the idea the same, but now applied to the SoftMax outputs for the neural network's predicted values?
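
A minimal sketch of that pipeline, with assumed raw output values: SoftMax turns the raw outputs into a probability distribution, and cross entropy then compares that distribution to the one-hot observed labels.

```python
import numpy as np

def softmax(raw):
    e = np.exp(raw - np.max(raw))   # subtracting the max is a stability trick
    return e / np.sum(e)

raw_outputs = np.array([1.43, -0.40, 0.23])   # assumed raw NN output values
predicted = softmax(raw_outputs)              # sums to 1, every value in (0, 1)

observed = np.array([1.0, 0.0, 0.0])          # one-hot label for the true class
loss = -np.sum(observed * np.log(predicted))  # cross entropy on the SoftMax output
print(predicted, loss)
```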

@yourfutureself4327 - 03.06.2023 01:26

💚

@AhmedKhaliet - 19.05.2023 14:12

Wow, I feel that when I say thank you, it's nothing compared with what you do! Very impressive❤❤

@hashbrwn1339 - 19.05.2023 09:17

Really good explanation. The difference between squared error and cross entropy is very well explained.

@Noor.kareem6 - 15.05.2023 02:46

Are ArgMax and SoftMax used only with classification, or could we also use them with regression?
And the same question for cross entropy: is it used for classification only?
Thank you

@user-sd4fi5em1g - 14.05.2023 11:52

Just wow, thumbs up, great explanation sir

@Xayuap - 18.04.2023 04:50

🎉

@skillato9000 - 09.04.2023 20:06

Ok but why do we need to calculate cross-entropy?

@user-xn7ot7ij7d - 20.03.2023 06:31

Your video makes my mind triple BAM!!

@Snipknot57500 - 17.03.2023 20:48

Why not say that SSR is better suited to regression, while cross entropy fits classification better?
Awesome video though!

@sanchibharti858 - 03.02.2023 15:46

What an amazing video. Never found any content or video better than this one anywhere on this topic. Thank you so much.

@muhammadumarsotvoldiev8768 - 09.01.2023 08:59

It's amazing !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

@jiaqint961 - 25.11.2022 15:53

Gold!

@elielberra2867 - 15.10.2022 04:20

Thank you so much! Your videos are always very clear and easy to follow :)

@gwinnifer4609 - 09.10.2022 20:50

Hey Josh!
The way you teach is incredible. THANKS A LOT!❤

@mukulbarai1441 - 02.10.2022 12:58

I was sure that I would need to watch several videos to grasp this concept. OMG!! You have explained it so intuitively. Thanks a lot for saving my time and energy.

@amiralikhatib4843 - 03.09.2022 17:34

😍😍😍😍😍😍

@nathanzorndorf8214 - 26.08.2022 18:22

Thank you. Another complicated topic made simple!!!!

@PrasadHonavar - 23.08.2022 20:28

This is an amazing video

@beshosamir8978 - 14.08.2022 05:46

But in a regression problem we still use SSR, right?
So what would happen if we kept using SSR in a classification problem and, after backpropagation finishes its work, checked for the maximum output? Is it that with an output = 1.64 and an observed value = 1, SSR also tends to decrease the distance, so we needed to invent a function that controls the minimum and maximum values, in our case 0 and 1?
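
For what it's worth, a rough sketch of the loss values themselves (with assumed probabilities): on a probability output, the squared error can never exceed 1, while cross entropy grows without bound as the prediction gets confidently wrong, which is the gap the steeper loss is meant to close.

```python
import numpy as np

# Assumed predicted probabilities for the true class, from terrible to great.
p = np.array([0.001, 0.1, 0.5, 0.9, 0.999])

ssr = (1 - p) ** 2   # squared error on a probability: capped at 1
ce = -np.log(p)      # cross entropy: unbounded as p -> 0

for pi, s, c in zip(p, ssr, ce):
    print(f"p={pi:.3f}  SSR={s:.3f}  CE={c:.3f}")
```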

@petercourt - 09.08.2022 12:13

Really well explained! Thanks Josh :)

@tiago9617 - 21.06.2022 13:38

The hero we wanted, and the hero we needed, StatQuest...

@franciscoruiz6269 - 15.06.2022 18:07

My friend! I'm on part 6. How can I learn the differences between LSTM, Bi-LSTM, and recursive neural networks?

@lindaaa3299 - 06.06.2022 23:01

I love this way of learning!

@gbchrs - 05.06.2022 11:21

amazing work, can't wait to start on the book once I finish all your videos

@neoklismimidis6403 - 03.06.2022 23:16

Hello Josh!

I have to say WOW!! I love every single one of your videos!! They are so educational. I recently started studying ML for my master's degree, and from the moment I found your channel, ALL the questions I wonder about get answered! Also, I noticed that you reply to every post in the comment section. I am astonished... no words. A true professor!

Thanks for everything! Thank you for being a wonderful teacher.

@shivangisingh1190 - 30.05.2022 12:56

Love each and every video by StatQuest. Thank you Josh and team for providing such clear, easy-to-digest concepts with a bonus of fun and entertainment. Quadruple BAM!!!

@satya8411 - 17.05.2022 13:20

Kudos!!!! 🙌🏻 BAM!!!!!

@skippy1234459 - 04.05.2022 20:06

This is great! Thank you!!!
