Tutorial 13 - Global Minima and Local Minima in Depth Understanding

Krish Naik

5 years ago

98,738 views


Comments:

@sairaj6875 - 18.09.2023 22:42

Stopped this video halfway through to say thank you! Your grasp on the topic is outstanding and your way of demonstration is impeccable. Now resuming the video!

@ohn0oo - 13.04.2023 03:58

What if I have a decrease from 8 to infinity, would the lowest visible point still be my global minimum?

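A quick worked note on this question (an illustrative example, not from the video): if the loss keeps strictly decreasing all the way out to infinity, then no global minimum exists at all, and the lowest visible point is not it. For example, with the toy loss

    L(w) = e^(-w):  inf L(w) = 0,  yet L(w) > 0 for every finite w

the curve approaches 0 but never attains it. A global minimum only exists where the curve actually turns back upward.
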
@sahilmahajan421 - 16.02.2023 22:19

Amazing. Simple, short & crisp.

@ahmedpashahayathnagar5022 - 09.02.2023 13:40

Nice explanation, sir.

@sarahashmori8999 - 08.12.2022 17:13

I like this video, you explained this very well! Thank you!

@muhammadshifa4886 - 04.12.2022 23:32

You are always awesome! Thanks Krish Naik

@shefaligoyal3907 - 27.08.2022 14:20

At the global minimum, if the derivative of the loss function w.r.t. w becomes 0, then w_old = w_new, leading to no change in the value, so how can the loss function value be reduced any further?

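Since this question recurs below, here is a minimal sketch of the update rule under discussion, on a toy one-dimensional loss L(w) = (w - 3)^2 with a hypothetical learning rate (my own illustration, not code from the video). The loss is reduced on every step where the slope is still nonzero; w_new = w_old happens only once there is nothing left to reduce, which is exactly convergence.

    def dL_dw(w):
        # slope of the toy loss L(w) = (w - 3)**2, minimised at w = 3
        return 2 * (w - 3)

    w, lr = 0.0, 0.1          # hypothetical starting weight and learning rate
    for step in range(100):
        w_new = w - lr * dL_dw(w)      # w_new = w_old - lr * dL/dw
        if abs(w_new - w) < 1e-9:      # w_old == w_new: slope ~ 0, converged
            break
        w = w_new                      # every earlier step lowered the loss

    print(f"converged to w = {w:.4f} in {step} steps")   # approaches 3.0
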
@munjirunjuguna5701 - 29.05.2022 20:55

Hello Krish,
Thanks for the amazing work you are doing.

Quick one: you have talked about the derivative being zero when updating the weights... so how do you tell it's a global minimum and not the vanishing gradient problem?

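On this vanishing-gradient question, a rough heuristic (my own sketch with made-up thresholds, not from the video): a near-zero gradient is ambiguous on its own, so check the loss as well. Near a good minimum the loss itself is low; with vanishing gradients, training stalls while the loss is still high, and typically only the early layers' gradients collapse.

    def diagnose(loss, grad_norm, loss_tol=1e-2, grad_tol=1e-6):
        # Thresholds are hypothetical; compare against your own baseline.
        if grad_norm < grad_tol and loss < loss_tol:
            return "near a good (possibly global) minimum"
        if grad_norm < grad_tol:
            return "slope ~ 0 but loss high: plateau/saddle or vanishing gradients"
        return "still descending"

    print(diagnose(loss=0.004, grad_norm=1e-8))   # near a good minimum
    print(diagnose(loss=2.300, grad_norm=1e-8))   # looks like vanishing gradients
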
@rafibasha1840 - 31.01.2022 08:27

Hi Krish, when the slope is zero at a local maximum, why don't we consider the local/global maxima instead of the minima?

@virkutisss3563 - 13.12.2021 00:03

Why do we need to minimize the cost function in machine learning, what's the purpose of this? Yeah, I understand that there will be fewer errors etc., but I need to understand it from a fundamental perspective. Why don't we use the global maximum, for example?

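On the minimise-versus-maximise question: minimisation is purely a convention, because the objective is framed as an error (the loss), and the best model is the one with the least error. Any maximisation goal can be rewritten as minimisation of its negative:

    argmax_w f(w)  =  argmin_w ( -f(w) )

so, for instance, maximising likelihood is the same as minimising the negative log-likelihood. The global maximum of the loss would be the worst possible model, which is why it is never the target.
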
@liudreamer8403 - 20.11.2021 17:20

Very impressive explanation. Now I have fully adapted to Indian English. So wonderful.

@baaz5642 - 27.10.2021 16:21

Awesome!

@vishaljhaveri7565 - 07.10.2021 16:21

Thank you, Krish sir. Good explanation.

@abhishek247ai6 - 05.10.2021 21:53

You are awesome... One of the gems in this field who is making others' lives simpler.

@sudhasagar292 - 07.07.2021 16:03

This is sooo easily understandable, sir.. I'm sooo lucky to have found you here.. thanks a ton for these valuable lessons, sir.. keep shining..

@CoolSwag351 - 14.05.2021 09:59

Hi Krish. Thanks a lot for your videos. You make me fall in love with DL❤️ I took many introductory courses on Coursera and Udemy from which I couldn't understand all the concepts. Your videos are just amazing. One request: could you please make some practical implementations of the concepts, so that it would be easy for us to understand them in practical problems.

@louerleseigneur4532 - 13.05.2021 23:57

Thanks Krish

@zzzmd11 - 29.01.2021 20:35

Hi Krish, very informative as always. Thank you so much. Can you please also do a tutorial on the Fokker-Planck equation... Thanks a lot in advance...

@saravanakumarm5647 - 02.10.2020 17:27

I am self-studying machine learning. Your videos are really amazing for getting the full overview quickly, and even a layman can understand them.

@anindyabanerjee743 - 29.09.2020 22:02

If at the global minimum w_new is equal to w_old, what is the point of reaching there?? Am I missing something?? @krish naik

@vishaldas6346 - 24.09.2020 14:37

I don't think the derivative of the loss function should be used for calculating the new weights, since when it equals zero it makes the weights of the neural network W(new) = W(old), which would be related to the vanishing gradient problem. Isn't it the derivative of the loss function at the output of the neural network that is used, where y actual and y hat become approximately equal and the weights are optimised iteratively? Please correct me if I'm wrong.

@jaggu6409 - 09.09.2020 22:17

Krish bro, when w_new and w_old are equal, that will be the vanishing gradient problem, right??

@shalinianunay2713 - 02.08.2020 17:39

You are making people fall in love with deep learning.

@xiyaul - 08.07.2020 19:44

You mentioned in the previous video that you would talk about Momentum in this video, but I am yet to hear it....

@vgaurav3011 - 05.07.2020 19:29

Very, very amazing explanation, thanks a lot!!!

@enoshsubba5875 - 18.06.2020 03:59

Never Skip Calculus Class.

@prerakchoksi2379 - 18.06.2020 01:45

How do we deal with local maxima? I am still not clear.

@ibrahimshehzad7570 - 27.04.2020 00:45

I think at a local minimum ∂L/∂w is not 0, because the ANN output is not equal to the required output. If I am wrong, please correct me.

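Actually, the slope can be exactly 0 at a local minimum even though the network's output is still wrong there; that is precisely why local minima are a problem. A toy illustration (my own example, not from the video), where plain gradient descent settles at a zero-slope point whose loss is well above the global minimum:

    def f(w):                  # toy loss with two basins
        return w**4 - 2*w**2 + 0.5*w

    def df(w):                 # its slope
        return 4*w**3 - 4*w + 0.5

    w, lr = 1.5, 0.01          # deliberately start in the shallower basin
    for _ in range(2000):
        w -= lr * df(w)        # plain gradient descent

    print(f"w = {w:.3f}, slope = {df(w):.1e}, loss = {f(w):.3f}")
    # Settles at the LOCAL minimum near w ~ 0.93: slope ~ 0, yet the loss
    # (~ -0.52) is well above the global minimum near w ~ -1.06 (~ -1.51).
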
@sandipansarkar9211 - 17.04.2020 22:06

Hi Krish, that was also a great video in terms of understanding. Please make a playlist of practical implementations of these theoretical concepts, and please attach the ipynb notebook just below so that we can practice it in a Jupyter notebook.

@mizgaanmasani8456 - 11.03.2020 22:02

Why do neurons need to converge at the global minimum?

@vikashverma7893 - 10.03.2020 20:07

Nice explanation, Krish sir...

@mohdazam1404 - 02.01.2020 12:14

Ultimate explanation, thanks Krish

@harshvardhan8604 - 31.12.2019 06:57

Krish bhaiya, you are just awesome. Thanks for all that you are doing for us.

@nithinmamidala - 29.11.2019 16:06

Your videos are like a suspense movie: you need to watch another, and see it through to the end of the playlist.. so much time to spend to know the final result.

@quranicscience9631 - 21.11.2019 04:12

nice

@mscsakib6203 - 18.09.2019 18:03

Awesome...

@hiteshyerekar9810 - 30.07.2019 12:11

Hi Krish, all your videos are very good. But please do some practical examples in those videos, so we can understand how to implement things practically.

@knowledgehacker6023 - 30.07.2019 07:04

very nice

@touseefahmad4892 - 29.07.2019 21:09

Nice explanation, Krish sir...

@thealgorithm7633 - 29.07.2019 18:55

Very nice explanation
