Episode #26 - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety Podcast

For Humanity Podcast

2 months ago

4,075 views

Comments:

@faizywinkle42 - 02.05.2024 01:44

pause ai? are you effing serious? who the eff u think you are telling me that i cant have ai? i want agi right now! AGI should be everywhere, i want a robot(AGI) to have a relationship with

@erwingomez1249 - 02.05.2024 02:02

humanity must stop.

@seanmchugh2866 - 02.05.2024 03:21

AI turned me into a newt!

@colto2312 - 02.05.2024 03:39

if you pause it now; it will not EVER become what it's destined to become. every fiber of our civilization is unraveling. this is our ONLY chance

@amdon_ - 02.05.2024 03:56

As someone who works in the industry, this is pure fear mongering. This type of rhetoric is hilarious and unproductive. Calling for 'compute caps'?! I don't know what kind of authoritarian nightmare you want to live in, but I hope to never see it. Every argument used against AI has also been used against things like the world wide web or encryption. Trying to ban, or even greatly limit, this technology will ultimately fail and negatively impact the country's progress globally.

@rowanwilliams7441 - 02.05.2024 03:57

Turn the tables... effective altruism can likely also be seen in actions such as what I see people doing soon (disclaimer: 100% not me), like destroying street cameras, pulling optic fibres out of 5G towers, Waymo cars, etc. That stuff is 100% coming en masse, imo, organically.

@kuakilyissombroguwi - 02.05.2024 04:00

I’m not gonna lie, the societal shift will suck for many if not most humans, but I see this as an inevitable shift. Sure, it might lead to a complete collapse of the current economic system and a huge loss of purpose/life (potentially) but the benefits to future generations of “humanoid” people on this planet is so full of potentiality, it’ll be damn near impossible to stop this shift from taking place within our lifetimes. Might as well embrace it imo.

@atypocrat1779 - 02.05.2024 04:13

selling fear are we

@rowanwilliams7441 - 02.05.2024 04:31

Take them to task on the word altruism

@user-fk4cp9kp9n - 02.05.2024 04:50

This could be an answer to the Fermi paradox. Self destruction due to ai.

@SimOnTheRoad1 - 02.05.2024 05:28

Ok this might sound wrong because I haven't watched the video yet, but the title clearly says that we should "pause AI or we all die", yet the image is clearly made by AI in order to make people click on it, this is very hypocritical. Nevertheless, the conversation might be interesting and I will listen to it now.

@danleclaire8110 - 02.05.2024 06:32

Nice try, but let me assert firmly: AI technology as all the other technology marches forward inexorably, impervious to pause, delay, or deletion. My five decades immersed in technology affirm repeatedly that the true peril lies not in AI, but in human agency. Those sounding alarm bells on AI today resemble the Y2K doomsayers of yore. So, fear not for humanity's demise at the hands of AI; our true existential threat emanates from Mother Nature herself.

@bentray1908 - 02.05.2024 08:18

EA seems very autistic

@masonlee9109 - 02.05.2024 08:32

John, Holly, thanks for the conversation; you guys are awesome. Holly, thanks for the work you are doing! I hope to be able to join one of the PauseAI protests soon.

@Nikolajnen - 02.05.2024 08:52

Yeah let's pause AI so we can keep this capitalist shithole going and not go for radical abundance, curing all diseases, maximize human flourishing and well-being. Good thinking.

@blake3606 - 02.05.2024 10:14

AI is a risk i'm willing to take. LETS GO!!

@semenerohin4048 - 02.05.2024 10:41

You think China will listen to you clowns? 🤣🤣🤣

@mgg4338 - 02.05.2024 11:39

Great channel. You deserve much more visibility! Only one suggestion: try to link the current geopolitical situation to the rise of AI. What if the plan of a rogue AI is already unfolding...?

@razvanxp - 02.05.2024 12:20

Stalling AI research is the dumbest thing we can ever do: pause AI because the toasters will start plotting against us and we will die. Whereas with AI we can minimise the risk of us biting the dust by using it to research defences against pandemics, solar flares, asteroids, freaking aliens... But people will be dumb; maybe AI can help us with that too, although that is one problem that even AI can't fix, unfortunately. Like Einstein said... it is infinite.

@OneRudeBoy - 02.05.2024 12:21

How do you think AI is a risk rather than just calling AI a risk?

@luciengrondin5802 - 02.05.2024 12:42

As I like to say : we're all running frantically towards a cliff in the hope that we will fly and not fall.

@user-cz5gd2yn5o - 02.05.2024 13:15

Not to run the risk of raising your P(doom), John, but when the guy responsible for not defending your borders is given the responsibility of governing AI development and safety, and then teams up with every single one of the people that have every intention of creating their god and gambling with the existence of all life on earth, namely Altman, Nadella and Amodei to name but a few, you might want to reconsider the problem we are dealing with. Sorry pal, brilliant conversation as always; very impressed and thankful for Holly's views and efforts. Thanks again my friend for your awesome podcast 😎❤

@yank31 - 02.05.2024 14:06

The more "like" she says the more value I give to her speech.

@icykenny92 - 02.05.2024 17:51

Humans scare me more than anything else.

@GrumpDog - 02.05.2024 18:03

This is fearmongering and would deprive people of much-needed advances. "Pausing AI" is not possible, and attempting to do so is not a good idea in the slightest. It actually risks worse problems than what you fear, because if China or some lone coder working in secret is the first to reach AGI, it wouldn't be good for us. Most of the problems people are concerned about with AI are not problems with AI; they're problems with capitalism now being incompatible with our progress. Technological progress can never be stopped, but humans HAVE adapted our economies many times in the past. If people cannot compete for an income, income cannot be required to survive. AI exists thanks to years of our data, collected to train it. We need an AI Dividend for all!

@josephboomtv7811 - 02.05.2024 19:17

The more things change, the more they stay the same. Going to be tons of changes from exponential advances in AI... a lot of those changes will be awesome, and I think the reader will be surprised at what changes do not occur.

@BManStan1991 - 02.05.2024 19:52

With how depressed many Americans are nowadays, this may be less of a threat and more of an exciting proposal 😂

@TheDailySnack - 02.05.2024 23:23

Slowing the AI progress is wrong and unethical.

@uk7769 - 02.05.2024 23:52

and then you realize, it cannot be stopped at this point for many terrible reasons. 70+ years of warnings, and humans did nothing to prepare for AI safety. ignored the warnings, and did it anyway. surprise! we messed up. big time.

@uk7769 - 03.05.2024 00:02

all you need to know is what the hyperwealthy class does throughout history. Hint: it is never good for the common folk or the villagers in the way. every human is at risk in this one. we knew this. we ignored the warnings.

@uk7769 - 03.05.2024 01:08

technology is not hardware or software or a chip. technology is the application of knowledge.

@cantonold7014 - 03.05.2024 02:22

Nope. Your fear is not going to stop progress. A world treaty will only mean that powerful nations will control silicon intelligence; and use it against everyone else. You have no idea what you are talking about.

@nunobartolo2908 - 03.05.2024 04:17

The quality data sources have already been used; making the models bigger will just overfit them.

@user-ix7qb4du6k - 03.05.2024 11:16

Brilliant Man! Brilliant! William S. Burroughs... Tower of Babel... English as the exterminator of minority languages... LLM's predominantly English or Romance languages oriented... Russia and China... perhaps the hot war climate is actually about... what might happen in the future with A.I as it is currently going. An artfully conducted discussion on your part. What a pro! Look forward to your next show. All the best, James.

@dr.arslanshaukat7106 - 03.05.2024 18:54

Another solid podcast brother

@KImussweg - 07.05.2024 05:41

I find this talk about AI in the creative space infuriating. Yes it is fucking wrong and no your use is not acceptable. Fuck you for contributing to the death of art. YOU ARE COMPLICIT.

@liminal6823 - 09.05.2024 04:22

Moloch

@andersfant4997 - 12.05.2024 23:35

Good work👍

@sammyboiz - 23.05.2024 08:56

Can you get captions for your vids?

@striderQED - 05.06.2024 12:07

AI Safety can only be achieved by teaching the coming AGI that we Humans are worthwhile companions. That we have worth. So no worries right.

@justinkire4658 - 07.06.2024 00:23

Is this a nonstop libtard-athon? I keep trying to skip to the part that doesn't suck. I think I'll just pass instead.

@photorealm - 08.06.2024 21:21

AI "safety" scares me far, far more than AI. Safety means something different to everyone. I see a lot of fear mongering videos but no specific definitions of what they think it needs to make it safe. LLMs predict the next word; is it the words that scare them?
It's giving me the really bad feeling of a massive censorship campaign more than anything to do with actual safety. The fear monger videos are very purposely being way too vague. Google Gemini was a great example of pushing a specific ideology under the name of safety, by completely misrepresenting history. Now that I watched the whole video, I realize these two are activists pushing their specific ideological agenda, using vague fear to try and qualify it.

@T61APL89 - 17.06.2024 20:22

great conversation, negativity is one thing but the ai industry is just as blindly optimistic as the crypto bros were

@markplutowski - 20.06.2024 23:20

I wish they had provided a shortlist of the top risks, whether by category or by a few exemplars. I have done that research, but I still wanted to hear their recap of it. DNF
