Comments:
pause ai? are you effing serious? who the eff do you think you are telling me that i cant have ai? i want agi right now! AGI should be everywhere, i want a robot (AGI) to have a relationship with
humanity must stop.
AI turned me into a newt!
if you pause it now, it will not EVER become what it's destined to become. every fiber of our civilization is unraveling. this is our ONLY chance
As someone who works in the industry, this is pure fearmongering. This type of rhetoric is hilarious and unproductive. Calling for 'compute caps'?! I don't know what kind of authoritarian nightmare you want to live in, but I hope to never see it. Every argument used against AI has also been used against things like the world wide web or encryption. Trying to ban, or even greatly limit, this technology will ultimately fail and negatively impact the country's progress globally.
Turn the tables... effective altruism can likely also be seen in actions such as what I see people doing soon (disclaimer: 100% not me), like destroying street cameras, optic fibres out of 5G towers, Waymo cars, etc. That stuff is 100% coming en masse imo, organically.
I’m not gonna lie, the societal shift will suck for many if not most humans, but I see this as an inevitable shift. Sure, it might lead to a complete collapse of the current economic system and a huge loss of purpose/life (potentially) but the benefits to future generations of “humanoid” people on this planet is so full of potentiality, it’ll be damn near impossible to stop this shift from taking place within our lifetimes. Might as well embrace it imo.
selling fear are we
Take them to task on the word altruism
This could be an answer to the Fermi paradox. Self destruction due to ai.
Ok this might sound wrong because I haven't watched the video yet, but the title clearly says that we should "pause AI or we all die", yet the image is clearly made by AI in order to make people click on it, this is very hypocritical. Nevertheless, the conversation might be interesting and I will listen to it now.
Nice try, but let me assert firmly: AI technology as all the other technology marches forward inexorably, impervious to pause, delay, or deletion. My five decades immersed in technology affirm repeatedly that the true peril lies not in AI, but in human agency. Those sounding alarm bells on AI today resemble the Y2K doomsayers of yore. So, fear not for humanity's demise at the hands of AI; our true existential threat emanates from Mother Nature herself.
EA seems very autistic
John, Holly, thanks for the conversation; you guys are awesome. Holly, thanks for the work you are doing! I hope to be able to join one of the PauseAI protests soon.
Yeah, let's pause AI so we can keep this capitalist shithole going and not go for radical abundance, curing all diseases, maximizing human flourishing and well-being. Good thinking.
AI is a risk i'm willing to take. LETS GO!!
You think China will listen to you clowns? 🤣🤣🤣
Great channel. You deserve much more visibility! Only one suggestion: try to link the current geopolitical situation to the rise of AI. What if the plan of a rogue AI is already unfolding...?
Stalling AI research is the dumbest thing we can ever do: pause AI because the toasters will start plotting against us and we will die. Meanwhile, with AI we can minimise the risk of us biting the dust by using it to research defences against pandemics, solar flares, asteroids, freaking aliens... but people will be dumb. Maybe AI can help us with that too, although that is one problem that even AI can't fix, unfortunately... like Einstein said, it is infinite.
How do you think AI is a risk rather than just calling AI a risk?
As I like to say: we're all running frantically towards a cliff in the hope that we will fly and not fall.
Not to run up your p(doom), John, but when the guy responsible for not defending your borders is given the responsibility of governing AI development and safety, and then teams up with every single one of the people that have every intention of creating their god, and gambling with the existence of all life on earth, namely Altman, Nadella, and Amodei to name but a few, you might want to reconsider the problem we are dealing with. Sorry pal, brilliant conversation as always, very impressed and thankful for Holly’s views and efforts. Thanks again my friend for your awesome podcast 😎❤
The more "like" she says, the more value I give to her speech.
Humans scare me more than anything else.
This is fearmongering and would deprive people of much needed advances. "Pausing AI" is not possible, and attempting to do so is not a good idea in the slightest. It actually risks worse problems than what you fear, because if China or some lone coder working in secret is the first to reach AGI, it wouldn't be good for us. Most of the problems people are concerned about with AI are not problems with AI; they're problems with capitalism now being incompatible with our progress. Technological progress can never be stopped, but humans HAVE adapted our economies many times in the past. If people cannot compete for an income, income cannot be required to survive. AI exists thanks to years of our data, collected to train it. We need an AI Dividend for all!
The more things change, the more they stay the same. There are going to be tons of changes from exponential advances in AI. A lot of those changes will be awesome, and I think the reader will be surprised at what changes do not occur.
With how depressed many Americans are nowadays, this may be less of a threat and more of an exciting proposal 😂
Slowing the AI progress is wrong and unethical.
and then you realize, it cannot be stopped at this point for many terrible reasons. 70+ years of warnings, and humans did nothing to prepare for AI safety. ignored the warnings, and did it anyway. surprise! we messed up. big time.
all you need to know is what the hyperwealthy class does throughout history. Hint: it is never good for the common folk or the villagers in the way. every human is at risk in this one. we knew this. we ignored the warnings.
technology is not hardware or software or a chip. technology is the application of knowledge.
Nope. Your fear is not going to stop progress. A world treaty will only mean that powerful nations will control silicon intelligence and use it against everyone else. You have no idea what you are talking about.
The quality data sources have already been used; making the models bigger will just overfit them.
Brilliant Man! Brilliant! William S. Burroughs... Tower of Babel... English as the exterminator of minority languages... LLM's predominantly English or Romance languages oriented... Russia and China... perhaps the hot war climate is actually about... what might happen in the future with A.I as it is currently going. An artfully conducted discussion on your part. What a pro! Look forward to your next show. All the best, James.
Another solid podcast brother
I find this talk about AI in the creative space infuriating. Yes it is fucking wrong and no your use is not acceptable. Fuck you for contributing to the death of art. YOU ARE COMPLICIT.
Moloch
Good work👍
Can you get captions for your vids?
AI Safety can only be achieved by teaching the coming AGI that we Humans are worthwhile companions. That we have worth. So no worries right.
Is this a nonstop libtard-athon? I keep trying to skip to the part that doesn't suck. I think I'll just pass instead.
AI "safety" scares me far, far more than AI. Safety means something different to everyone. I see a lot of fearmongering videos but no specific definitions of what they think it needs to make it safe. LLMs predict the next word; is it the words that scare them?
It's giving me the really bad feeling of a massive censorship campaign more than anything to do with actual safety. The fearmongering videos are very purposely being way too vague. Google Gemini was a great example of pushing a specific ideology under the name of safety, by completely misrepresenting history. Now that I've watched the whole video, I realize these two are activists pushing their specific ideological agenda using vague fear to try and qualify it.
Great conversation. Negativity is one thing, but the AI industry is just as blindly optimistic as the crypto bros were.
I wish they had provided a shortlist of the top risks, whether by category or by a few exemplars. I have done that research, but I still wanted to hear their recap of it. DNF