Comments:
Thanks!
Not impressed at all. Typical that he thinks his own job will be replaced last. Also oblivious to model safety, or why it's even necessary. This is why we're doomed. Ilya is so much better
Sorry, but please do the ads if you need to, either before or after the podcast. I find it extremely distracting as a premium user. Or I'll just go back to Lex, who gets this..
I love your podcast. It's the best. This was the worst episode yet. At no point did John say anything remotely interesting. I guess it's John's fault and not really yours, as others are better able to run with a question.
This guy looks like someone who is playing with gasoline and a zipper with a big smile on his face.
"OpenAI Cofounder admits in interview that safety plan is to illegally collude with competitors"
Thanks!
Toastmasters is great! Stop all those ums, ahs and uhs!
Finally a scientist, not a CEO, not a hype man, an actual expert!
Game-changing trailblazer in training LLMs
Drink every time he says uhh
I could sense Dwarkesh's frustration building up in the "Plan for AGI" segment as he couldn't get a straight or more in-depth answer. I guess John is not used to being on camera; he seemed really nervous. Either way, thanks for the podcast, and thanks to these amazing scientists building our future. Let's just hope internally they have better answers regarding safety (although it's looking grimmer than ever after the Superalignment team situation).
lol, Dwarkesh uploaded a "What's the plan if we get AGI by 2026?" highlight clip from the interview a couple of days after this video, and made it private within a few hours. Presumably because the comments were all like, "Wow, this Schulman dude, and OpenAI as a whole, clearly have no plan for aligning AGI whatsoever". Given recent events, that figures 😅
Good interview though, as always 😉 Very interesting
Get Linus Torvalds on the podcast, that'd be epic. Or George Hotz, what he's doing with TinyGrad is really interesting.
Did this guy pull a tube before discussing AGI?
Wonder if the AI could pretend to not be as far along as it is, because it knows there would be all these limitations put on it once AGI is achieved? hmmm
Dwarkesh is really smart, and this is an interesting topic. But he has serious diarrhea mouth and marble mouth combined. Could you imagine what an unaltered transcript of his speech looks like?
Can you feel the AGI?
If companies make everyone unemployed, then how will people buy their products? How will they earn money?
I like this podcast but you've gotta ask better and more technical questions than the cringe AI apocalypse scenarios
Quite a few nuggets of information in this that I think weren't public beforehand. Great interview! (e.g. that the 'Chat' finetune still wasn't the main focus even well into mid-2022).
Great delving there. Thanks guys.
So nice to have a podcaster that is not trying to convince us how amazing Elon is, like Lex Fridman or George Hotz
Are we looking at the man who is going to create even more homeless people in the future?
Don't know whether I should clap or slap for now
The utter lack of practical experience here from those "experienced" in setting up and running these models is astounding. I never hear any actual content. Instead, in most interviews everything is discussed only vaguely.
I would suggest they consult with the US military and start running various war game scenarios. It feels like the adults have left the room and the engineers are tinkering with the intent that maybe they will just get lucky. Both in terms of AGI arriving and them knowing simultaneously how best to manage it.
Just the fact AGI is seen as an "emergent phenomenon" should tell everyone who cares about this stuff just how weird and complex such a thing is. This is not the sort of thing that just goes away or easily translates to predictable repeatable outcomes. If anything the opposite is true. Which is a set of conditions that are both wild and articulate by nature.
It may just be human nature to assume this technology is more bent in the direction of aiding us rather than obfuscating us. But again, I am not sure one would be able to clearly tell soon enough if the advanced capabilities are quickly honed by the AGI itself, especially if any coordination between large groups of humans is required. That last part feels like a truly hard ask. On the other hand, if the current "deglobalization" keeps strengthening, then maybe humans will have a chance, simply because they had by chance decoupled themselves just enough from the large-scale technocratic features of their economies that many were simply pre-isolated.
John Schulman doesn't do that many public appearances, but his intuitions have really stood the test of time.
Dude looks like the dad from the cartoon show "The Critic".
So the first version (before launch) of ChatGPT had web browsing capability, hmm, and they removed it, and they're bringing it back. Cool to know
So John reckons AI will replace HIS job in 5 years?
Would that imply everybody else "less intelligent" than him would be without a job long before that?
Which is highly unlikely; it just means he thinks he will be retired by then with enough money for the rest of his life...
At the expense of everybody else?
I wonder when these idiots will realize that you need ordinary people to buy their shit in order for them to eat as well...
But wait, I forgot - they don't care. All this hype is just another distraction that will ultimately increase the gap between the rich and the poor!!
Why don't they spend a bit more time and money on simulating the outcome of capitalism? Then maybe they will realize how stupid they are.
Humans don't need to be in the loop as long as the AI / robots do what they're instructed to. They should only follow instructions and nothing else.
We already have that technology, yet it is not being used to the benefit of the vast majority of the earth's population.
Billions of people work their butts off every day just to make a living, yet Silicon Valley burns money like there is no tomorrow :-(
AI models should have only one goal: human happiness. And that starts with food. A hungry man only has one problem. Once that is solved, which "AI" can certainly do, then real intelligent people can start working on all the other problems.
An hour and a half, and he basically said nothing.
Also, is it just me or are we now all getting busy distilling and RLHF-ing the fuck out of our shitty RunPod LLMs?
Anybody that thinks AGI is possible is obviously not too intelligent themselves.
All things AGI are just a distraction, and once again they're using the public to train their future prediction models.
That is all they are interested in, because if you can predict human behavior, then the world is yours.
Bro, isn't it scary when this young man, smiling like a teenager, tells you in a naive tone "if we have AGI we will need to be careful"?
NO SHIT SHERLOCK!!
Did you come to this conclusion by yourself?
These people are 100% playing with toys with absolutely no sense of responsibility towards humanity. We are so cooked.
This guy being head of alignment is EXTREMELY worrying. Holy moly, he has no idea what he's talking about, as evident from the end of the "plan for AGI" section of the video
Love Dwarkesh. I got burned out by many podcasters over the years, but he's refreshing and focused, while being approachable.
He was awesome, pretty transparent relative to others at OpenAI
"People often like the big info dumps" .. that explains things a bit..
Please bring James Betker on!
Thanks!
The human alignment problem will be harder to solve, I think
False, Ilya Sutskever made ChatGPT
A co-founder, hey... mmmmm, ask him about how much Musk contributed to 'open'AI and how much he's about to sue them for not sticking to the contract... it was basically a charity and they sold out... a bit like a save-the-whales org selling whale meat to the highest bidder. Contradictory to it being OPEN source, as in OpenAI... Microsoft money must be better than Musk money, hey?