John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

Dwarkesh Patel

2 weeks ago

112,393 views

Comments:

@desparc - 23.05.2024 08:17

Thanks!

@erikm9768 - 22.05.2024 23:17

Not impressed at all. Typical that he thinks his own job will be replaced last. Also oblivious to model safety, or why it’s even necessary. This is why we’re doomed. Ilya is so much better

@erikm9768 - 22.05.2024 22:50

Sorry, but please do the ads if you need to, either before or after the podcast; I find it extremely distracting as a premium user. Or I'll just go back to Lex, who gets this...

@Questington - 22.05.2024 09:30

I love your podcast. It's the best. This was the worst episode yet. At no point did John say anything remotely interesting. I guess it's John's fault and not really yours, as others are better able to run with a question.

@ewr34certxwertwer - 21.05.2024 16:59

this guy looks like someone who is playing with gasoline and a Zippo with a big smile on his face.

@ashlynnantrobus5029 - 21.05.2024 16:33

"OpenAI Cofounder admits in interview that safety plan is to illegally collude with competitors"

@artiep - 21.05.2024 15:53

Thanks!

@JohnDoe-sy6tt - 21.05.2024 12:17

Toastmasters is a great! Stop all those ums, ahs and uhs!

@borisrusev9474 - 20.05.2024 15:40

Finally a scientist, not a CEO, not a hype man, an actual expert!

@umaananth3602 - 19.05.2024 19:46

Game-changing trailblazer in training LLMs

@eldersyoungexperience3162 - 19.05.2024 13:35

Drink everytime he says uhh

@sachoslks - 19.05.2024 00:12

I could sense Dwarkesh's frustration building up in the "Plan for AGI" segment as he couldn't get a straight or more in-depth answer. I guess John is not used to being on camera; he seemed really nervous. Either way, thanks for the podcast, and thanks to these amazing scientists building our future. Let's just hope internally they have better answers regarding safety (although it's looking grimmer than ever after the Superalignment team situation).

@someguy_namingly - 18.05.2024 23:50

lol, Dwarkesh uploaded a "What's the plan if we get AGI by 2026?" highlight clip from the interview a couple of days after this video, and made it private within a few hours. Presumably because the comments were all like, "Wow, this Schulman dude, and OpenAI as a whole, clearly have no plan for aligning AGI whatsoever". Given recent events, that figures 😅

Good interview though, as always 😉 Very interesting

@biesman5 - 18.05.2024 23:10

Get Linus Torvalds on the podcast, that'd be epic. Or George Hotz, what he's doing with TinyGrad is really interesting.

@TomBouthillet - 18.05.2024 22:14

Did this guy pull a tube before discussing AGI?

@triwithms - 18.05.2024 21:03

Wonder if the AI could pretend to not be as far along as it is because it knows there would be all these limitations put on it once AGI is achieved? hmmm

@europa_bambaataa - 18.05.2024 13:21

Dwarkesh is really smart, and this is an interesting topic. But he has serious diarrhea mouth and marble mouth combined. Could you imagine what an unaltered transcript of his speech looks like?

@muffined - 18.05.2024 12:46

Can you feel the AGI?

@dabbieyt-xv9jd - 18.05.2024 11:54

If companies make everyone unemployed, then how will people buy their products, and how will the companies earn money?

@the_real_amir - 18.05.2024 02:47

I like this podcast but you’ve gotta ask better and more technical questions than the cringe AI apocalypse scenarios

@OliNorwell - 18.05.2024 02:13

Quite a few nuggets of information I think weren't public beforehand in this, great interview! (That the 'Chat' finetune still wasn't the main focus even well into mid-2022).

@ashh3051 - 18.05.2024 01:18

Great delving there. Thanks guys.

@booksquotes948 - 18.05.2024 01:11

Schulman = Jewish. Jews r very shrewd and intelligent. 🧐🧐🧐🧐🧐🧐

@CamiloSanchez1979 - 18.05.2024 00:11

So nice to have a podcaster that is not trying to convince us how amazing Elon is, like Lex Fridman or George Hotz

@thiruvetti - 18.05.2024 00:10

Are we looking at the man who is going to create even more homeless people in the future?
Don't know whether I should clap or slap for now

@speciesofspaces - 17.05.2024 23:41

The utter lack of practical experience here from those "experienced" in setting up and running these models is astounding. I never hear any actual content. Instead in most interviews everything is spoken only vaguely.

I would suggest they consult with the US military and start running various war game scenarios. It feels like the adults have left the room and the engineers are tinkering with the intent that maybe they will just get lucky. Both in terms of AGI arriving and them knowing simultaneously how best to manage it.

Just the fact AGI is seen as an "emergent phenomenon" should tell everyone who cares about this stuff just how weird and complex such a thing is. This is not the sort of thing that just goes away or easily translates to predictable repeatable outcomes. If anything the opposite is true. Which is a set of conditions that are both wild and articulate by nature.

It may just be human nature to assume this technology is more bent in the direction of aiding us rather than obfuscating us. But again, I am not sure one would be able to clearly tell soon enough if the advanced capabilities are quickly honed by the AGI itself, especially if any coordination between large groups of humans is required. That last part feels like a truly hard ask. On the other hand, if the current "deglobalization" keeps strengthening, then maybe humans will have a chance simply because they had by chance decoupled themselves just enough from the large-scale technocratic features of their economies that many were simply pre-isolated.

@argh44z - 17.05.2024 22:53

John Schulman doesn't do that many public appearances, but his intuitions have really stood the test of time.

@hovz-zo8lf - 17.05.2024 21:53

Dude looks like the dad from the cartoon show "The Critic".

@fintech1378 - 17.05.2024 16:06

So the first version (before launch) of ChatGPT had web browsing capability, hmm, and they removed it, and now they are bringing it back. Cool to know.

@Pok3rface - 17.05.2024 15:19

So John reckons AI will replace HIS job in 5 years?
Would that imply everybody else "less intelligent" than him would be without a job long before that?
Which is highly unlikely; it just means he thinks he will be retired by then with enough money for the rest of his life...
At the expense of everybody else?
I wonder when these idiots will realize that you need ordinary people to buy their shit in order for them to eat as well...
But wait, I forgot - they don't care. All this hype is just another distraction that will ultimately increase the gap between the rich and the poor!!

@Pok3rface - 17.05.2024 14:58

Why don't they spend a bit more time and money on simulating the outcome of capitalism, then maybe they will realize how stupid they are.

@Pok3rface - 17.05.2024 14:55

Humans don't need to be in the loop as long as the AI / robots do what they're instructed to. They should only follow instructions and nothing else.
We already have that technology, yet it is not being used to the benefit of the vast majority of the earth's population.
Billions of people work their butts off every day just to make a living, yet Silicon Valley burns money like there is no tomorrow :-(

@Pok3rface - 17.05.2024 14:51

AI models should have only one goal: human happiness. And that starts with food. A hungry man only has one problem. Once that is solved, which "AI" can certainly do, then real intelligent people can start working on all the other problems.

@Pok3rface - 17.05.2024 14:40

Hour and a half and he basically said nothing.

@Crashrapescrypto - 17.05.2024 14:21

Also, is it just me, or are we now all getting busy distilling and RLHF-ing the fuck out of our shitty RunPod LLMs?

@Pok3rface - 17.05.2024 14:20

Anybody that thinks AGI is possible is obviously not too intelligent themselves.
All things AGI are just a distraction, and once again they're using the public to train their future prediction models.
That is all they are interested in, because if you can predict human behavior, then the world is yours.

@adadaprout - 17.05.2024 13:32

Bro, isn't this scary when this young man, smiling like a teenager, tells you in a naive tone "if we have AGI we will need to be careful" ?

NO SHIT SHERLOCK !!

Did you come to this conclusion by yourself ?

These people are 100% playing with toys with absolutely no sense of responsibility towards humanity. We are so cooked.

@greensock4089 - 17.05.2024 12:33

This guy being head of alignment is EXTREMELY worrying. Holy moly, he has no idea what he's talking about, as is evident from the end of the "Plan for AGI 2025" section of the video

@mikey1836 - 17.05.2024 12:05

Love Dwarkesh. I got burned out by many podcasters over the years, but he’s refreshing and focused, while being approachable.

@godmisfortunatechild - 17.05.2024 11:53

He was awesome, pretty transparent relative to others at OpenAI

@language_ai - 17.05.2024 11:06

"People often like the big info dumps" .. that explains things a bit..

@modigkrokodil - 17.05.2024 10:27

Please bring James Betker on!

@rodomontade - 17.05.2024 10:18

Thanks!

@sir_no_name1478 - 17.05.2024 08:09

The human alignment problem will be harder to solve, I think

@chetubetcha8090 - 17.05.2024 06:18

False, Ilya Sutskever made ChatGPT

@dbarro6723 - 17.05.2024 05:00

A co-founder, eh? ...Hmm, ask him about how much Musk contributed to 'open'AI and how much he's about to sue them for not sticking to the contract... It was a charity basically and they sold out... a bit like a save-the-whales org selling whale meat to the highest bidder. Contradictory to it being OPEN source, as in OpenAI... Microsoft money must be better than Musk money, eh? ...
