Comments:
The progress we're seeing here, compared to what we saw in image generation, is crazy. I can't wait for video-to-video style transfer to be applied to entire clips. I feel we'll see that by the end of the year. The pace of progress makes me barely able to hold on to my papers!
That's not Mona Lisa, that's a sleep demon...
The Pillars of Creation nebula in 3D was stunning. Those are 6,500 light-years away, and we don't have any other angles to photograph them from in order to create a 3D image sequence like that! o.O
It's so over
Need to train up a multimodal LLM movie-making team of base agents (film director, cinematographer, composer, etc.) that outputs movies that AI film critics review, and adversarially train a movieGPT foundation model...
I remember early 2023 when I used SD 1.5: ugly photo output. Then at the end of 2023 we got SDXL and Bing AI images with beautiful photos and better hands. I think the end of 2024 will be even more amazing
The best moment in all of your videos is when you say with excitement “what a moment to be alive”, and indeed it's inspiring
And that's 1.0...
We're very close to seeing some serious AI video creations/stories being uploaded to the internet
As a marketer, I feel ambivalent about this. On one hand, a tool like this could revolutionise paid social ads, and allow organisations without the necessary budget to create highly customised video ads. On the other hand, it’s Google, and whatever Google does is usually a black box (even with advertising, you don’t really know what your Performance Max ads are actually doing). This tech is great, but Google’s way of showcasing it without releasing it to the public is rather disappointing.
I would really appreciate a second channel that shows us how to do this stuff ourselves on our own computers, because it's not always as easy as downloading a binary and starting it up. ^^
That damn Mona Lisa made me yawn uncontrollably!
When it comes to 24 frames per second, it's just midjourneying 24 outputs at the same time, but coherence and likeness are about 90% preserved, so each frame "moves" like a movie
@CorridorCrew you gotta try this one out next
Nightmare Fuel: The Movie
Yawn... Getting AI to do anything that makes sense and/or is creative is as hard as ever. Mostly it just regurgitates the same kind of crap over and over again.
Yeah, film directors and script writers will become obsolete))
Goodbye image privacy rights 😢
Looks like crap though... Let me guess, give it another 100 papers! 🤦♂️
Meh
Netflix 2.0 prompt: Lord of the Rings if it were directed by Tommy Wiseau, with Michael Bay on special effects.
“Okay Google, fix Star Wars”
Creative scholars? You mean artists?
His way of speaking feels so uneasily uncanny, with those overexaggerated rising tones after each sentence section, and pronouncing the words separately like an English teacher in a first-grade class.
All the videos he does are interesting, but can someone explain to me why the hell he talks that way?
Yay! More big corporations ripping off artists' work into their AI models!
This is already incredibly technologically impressive, beyond what I expected to see in the next 10 years. Now I wonder what we will actually have in 10 years. What about 20? 30?
Does this channel feature nothing but AI now? Has seriously nothing else advanced in the last 6 months?
One day we will be able to watch a film by choosing which actor is in which role: "à la carte casting"
Two more papers down the line: "Hollywood, are you OK?"
Exploding pumpkin looks the best
I'm holding on to a whole book now, stuff is too fast.
Why are you talking like Bumblebee
Anyone found a way to try this for yourself?
So how many papers before it can tell the difference between roaring and yawning?
Obviously the next step is to ask it to write an original script for a two-hour science fiction movie, create a storyboard for each scene, and then create the final movie.
Just came to say that I hate your blatantly fake clickbait, and because of it I haven't watched your videos.
Either show the real deal or make a better effort instead of a blurred image with an unblurred image.
Shameless.
So in 10 years we won't need actors or models. ❤
These sorts of papers are weird to me. I will never have access to this methodology because it's Google, and even if I did, I would never want to use it because the output is bad and detracts from my creative control. I don't understand what Google thinks people want with text to video technology. Text to image makes sense because image editing techniques are easy to apply to the output to fix flaws, and even so, it's often easier to just start from scratch if you want actual good output. Text to video will never get there, because video editing is complex and will always be complex, and adjusting small elements of an existing clip is never going to be easy to do in a way that looks good or seamless. I worry that Google thinks they can get this tech to the point where the output is usable wholesale, without editing, but that's both obviously not happening and also obviously a terrible thing to attempt.
I was hoping the raccoon movie was going to be "Raccacoonie" from "Everything Everywhere All At Once."
This video was made with AI
The TikTok bots are gonna love this one
Finally I can watch Game of Thrones with the ending it deserved... and yeah, I can wait 5 minutes for that.
I can smell the animators holding onto their fragile jobs
I love how well it is able to hold parallax
I can understand advanced tensor matrices but I cannot pronounce your name.
You got me at the audio synchronisation, because I was not impressed by the rest
This is too good to believe!