AI Animation in Stable Diffusion

Sebastian Torres

7 months ago

91,228 views

Comments:

@LucidFirAI - 03.01.2024 09:43

I think AnimateDiff makes less flickery results? I'm also up against a brick wall with a model I want to use refusing to play ball and remain high quality. Some models work well with AnimateDiff, and some models are ruined, or at least reduced in quality, by it :/ I know not what to do.

@themightyflog - 27.12.2023 09:05

I tried the tutorial but mine just wasn't as consistent as yours. Hmmmmm... too much flicker.

@MisterWealth - 24.12.2023 01:36

I cannot get these types of results at all on mine, but I use the exact same settings and LoRA as well. It just makes it look like my face has a weird filter on it. It won't make my guy cartoony at all.

@Basicskill720 - 23.12.2023 11:52

Amazing workflow, thank you for sharing.

@jonjoni518 - 20.12.2023 15:03

Thanks for the work 🤩🤩🤩. I have a doubt: in ControlNet you use diff_control_sd15_temporalnet_fp16.safetensors, but when you click on the ControlNet model link in your PDF it downloads diffusion_pytorch_model.fp16.safetensors. My question is which model to use, diff_control_sd15_temporalnet_fp16.safetensors or diffusion_pytorch_model.fp16.safetensors?

@StoxiaAI - 14.12.2023 13:47

Sorry if this is a silly question, but I am new to Stable Diffusion. How do I access the UI you are using in this video? Is it only possible through a local install?

@Sinistar74 - 11.12.2023 08:48

I don't see a way to join an email list anywhere on your website.

@rilentles7134 - 07.12.2023 22:06

I also don't know how to install LCM.

@rilentles7134 - 07.12.2023 21:28

I cannot find diff_control_sd15_temporalnet_fp16 for the ControlNet, why is that?

@DanielMPaes - 07.12.2023 19:46

I'm happy to know that one day I'll be able to make a remake of the Shield Hero anime.

@mhitomorales4497 - 07.12.2023 08:09

As an animator, I don't know if this is scary or an advantage.

@amkkart - 03.12.2023 12:45

And where to find the VAE??? You make nice videos, but you should provide all the links so that we can follow your steps.

@soma004 - 02.12.2023 09:28

Amazing video, but I can't seem to download ethernaldark_goldenmax.safetensors from tensorart :( Any suggestions?

@tiffloo5457 - 02.12.2023 02:55

niceee

@Skitskl33 - 01.12.2023 19:15

Reminds me of the movie A Scanner Darkly, which used interpolated rotoscoping.

@themightyflog - 01.12.2023 14:33

I don't see where and how to do the LCM install. I think you left a few things out.

@themightyflog - 01.12.2023 13:42

I like how you talked about "occlusion", I think it was. It is like making a comic book page with bleed on it. Nice to know we have to have bleed on it.

@user-xy9bg3gq9v - 01.12.2023 11:51

good job bro 🤟❤‍🔥

@AlexWinkler - 30.11.2023 23:06

Wow, this is next-level art.

@LawsOnJoystick - 30.11.2023 08:15

Are you running a series of images through Stable Diffusion and then piecing them back together later?

@AtomkeySinclair - 30.11.2023 02:41

Eh - it looks like rotoscoping. Think Heavy Metal scenes, Wizards, and Sleeping Beauty. It's too real for fake and too fake for real.

@user-jl4ps7qw4p - 30.11.2023 02:30

Your animations have great style! Thanks for sharing your know-how.

@TheAgeofAI_film - 29.11.2023 17:22

Thanks for the tutorial, I have subscribed; this is really useful for our AI film development.

@cam6996 - 29.11.2023 10:12

AMAZING!

Did you make all the intro clips?

@commanderdante3185 - 29.11.2023 08:46

Wait wait wait. You're telling me you could have your character face forward and generate textures for their face?

@kanall103 - 29.11.2023 02:19

LMS test to LMS Karras? Best tutorial... no pointless talking.

@ArturoVega - 29.11.2023 00:47

Thanks for the tutorial, never used those ControlNet units before, been trying with Canny and OpenPose. This has been very useful. Any idea how we can deflicker the animation without DaVinci? Either something free or cheap. Thanks in advance.

@typingcat - 28.11.2023 23:18

Is it? I just tested a famous online AI image-to-video site, and the results were terrible. For example, I uploaded a still cut from a Japanese animation where a boy and a girl were on a slope. I generated two videos, and in both weird things happened, like their front turning into their back. It was unusable.

@its4anime - 28.11.2023 15:13

I need to upgrade my RTX 2070. Generating at that high a pixel count took minutes 😭

@omegablast2002 - 28.11.2023 04:10

You didn't tell us where to get Eternal Dark.

@AI_mazing01 - 27.11.2023 19:04

I get an error when trying to change those .py files. Also, there might be an error in the instructions ("Add this code at line 5, it should take up lines 5 to 18 once pasted"): when I paste this code I get more lines, 5 to 19.

@armondtanz - 27.11.2023 18:38

This looks awesome. Where do you learn about LoRAs & VAEs? I heard them get mentioned but have no clue.

@razorshark2146 - 27.11.2023 17:21

AI feels more like simply morphing one image into another than actually pushing the feeling of motion that professional artists put into their work/art. AI creates perfect volume per drawing and then tries to cover it up using some kind of wiggly lines to make it feel a bit more hand-drawn or something. The outcome is better than what most badly done content looks like, but it will never fully replace properly done animation by artists who have actually put in the effort of mastering the required skills. It will always be a tool that steals from these artists to generate something that gets close but not quite there, and has for years now... At least this particular craft seems safe from being taken over. It will just end up being another style/preference of animation; to untrained eyes it looks amazing. : )

@santitabnavascues8673 - 27.11.2023 17:06

I prefer traditional 3D animation; it has evolved long enough to provide cohesive animation from one frame to the next through simulation. I mean... the point of this is reinventing over complete results. Feels redundant. Curious experiment though.

@colehiggins111 - 27.11.2023 16:22

Love this, would love to see a tutorial about how you input the video and batch-rendered the whole thing to match the style you created.

@leosmi1 - 27.11.2023 16:20

It's getting wild.

@coloryvr - 27.11.2023 15:21

Oh wow! It's very impressive how SD continues to develop!
BIG FAT FANX for that video!

@hurricanepirates8602 - 27.11.2023 15:17

Where's the ethernaldarkmix_goldenmax model?

@art3112 - 27.11.2023 13:27

Very good tutorial. Thanks. More tutorials on A1111 and video/animation are most welcome. My only slight criticism is some of it felt a bit rushed to me. A little more, and slower, explanation might help in parts. I will check back on your channel though as very helpful. Keep up the great work!

@SirChucklenutsTM - 27.11.2023 11:59

Hoo boy... when's the first AInime coming out?

@USBEN. - 27.11.2023 10:32

We're getting there; the consistency of the new Stable Video model is way better than any of the competition.

@shitpost_xxx - 27.11.2023 09:42

Nice! Can you make a tutorial on going from Cascadeur to Stable Diffusion?

@ToonstoryTV-vs6vf - 27.11.2023 09:29

Very nice, but how can I get to this page on the web, whether on computer or phone? I am new to that.

@azee6591 - 27.11.2023 09:27

Prediction: as convoluted as this process seems now, in the next 60-90 days Stable Diffusion will have text-description-to-animation as regular models, no different from the image models of today.

@ProzacgodAI - 27.11.2023 07:57

I've never published anything, but I got some decent temporal stability, at a lower resolution, with ControlNet and CharTurner plus an inpainting style.

Especially for your helmet scene, with all of the various Blender cutouts...

You generate your character's face; then, for the second frame, the left half is the whole previously generated frame and the right half is the frame you need to generate now.

Using inpainting masks, I focused on that right side, using the previous frame, or a master frame, for the left-side control.

Sometimes I used ControlNet, sometimes not, but CharTurner worked a treat.
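
[Editor's note: a minimal Python sketch of the side-by-side stitching this comment describes, assuming Pillow and frames of equal size. The function and file names are hypothetical, and the generation itself would run through your own img2img inpainting pipeline (e.g. A1111), with or without ControlNet.]

    # Left half: the previous, already-stylized frame (kept as reference).
    # Right half: the new raw frame, to be regenerated via inpainting.
    from PIL import Image

    def make_inpaint_pair(prev_frame_path, curr_frame_path):
        prev = Image.open(prev_frame_path).convert("RGB")
        curr = Image.open(curr_frame_path).convert("RGB")
        w, h = prev.size

        # Stitch the two frames onto one double-width canvas.
        canvas = Image.new("RGB", (w * 2, h))
        canvas.paste(prev, (0, 0))
        canvas.paste(curr, (w, 0))

        # Inpaint mask: black = keep (left half), white = regenerate (right half).
        mask = Image.new("L", (w * 2, h), 0)
        mask.paste(255, (w, 0, w * 2, h))
        return canvas, mask

    canvas, mask = make_inpaint_pair("frame_000.png", "frame_001.png")
    canvas.save("init_pair.png")   # feed as the img2img init image
    mask.save("inpaint_mask.png")  # feed as the inpaint mask
    # After generation, crop the right half back out as the finished frame.

Because the kept half sits on the same canvas, the sampler sees the previous frame as in-image context while regenerating the new one, which is what nudges the output toward temporal consistency.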

@rangerstudios - 27.11.2023 07:41

Great stuff! Thanks for sharing.

@BassmeantProductions - 27.11.2023 06:16

Sooooo close to what I need

@tobecooper3090 - 27.11.2023 06:16

How can I find the link to the EthernalDark safetensors?

@pogiman - 27.11.2023 03:51

Amazing workflow, thank you for sharing.
