Comments:
I think animatediff makes less flickery results? I'm also up against a brick wall with a model I want to use refusing to play ball and remain high quality. Some models work well with animatediff, and some models are ruined or at least quality reduced by it :/ I know not what to do.
I tried the tutorial but mine just wasn't as consistent as yours. Hmmmmm..... too much flicker.
I cannot get these types of results at all on mine, even though I use the exact same settings and LoRA. It just makes it look like my face has a weird filter on it. It won't make my guy cartoony at all.
amazing workflow thank you for sharing
Thanks for the work 🤩🤩🤩. I have a doubt: in ControlNet you use diff_control_sd15_temporalnet_fp16.safetensors, but when I click the controlnet model link in your PDF it downloads diffusion_pytorch_model.fp16.safetensors. My question is which model to use, diff_control_sd15_temporalnet_fp16.safetensors or diffusion_pytorch_model.fp16.safetensors?
Sorry if this is a silly question, but I am new to Stable Diffusion. How do I access the UI you are using in this video? Is it only possible through a local install?
I don't see a way to join an email list anywhere on your website.
I also don't know how to install LCM.
I cannot find diff_control_sd15_temporalnet_fp16 for the ControlNet, why is that?
I'm happy to know that one day I'll be able to make a remake of the Shield Hero anime.
As an animator, I don't know if this is scary or an advantage.
And where do we find the VAE??? You make nice videos, but you should provide all the links so that we can follow your steps.
Amazing video, but I can't seem to download ethernaldark_goldenmax.safetensors from tensorart :( Any suggestions?
niceee
Reminds me of the movie A Scanner Darkly, which used interpolated rotoscoping.
I don't see where and how to do the LCM install. I think you left a few things out.
I like how you talked about "occlusion", I think that's the term. It's like making a comic book page with bleed on it. Nice to know we need to have bleed on it.
good job bro 🤟❤🔥
Wow this is next level art
Are you running a series of images through Stable Diffusion and then piecing them back together later?
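For the "piecing back together" question above, a minimal sketch of one common approach, assuming the stylized frames come out as numbered PNGs and ffmpeg is installed; the folder name, numbering pattern, and frame rate here are hypothetical, not taken from the video.

```python
# Hypothetical sketch: stitch numbered frames back into a video via ffmpeg.
# Assumes files like frames/frame_0001.png and ffmpeg available on PATH.
import subprocess

subprocess.run([
    "ffmpeg",
    "-framerate", "12",             # assumed playback rate of the sequence
    "-i", "frames/frame_%04d.png",  # numbered input frames
    "-c:v", "libx264",              # H.264 for broad player compatibility
    "-pix_fmt", "yuv420p",          # required by many players
    "output.mp4",
], check=True)
```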
Eh - it looks like rotoscoping. Think Heavy Metal scenes, Wizards, and Sleeping Beauty. It's too real for fake and too fake for real.
Your animations have great style! Thanks for sharing your know-how.
Thanks for the tutorial, I have subscribed, this is really useful for our AI film development
AMAZING!
did you make all the intro clips?
wait wait wait. you're telling me you could have your character face forward and generate textures for their face?
LMS test to LMS Karras? Best tutorial... no pointless talking.
Thanks for the tutorial, I'd never used those ControlNet units before, I'd been trying with Canny and OpenPose. This has been very useful. Any idea how we can deflicker the animation without DaVinci? Either something free or cheap. Thanks in advance.
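On the free-deflicker question above: ffmpeg ships a deflicker video filter that can serve as a no-cost alternative to DaVinci's plugin. A minimal sketch, with the window size and averaging mode picked arbitrarily as starting points rather than recommended values.

```python
# Sketch: ffmpeg's built-in "deflicker" filter as a free deflicker pass.
# The 10-frame window and "pm" (power mean) mode are arbitrary choices.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "animation.mp4",
    "-vf", "deflicker=size=10:mode=pm",  # smooth luma over a 10-frame window
    "deflickered.mp4",
], check=True)
```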
Is it? I just tested a famous online AI image-to-video site, and the results were terrible. For example, I uploaded a still cut from a Japanese animation where a boy and a girl were on a slope. I generated two videos, and in both videos weird things happened, like their fronts turning into their backs. It was unusable.
I need to upgrade my RTX 2070. Generating at that high a resolution took minutes 😭
You didn't tell us where to get EthernalDark.
I get an error when trying to change those .py files. Also, there might be an error in the instructions ("Add this code at line 5, it should take up lines 5 to 18 once pasted."): when I paste the code it takes up more lines, 5 to 19.
This looks awesome. Where do you learn about LoRAs and VAEs? I've heard them mentioned but have no clue.
AI feels more like simply morphing one image into another than actually pushing the feeling of motion that professional artists put into their work. AI creates perfect volume per drawing and then tries to cover it up with some kind of wiggly lines to make it feel a bit more hand-drawn. The outcome is better than what most badly done content looks like, but it will never fully replace properly done animation by artists who have actually put in the effort of mastering the required skills. It will always be a tool that steals from these artists to generate something that gets close but isn't quite there, and has been for years now... At least this particular craft seems safe from being taken over. It will just end up being another style/preference of animation; to untrained eyes it looks amazing. : )
I prefer traditional 3D animation; it has evolved long enough to provide cohesive animation from one frame to the next through simulation. I mean... the point of this is reinventing on top of already complete results. Feels redundant. Curious experiment though.
Love this, and would love to see a tutorial on how you input the video and batch-rendered the whole thing to match the style you created.
It's getting wild.
Oh wow! It's very impressive how SD continues to develop!
BIG FAT FANX for that video!
where's the ethernaldarkmix_goldenmax model?
Very good tutorial. Thanks. More tutorials on A1111 and video/animation are most welcome. My only slight criticism is that some of it felt a bit rushed to me. A little more, and slower, explanation might help in parts. I will check back on your channel though, as it's very helpful. Keep up the great work!
Hoo boy... when's the first AInime coming out?
We're getting there; the consistency of the new Stable Video model is way better than any competition's.
Nice! Can you make a tutorial on using Cascadeur with Stable Diffusion?
Very nice, but how can I get to this page on the web, whether on computer or phone? I am new to this.
Prediction: as convoluted as this process seems now, within the next 60-90 days Stable Diffusion will have text-description-to-animation as regular models, no different from image LLM models today.
I've never published anything, but I got some decent temporal stability, at a lower resolution, with ControlNet and CharTurner + inpainting style. Especially for your helmet scene, with all of the various Blender cutouts... You generate your character's face; then the second frame would have, on the left, the whole previously generated frame and, on the right, the frame you need to generate now. Using inpainting masks I focused on that right side, using the previous frame, or a master frame, as the left-side control. Sometimes using ControlNet, sometimes without, but CharTurner worked a treat.
Great stuff! Thanks for sharing.
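A minimal sketch of the side-by-side trick described in the comment above, assuming Pillow; the file names and the 512x512 frame size are hypothetical, and the actual inpainting still happens in the Stable Diffusion UI using the generated canvas and mask.

```python
# Hypothetical sketch: build the left/right canvas and an inpainting mask
# so only the right half (the new frame) gets regenerated.
from PIL import Image

def make_inpaint_pair(prev_path, new_path, size=(512, 512)):
    prev = Image.open(prev_path).resize(size)   # already-stylized reference
    new = Image.open(new_path).resize(size)     # frame to be restyled

    canvas = Image.new("RGB", (size[0] * 2, size[1]))
    canvas.paste(prev, (0, 0))                  # left: previous/master frame
    canvas.paste(new, (size[0], 0))             # right: frame to generate now

    mask = Image.new("L", canvas.size, 0)       # black = keep untouched
    mask.paste(255, (size[0], 0, size[0] * 2, size[1]))  # white = inpaint
    return canvas, mask

canvas, mask = make_inpaint_pair("frame_0001.png", "frame_0002.png")
canvas.save("inpaint_input.png")
mask.save("inpaint_mask.png")
```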
Sooooo close to what I need
How can I find the link to the EthernalDark safetensors?