Comments:
There's nothing like this but for A1111? I'm afraid that if I install ComfyUI I'll end up with the whole SD + ControlNet stack installed on my PC twice for no reason, when someone less ignorant might know how to point the two different frontends to the same models.
Please answer: how do I SAVE RESULTS? I don't see any options for saving. Where is it?
Why is it so slow when I generate it? It doesn’t update in real time like you demonstrated.
This is crazy! Can you also apply trained data, like what Corridor Crew did to apply a specific art style?
Hi there, and thank you for this amazing video. It seems impossible to use the AI plugin. I went through all the steps and the lengthy download, but what I get is an error in the server configuration which reads "Invalid location: directory is not empty, but no previous installation was found". In the "Core Components" I see only Python as installed, but not the other three. Is there something I missed?
😵💫 Awesome
This is great!! Thank you Nerdy Rodent :)) Do you know if the images are saved anywhere, the in-progress images it generates?
Fantastic integration. Is it possible to have character and environment consistency when you move a skeleton node?
Maaaaaaaaaaaaaaaan, you're the best. Keep doing the nerd shit you do, I'm your fan!
Thank you so much.
Krita says the IP-Adapter plugin is missing even though it's installed... does anyone else have this problem, or know how to fix it?
How do I launch Comfy? Which file should I run in the ai_diffusion folder?
This is awesome; I stumbled across your channel as we've started to explore ComfyUI over A1111. Krita, why didn't I know about this earlier? I'm desperate to move away from Photoshop, and GIMP wasn't quite cutting it. If nothing else, prompting SD in this way really helps you get a feel for the way the model works. I absolutely love watching the live picture update as I sketch, but also as I update my prompt, add commas, etc. This is great, thanks for highlighting it.
You never fail to amaze!
I can't get the generated image to show; everything is working, but there is no generated image?
I couldn't get this to run in Google Colab. WAY too complicated; Colab isn't supported, and I couldn't figure out how to install a third of the dependencies.
Krita wouldn't connect to my Tesla K80 GPU.
Still... I'd like to try this. So I just broke down and bought an RTX 3060 on eBay for around $350 after taxes and shipping.
I thought I was going to be able to squeak by without having to sink a ton of money into this, but even the K80 ended up costing around $500, since I needed an upgraded motherboard, CPU, case and power supply, not to mention a cooling solution. Most expensive $60 I've ever spent.
And so far, I'm not at all convinced that any of this will be able to do what I want: take my pencils and ink them like I would. The AI studio assistant has so far been an expensive pipe dream and a HUGE time-waster.
Still... on the upside: when AI does eventually work as promised, I'll be ahead of the game. For about thirty seconds. When the smartphone-app-using masses are given easy access, art skills will be about as valuable as being good at lawn darts.
Hi all. I downloaded the plugin and put it in the resources directory as the video said, BUT it doesn't show up in the Python plugin manager, so I can't proceed. Did anyone have the same problem, and how did you fix it? Thanks. And something for Nerdy Rodent: I subscribed to the channel a long time ago, but today YouTube showed that I wasn't subscribed... just a heads up.
I have it running on 4GB VRAM (Linux) if you don't mind waiting 20 seconds instead of 2 seconds. People are just too damn impatient these days. 😁 It's much better doing inpainting in Krita rather than in ComfyUI's mask editor.
For those who have trouble adding an OpenPose character to a layer: make sure the layer is a vector layer.
Thank you for the video's thorough guidance. I was able to get the program running and start drawing; it's quite fun.
This is really good. I've been using the Krita extension in Automatic1111 for 3 months and my workflow has improved, but this one on ComfyUI is the next level. Do you know if I could use this extension without interfering with Automatic1111?
The best part is I can just bash together random images, just put in some images I find on Google (or generate them) and comp together a scene, and the live painting does the magic of making it all look seamless!
You are a born voice actor, just phenomenal. Can you clone your amazing voice for all us poor speakers to have fun with?
Wow... this is a really amazing AI feature. To be able to upscale the art from literally line drawings to high resolution art in real time... it's really amazing. Thanks for showing this.
Thank you, my Nerdy Master, for your time and for being my guide!
Thanks Mr Rodent. Finally I can use this properly (I already had it installed), but I didn't know how to add the control layers etc.
Live diffusion ... where were we again a year ago? Insane progress. xD
When I try to connect to my ComfyUI installation I get an error about missing ControlNet preprocessors, but I have them. In the client.log file I don't see any useful information... any idea? :(
Thanks Nerdy!! Always a pleasure to watch!! 😊
👍👍👍
I discovered that if you apply a noise filter layer onto the drawing, the results will be fantastic! Using dry textured brushes also gives much better results. Just to add on, you can create a new image layer, fill it with white, then add a noise filter. Use the Opacity adjustment to control the amount of noise on the sketch and watch the wild results appear. Changing the models/checkpoint types also gives a wide variety of results.
I've already been playing with this for the past week, and it's a game changer for storyboard artists, concept designers and film pre-production! This will make existing workflows way, way more productive. Even filmmakers can now flesh out their story ideas quickly.
I would like to give this setup to a kid and see them go crazy.
This looks super fun, can't wait to set it up. Thanks!
For anyone without the hardware power to run this locally, there's a Hugging Face Space with this real-time tech.
Okay, this is... alright, but... can we change the prompt workflow? I'd like to add my character as an IP-Adapter and use a different model, etc. This feels difficult to use well for drawing from scratch?
LCM has never been my "friend". What are your settings in Krita's generator to get the highest speed and good quality here? I keep getting garbage.
Is it working with an AMD video card? Thanks.
ОтветитьThis is brilliant!! :D
I was sad before because a couple of Krita plugins for Stable Diffusion I tried in the past were discontinued... And now thanks to you I learn this one is working and even updated to use the LCM-Lora! I'm so happy! You are the best, Nerdy Rodent!! Thank you very much!! :D
I can get full image generation working, and outline-selection generation; they work fine. But whenever I try the real-time drawing, it just returns the stick figure I drew in the first place rather than a rendered image. I'm confused.
Very cool Krita + ComfyUI setup. I had been using a version by someone else, but it doesn't have this cool real-time stuff or built-in IP-Adapters. Gonna have to check this one out. Thanks a bunch, Nerdy!
Nice explanation, thank you! Do you know if it also works with the ComfyUI extension installed in Automatic1111, or do we need a separate ComfyUI installation? It would be great to know, so as not to fill up the HDD too much!
I use Krita for all my professional work. This is great, thanks for the video. Does anyone know where the ControlNet models go? They work in Comfy, but the plugin says it can't find the ControlNet models.
Nice
Are you going to cover Stable Video Diffusion?
Krita is an amazing app by itself; with this, it's out of this world, darn!
wow! that's tru-elly amazing'ing! an Owel ! lovely! I bet U could use nVidalia with this one!
& U'v even sorted the req!! what a pal!
Does this also work with Pascal-era GPUs?
Do you know if this could be done just in Comfy? That way I could set up a camera and run the workflow through TouchDesigner, which would mean I could draw things on paper instead of on a computer.
Do I need a 15 GB GPU, or does it work with any paid server?
Thanks Nerdy; plugin installed! 👍