Comments:
Thank you!!! The free models look like DALL-E from last year, so maybe next year the free models will look like DALL-E this year. I've spent way too much free time messing with Ollama and Open WebUI because of your last video. Could you look into the RAG and web search features? I've never gotten the web stuff to work; I feel like RAG documents do work, but I haven't been too successful messing with it. There's not a lot of content on it, but it seems like a perfect way to feed in my existing repo or repos so that the model can pick up on conventions and context.
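(For anyone else poking at this: Open WebUI's document feature is essentially retrieval-augmented generation, and the underlying idea is easy to prototype directly against Ollama's REST API. Below is a minimal sketch, assuming Ollama is running locally and you've pulled llama3 and nomic-embed-text; the file paths and the one-chunk-per-file indexing are simplifications for illustration.)

```python
# Minimal RAG sketch against a local Ollama server.
# Assumes `ollama pull llama3` and `ollama pull nomic-embed-text` were run first.
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def post(path, payload):
    req = urllib.request.Request(
        f"{OLLAMA}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return json.loads(urllib.request.urlopen(req).read())

def embed(text):
    # /api/embeddings returns {"embedding": [...]}
    return post("/api/embeddings",
                {"model": "nomic-embed-text", "prompt": text})["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Index a few repo files (hypothetical paths), one chunk per file for brevity.
chunks = [open(p).read() for p in ["utils.py", "conventions.md"]]
index = [(c, embed(c)) for c in chunks]

question = "What naming conventions does this repo use?"
q_vec = embed(question)
best = max(index, key=lambda item: cosine(q_vec, item[1]))[0]

answer = post("/api/generate", {
    "model": "llama3",
    "prompt": f"Context:\n{best}\n\nQuestion: {question}",
    "stream": False,
})
print(answer["response"])
```

A real setup would split files into overlapping chunks and keep the embeddings in a vector store, but the retrieve-then-prompt loop is the same shape.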
boom!
Great video!! How about local text-to-speech for the local WebUI too? Combine it with image recognition and we'll have a local ChatGPT-4o :) Thanks!
Use Forge instead, it's much faster than A1111
The last boom! was not enough
can't wait for custom MLX models to show up 😉
I'm running Open WebUI in a Docker container and it cannot access localhost. How can I get around that?
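(A note on this one: inside a Docker container, localhost is the container itself, not your Mac, so Open WebUI can't reach the Ollama server running on the host. The pattern below mirrors the command in Open WebUI's README, pointing the container at host.docker.internal; double-check the current docs for the exact flags.)

```
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```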
You meant a photo of a "Lama", not a "llama". 😂
Really nice
Can I run this from external storage?
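(On the external-storage question: Ollama reads the OLLAMA_MODELS environment variable to decide where model blobs live, so pointing it at an external drive should work; the volume path below is just an example.)

```
export OLLAMA_MODELS=/Volumes/ExternalSSD/ollama-models   # example path
ollama serve
```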
Thanks Alex.
What are your thoughts on M4 and how it will speed up inference when released to Macs?
This is some cool stuff. Keep it coming!
I have the source code of a product (Bash scripts, Python, C++). Could you use Llama to read the source code and help troubleshoot problems in the log files?
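(This is quite doable with a local model. A rough sketch against Ollama's /api/generate endpoint, assuming llama3 has been pulled; server.log is a placeholder path, and long logs need truncating or chunking to fit the context window.)

```python
# Ask a local llama3 to triage a log file via Ollama's REST API.
import json
import urllib.request

with open("server.log") as f:      # placeholder path
    log_tail = f.read()[-8000:]    # keep it within the context window

payload = {
    "model": "llama3",
    "prompt": (
        "You are a debugging assistant. Here is the tail of a log file:\n\n"
        f"{log_tail}\n\n"
        "List the most likely root causes of any errors you see."
    ),
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urllib.request.urlopen(req).read())["response"])
```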
Thank you! Awesome tutorial! I think that maybe Fooocus is easier to install and use (and it works on an Intel Mac with an AMD GPU)
Thank you for the videos! Would this work on an Intel-based Mac?
This is awesome!! Thank you for sharing this 🤯
Can we run this model on the M3 Pro base variant, Alex????
Thanks Microsoft 😊
I recommend DiffusionBee; it has a neat UI and you don't need to run all these scripts to install it. It's not that configurable, but if you need a download-and-run solution, it's perfect
Do a video on TTS, please
May 2024 has been a watershed month. For the first time in my life, I really felt I'd fallen behind. M4, Gemini 1.5, ChatGPT-4o, Copilot+ PCs with Snapdragon X Elite. So much tech that my M1 MBA with 8GB RAM will fail to take advantage of and fail to compete with. All this including local LLMs that are too resource-intensive to even try out. My next laptop could well be a Windows PC if Apple doesn't address the RAM situation in their base models.
Thank you so much for these videos. They're perfect for me as a new Mac user with little knowledge of the terminal commands.
What about the Fooocus project?
Insane how underrated your channel is. Kinda like it that way tbh... But seriously, I love you Alex.
Now I'm curious. I'll try it on my M1 Air; hopefully it won't toast my machine 😂
I guess... it really whips the llama's ass
This is really useful. Thanks a bunch for the gotchas and tutorial
It can also run on Linux :) and Windows 10/11... let's see how it runs on Win11 ARM :)
BTW A1111 already creates a conda environment when running anyway
Hi Alex. What do you think is good for an IT newb: a MacBook Air M3 24GB 1TB or a MacBook Pro M3 18GB 1TB (15" and 14")? (Coding, Parallels, all the Adobe programs, maybe machine learning.) Thanks, spasibo.
Or you could just install something like Mochi Diffusion or Guernika?
What about LoRA?
An overview of Pinokio would make for a good video
amazing stuff
I really enjoy watching your videos. Informative with a fun vibe. Thx for your effort!
The pace of this video is brilliant. Quick but with all the relevant information.
Bro shows us some 1.5 models like it's 2022 💀
Great tutorial, thanks
I don't think Arnold Schwarzenegger is an animal. 😅😅😅
I followed this, but after trying the UI provided by stable-diffusion-webui, I found it better. You can give it actual prompts instead of relying on Llama3 prompts, and it results in much better images. But for more casual use, I think going through Llama3 is better (you want a cat, you get a cat).
Thanks for the videos! Can you do one about music/sample generation?
I never told you, but I met an Austrian cousin of Mr. Schwarzenegger's. Really skinny guy, and short. I guess Mr. Schwarzenegger had good bones.
I'm a former data scientist (doing the whole medical school thing now), and this channel gives me the data science developer fix I need sometimes. Thank you for your content. Us tech nerds love you more than we can comment. Now back to studying lol
Brilliant! Can you do a Windows one?
I love watching your videos! It's so fun and informative at the same time.
Amazing content. Loving these tutorials!
Why not the NPU?
Hi Alex, thank you, you teach us in such a fun way. I like it. But it would be better if you made it a Docker image, or taught us how to make it a Docker image, because I don't want to create chaos with different Python environments.
Is there a way to use the image AIs without all the frontend overhead? :)
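(Yes: Hugging Face's diffusers library drives Stable Diffusion from a plain Python script, and it supports Apple Silicon through PyTorch's MPS backend. A minimal sketch below, with an example SD 1.5 checkpoint ID and prompt; on older PyTorch builds you may need float32 instead of float16 on MPS. Requires `pip install diffusers transformers accelerate torch`.)

```python
# Headless Stable Diffusion on Apple Silicon with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example SD 1.5 checkpoint; swap in any you have
    torch_dtype=torch.float16,
)
pipe = pipe.to("mps")                  # Apple Silicon GPU backend
pipe.enable_attention_slicing()        # lower peak memory on 8-16 GB Macs

image = pipe("a photo of a llama on a beach", num_inference_steps=30).images[0]
image.save("llama.png")
```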
I used Fooocus instead
Awesome guide, thanks a lot Alex! I tried using A1111 a month or so ago, but today I learned ComfyUI also supports Apple Silicon. Turns out it's more optimised and much faster! It doesn't use as much RAM as A1111, and ComfyUI can even run models that were crashing in A1111 (on a GPU-poor 8GB base Mac). The setup and usage are slightly more advanced, but not by much, and a guide from you would be appreciated!