FREE Local Image Gen on Apple Silicon | FAST!

Alex Ziskind

3 weeks ago

37,267 views


Comments:

@jakeave - 22.05.2024 19:11

Thank you!!! The free models look like DALL-E from last year, so maybe next year the free models will look like DALL-E this year. I've spent way too much free time messing with Ollama and Open WebUI because of your last video. Could you look into the RAG and web search features? I've never gotten the web stuff to work; RAG documents do seem to work, but I haven't been very successful with them. There's not a lot of content on it, but it seems like a perfect way to feed in my existing repo or repos so that the model can pick up on conventions and context.

@derter2007 - 22.05.2024 19:12

boom!

@everlasts - 22.05.2024 19:24

Great video!! How about local text-to-speech for the local WebUI too? Combine it with image recognition and we'll have a local version of ChatGPT-4o :) Thanks!

@DD-fl3qk - 22.05.2024 19:31

Use Forge instead, it's much faster than A1111

@aeonlancer - 22.05.2024 19:38

The last boom! was not enough

@originalmagneto - 22.05.2024 19:42

can’t wait for custom MLX models to show up 😉

@alliwene - 22.05.2024 19:42

I'm running Open WebUI in a Docker container and it cannot access localhost. How can I go about this?
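A likely fix, sketched under the assumption that Ollama runs on the Mac host and Open WebUI runs under Docker Desktop: inside a container, `localhost` refers to the container itself, not the Mac, so the container has to reach the host through Docker Desktop's `host.docker.internal` alias instead:

```shell
# Point Open WebUI at Ollama on the host (11434 is Ollama's default port).
# "localhost" inside the container would be the container itself, so use
# host.docker.internal, which Docker Desktop maps to the Mac host.
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

The `-v` volume keeps chats and settings across container restarts; the UI is then reachable at http://localhost:3000 on the host.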

@mad_circuits - 22.05.2024 19:51

You meant a photo of a "Lama" not a "llama". 😂

@abduislam23 - 22.05.2024 19:54

Really nice
Can I run this from external storage?

@GetzAI - 22.05.2024 19:54

Thanks Alex.
What are your thoughts on M4 and how it will speed up inference when released to Macs?

@DonaldLivingston - 22.05.2024 19:54

This is some cool stuff. Keep it coming!

@MiltonSun - 22.05.2024 19:58

I have the source code of a product (bash scripts, Python, C++). Could you use Llama to read the source code and help troubleshoot problems in the log files?

@OoZe1911 - 22.05.2024 20:04

Thank you! Awesome tutorial! I think Fooocus may be easier to install and use (and it works on an Intel Mac with an AMD GPU)

@ArkZ3R0 - 22.05.2024 20:13

Thank you for the videos. Would this work on an Intel-based Mac?

@bloombird807 - 22.05.2024 20:23

This is awesome!! Thank you for sharing this 🤯

@sampillai2463 - 22.05.2024 20:33

Can we run this model on the M3 Pro base variant, Alex?

@Tigerex966 - 22.05.2024 20:40

Thanks Microsoft😊

@BeluCatBMG - 22.05.2024 21:37

I recommend DiffusionBee; it has a neat UI and you don't need to run all these scripts to install it. It's not that configurable, but if you need a download-and-run solution, it will be perfect

@Doogiej84 - 22.05.2024 21:55

Do a video with TTS please

@sweealamak628 - 22.05.2024 22:00

May 2024 has been a watershed month. For the first time in my life, I really felt I've fallen behind. M4, Gemini 1.5, ChatGPT4o, Copilot+PC with Snapdragon X Elite. So much tech that my M1 MBA 8GB RAM will fail to take advantage of and fail to compete with. All this including local LLMs that are too resource intensive to even try out. My next laptop could well be a Windows PC if Apple doesn't address the RAM situation in their base models.

@jake-ep9wq - 22.05.2024 22:33

Thank you so much for these videos. They’re perfect for me as a new Mac user with little knowledge of the terminal commands.

@marcelo_anselmo_levy - 22.05.2024 22:51

What about the Fooocus project?

@trewgas - 22.05.2024 23:04

Insane how underrated your channel is. Kinda like it that way tbh... But seriously, I love you Alex.

@lowkeygaming4716 - 22.05.2024 23:12

Now I'm curious. I'll try it on my M1 Air; hopefully it won't toast my machine 😂

@ontime8109 - 22.05.2024 23:16

I guess... it really whips the llama's ass

@Dominik-K - 23.05.2024 00:01

This is really useful. Thanks a bunch for the gotchas and tutorial

@andreaslassak2111 - 23.05.2024 03:02

It can also run on Linux :) and Windows 10/11... let's see how it runs on Win11 ARM :)

@natieklopper - 23.05.2024 03:32

BTW A1111 already creates a conda environment when running anyway

@bacult1 - 23.05.2024 04:53

Hi Alex. What do you think is good for an IT newbie: a MacBook Air M3 24GB 1TB or a MacBook Pro M3 18GB 1TB (15" and 14")? For coding, Parallels, all the Adobe programs, and maybe machine learning. Thanks, spasibo.

@Gome.o - 23.05.2024 07:10

Or you could just install something like Mochi Diffusion or Guernika?

@HunterHU - 23.05.2024 07:22

What about LoRA?

@noahleaman - 23.05.2024 07:22

An overview of Pinokio would make for a good video

@faysal1991 - 23.05.2024 08:11

amazing stuff

@tinkerman1790 - 23.05.2024 09:30

I really enjoy watching your videos. They're informative with a fun vibe. Thanks for your effort!

@luc122c - 23.05.2024 14:18

The pace of this video is brilliant. Quick but with all the relevant information.

@Ginto_O - 23.05.2024 14:58

Bro shows us some 1.5 models like it's 2022 💀

@kekincai - 23.05.2024 17:32

Great tutorial, thanks

@hithot2008 - 23.05.2024 20:22

I don't think Arnold Schwarzenegger is an animal.😅😅😅

@TheWWWyrm - 23.05.2024 21:55

I followed this, but after trying the UI provided by stable-diffusion-webui, I found it better. You can give it actual prompts instead of relying on Llama3 prompts, and it results in much better images. But for more casual use, I think going through Llama3 is better (you want a cat, you get a cat).

@suraleisme - 24.05.2024 00:37

Thanks for the videos! Can you do one about music/sample generation?

@RomPereira - 24.05.2024 02:37

I never told you, but I met an Austrian cousin of Mr. Schwarzenegger's. Really skinny guy, and short. I guess Mr. Schwarzenegger had good bones.

@hubby_medical5454 - 24.05.2024 05:49

I'm a former data scientist (doing the whole medical school thing now), and this channel gives me the data science developer fix I need sometimes. Thank you for your content. Us tech nerds love you more than we can comment. Now back to studying lol

@RichWithTech - 24.05.2024 08:20

Brilliant! Can you do a Windows one?

@Raj-kd5ly - 24.05.2024 11:06

I love watching your videos! They're fun and informative at the same time.

@carloseduardoalmeida6469 - 25.05.2024 06:55

Amazing content. Loving these tutorials!

@yorkan213swd6 - 25.05.2024 11:47

Why not the NPU?

@lambfbbd - 26.05.2024 16:42

Hi Alex, thank you, you teach us in such a fun way. I like it. But it would be better if you made it a Docker image, or taught us how to make one, because I don't want to create chaos with different Python environments.

@benjaminmuller5261 - 05.06.2024 17:36

Is there a way to use the image AIs without all the frontend overhead? :)
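One way to skip the web frontends entirely is to drive Stable Diffusion directly from Python with Hugging Face's diffusers library on the Mac's MPS backend. A minimal sketch, assuming `diffusers`, `transformers`, and `torch` are installed; the model ID, prompt, and filename are just examples (the weights download on first run):

```python
import torch
from diffusers import StableDiffusionPipeline

# Prefer Apple's Metal (MPS) backend on Apple Silicon, fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Example SD 1.5 checkpoint; any compatible model ID works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

# One image, no UI: prompt in, PIL image out.
image = pipe("a photo of a llama", num_inference_steps=25).images[0]
image.save("llama.png")
```

This is the whole "frontend": a script you can call from the terminal or wrap in your own tooling.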

@jasonreviews - 09.06.2024 09:26

I used Fooocus instead

@jahanormatov2642 - 16.06.2024 00:05

Awesome guide, thanks a lot Alex! I tried A1111 last month or so, but today I learned that ComfyUI also supports Apple Silicon. Turns out it's more optimised and much faster! It doesn't use as much RAM as A1111, and ComfyUI can even run models that were crashing in A1111 (on a GPU-poor 8GB base Mac). The setup and usage are slightly more advanced, but not by much, and a guide from you would be appreciated!
