Is Open Webui The Ultimate Ollama Frontend Choice?

Matt Williams

6 months ago

100,852 views

Comments:

@Megamannen
@Megamannen - 19.10.2024 22:25

WEBUI_AUTH=false

@Rohambili
@Rohambili - 16.10.2024 13:05

Look into the Open WebUI Models page... Those can help you customize the LLMs further, creating more accurate answers while you use them...

@sridhartn83
@sridhartn83 - 14.10.2024 14:31

I am getting "WebUI could not connect to Ollama". Tried almost everything: deleted the Docker image and container, reinstalled, but still the same error. Even tried a VM on a different distro, still the same error.
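
A common cause of this error when Open WebUI runs in Docker but Ollama runs on the host is that the container cannot see the host's localhost. A minimal sketch of the usual checks and workaround, assuming a default Ollama install on port 11434 and a container named open-webui (adapt the flags to your setup):

    # from the host: confirm Ollama itself is up and has models
    curl http://localhost:11434/api/tags

    # recreate the container so it can reach Ollama on the host
    docker rm -f open-webui
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
      -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:main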

@HUEHUEUHEPony
@HUEHUEUHEPony - 09.10.2024 07:00

no code interpreter?

@AFiB1999
@AFiB1999 - 05.10.2024 01:47

Great content. I wish updating Open WebUI were easier, ideally with an update option directly in the web interface.
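
For what it's worth, one update path described in the Open WebUI docs is to pull the new image and recreate the container, or let Watchtower do it in one shot; a minimal sketch, assuming the container is named open-webui and its data lives in the open-webui volume (so chats and settings survive the update):

    # one-off update via Watchtower
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --run-once open-webui

    # or manually: pull and recreate (re-add whatever flags you originally used)
    docker pull ghcr.io/open-webui/open-webui:main
    docker rm -f open-webui
    docker run -d -p 3000:8080 -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:main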

@amventures1
@amventures1 - 02.10.2024 00:56

How can I add speech-to-text capability to my model while using React? Can I copy the web GUI?

@amventures1
@amventures1 - 02.10.2024 00:55

Can it do LoRA?

@MegaEyeTV
@MegaEyeTV - 29.09.2024 20:23

I watched your video about Msty. I installed it and never looked back.

@ZeeQueIT
@ZeeQueIT - 29.09.2024 13:06

Msty is better, I think 🤔

@wontmk
@wontmk - 28.09.2024 18:01

MY EYES!!!!!!! Dear god, dark mode please! .....ouch
┻━┻ ︵ \( °□° )/ ︵ ┻━┻

@cwykidd
@cwykidd - 27.09.2024 10:31

I can appreciate your sense of humor. I just wish I were smart enough to get your goat; this is a case where it doesn't matter how early I get up in the morning. Your dedication and willingness to teach is above board. The fact that you have an intuitive ability to do it as well as you do is beyond irritating. You've made me require a different excuse for why I'm so damn slow at grasping this stuff. It's not like I wasn't there when my buddy upgraded to the Apple II, then the Mac; there were no apps or software, so if you wanted your machine to do something you had to tell it. Wow, I'm not quitting. Ah, I've got it: even though I'm Gen X, I'm on the cusp of being a boomer.

@ricardofranco4114
@ricardofranco4114 - 24.09.2024 07:43

Can Llama AI tell me how to cheat in video games? 'Cause Google AI won't.
Also, can Llama AI draw photos?

@swarupdas8043
@swarupdas8043 - 24.09.2024 01:38

As of now there seems to be no way to use your own vector DB or define your own embedding process when creating a document.
How do I work with AzureChatOpenAI?
I have my own functions which use a custom vector DB and generate answers. How do I use those in Open WebUI?
Can I use Azure AD instead of RBAC?

@pepperoni-pizza2457
@pepperoni-pizza2457 - 11.09.2024 19:55

I tried to install it but it's so much of a hassle. Its GitHub page says "Effortless setup", but I don't see how having to install Kubernetes or Docker just to install this UI is effortless. I just want a Windows installer; that I'd call effortless. But because I use the AI for text prompts only, I prefer a tool called cmder, which is a better version of cmd, and then I export all the output into a txt or any document I want.

@AmanKumar-r5r2b
@AmanKumar-r5r2b - 03.09.2024 08:06

Is there a way I can add parsing in Open WebUI? I want to parse the documents locally before sending them to local Ollama models.

@TheRealTannerThoman
@TheRealTannerThoman - 22.08.2024 09:33

Excellent review.
Your voice and mannerisms were made for this.

@robwin0072
@robwin0072 - 15.08.2024 19:07

Good day,

Matt, hopefully, this is my last question about the Private GPT installation. My laptop has arrived.

I have installed an M.2 2T primary drive and a secondary 2T SSD.

Q: After installing Ollama, Docker, and WebUI, can the models be stored (directed) to the secondary SSD to preserve space on the primary M.2 system SSD?

If so, when do I pick where to store the models during their installation?
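
Ollama reads its model directory from the OLLAMA_MODELS environment variable, so the models can be redirected to the secondary SSD at any time rather than at install time; a minimal sketch with example paths (the drive letter and paths are assumptions, adjust to your layout), restarting Ollama afterwards:

    # Windows (cmd or PowerShell): persist the setting for future sessions
    setx OLLAMA_MODELS "D:\ollama\models"

    # Linux/macOS equivalent for the current shell
    export OLLAMA_MODELS=/mnt/secondary/ollama/models

    # subsequent pulls are stored on the secondary drive
    ollama pull llama3.1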

@lakergreat1
@lakergreat1 - 07.08.2024 21:00

Would love to see a detailed follow-up review; love the level of detail in this one.

@andrewzhao7769
@andrewzhao7769 - 06.08.2024 21:05

Thank you very much, this is super helpful

@ukaszkonieczny8909
@ukaszkonieczny8909 - 05.08.2024 22:24

Hi :) Thanks for the nice review. Like your voice :) Btw, there is no need to create an account: during container creation it is possible to add the environment variable -e WEBUI_AUTH=False (it is in the docs).
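
For anyone looking for the full invocation, a minimal sketch of starting the container with authentication disabled, assuming a single-user local setup (as the docs note, this only works on a fresh install with no existing accounts):

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e WEBUI_AUTH=False \
      -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:main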

@loco-herzog7047
@loco-herzog7047 - 05.08.2024 01:08

I can't connect my downloaded models from Ollama; they don't show up in the WebUI.

@docbrian2573
@docbrian2573 - 02.08.2024 08:45

Very clear presentation; thank you!

@CrazyTechy
@CrazyTechy - 01.08.2024 18:30

Matt, I just installed Ollama and started using llama3.1 via the Windows cmd prompt. Now I need to install the WebUI, and I followed you until you mentioned Docker; you lost me after that. I need the procedure for installing the WebUI, and I assume I need Docker. You go through lots of details that seem important, but you don't follow through for me. What I'm saying is that I need more direct instructions for getting the WebUI to work without using the cmd prompt window. Thanks for your informative series. Howard from Detroit.
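
For a Windows machine that already has Ollama running, the whole procedure is essentially: install Docker Desktop, run one command, then use the browser instead of the cmd window. A minimal sketch based on the project's README (port 3000 is arbitrary):

    # run once from PowerShell or cmd after Docker Desktop is installed
    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main

    # then open http://localhost:3000 and create the first (admin) account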

@AdrienSales
@AdrienSales - 27.07.2024 09:00

Hi Matt, as always, your demos just make things so clear. Would you plan a demo about "custom tools" integrations... and adding a custom one?

@LuisYax
@LuisYax - 27.07.2024 04:21

Have you looked at AnythingLLM yet? That's another UI tool that has a lot more functionality than Open WebUI. I'm still using Open WebUI, but a lot less.

@RomPereira
@RomPereira - 24.07.2024 18:28

@technovangelist thank you for your video. You mentioned a chart; do you mind sharing it?

@adrianhermozabayona9888
@adrianhermozabayona9888 - 22.07.2024 19:13

The new GPT4All seems promising, but I don't like the fact that it doesn't have a way to create models or customize them further. In addition, it has some issues with referencing. There is also one more tool, LM Studio.

@DihelsonMendonca
@DihelsonMendonca - 22.07.2024 16:39

I love Open WebUI. I can download a GGUF model from Hugging Face and convert it directly into Ollama format in minutes using the GUI. And the TTS is fantastic: hands-free, I can talk and listen. I even installed new voices. And I can do web search, RAG, many features indeed! ❤❤❤

@bic4
@bic4 - 20.07.2024 16:16

Open WebUI has come a long way in the last 2 months.

@AbhinavKumar-tx5er
@AbhinavKumar-tx5er - 17.07.2024 19:20

A very good explanation 🙂 But how would you integrate this tool with your own code, wiring the Ollama API into your web API? If so, how?
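
If the goal is to drive the same models from your own code, the Ollama REST API (the same backend Open WebUI talks to) is the simplest entry point; a minimal sketch, assuming Ollama is on its default port 11434 and llama3.1 has been pulled:

    # single non-streaming completion from the Ollama API
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.1",
      "prompt": "Summarize what Open WebUI does in one sentence.",
      "stream": false
    }'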

@MilesBellas
@MilesBellas - 16.07.2024 17:09

What is the best way to integrate with ComfyUI and Stable Diffusion?
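
Open WebUI can point its image-generation feature at an AUTOMATIC1111 or ComfyUI backend (via Admin Settings > Images, or environment variables). The variable names below are taken from the project's environment-variable docs as best remembered, so treat them as assumptions and verify against the current docs; the URL assumes ComfyUI running on the host at its default port 8188:

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e ENABLE_IMAGE_GENERATION=True \
      -e IMAGE_GENERATION_ENGINE=comfyui \
      -e COMFYUI_BASE_URL=http://host.docker.internal:8188 \
      -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:main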

@x7A9cF2k
@x7A9cF2k - 12.07.2024 10:54

Hi Matt,

How much RAM do you have in your machine?
I have 16 GB of RAM; I could only download a 3 GB model with Docker.

@tohur
@tohur - 10.07.2024 08:37

If you don't want to use Docker, just use Podman. It's the same command, just podman instead of docker.
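
A minimal sketch of the same container under Podman (the CLI is Docker-compatible, so only the binary name changes; rootless setups may need the host-gateway address adjusted):

    podman run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:main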

@iresolvers
@iresolvers - 07.07.2024 03:04

What a pain to get this working with Docker Desktop and GPU support.

@blee6782
@blee6782 - 05.07.2024 04:12

I installed it and it seems useful. There's a call feature now, but I haven't gotten it to work yet. Might be a killer feature when I'm using my own LLM from a mobile device. I use WireGuard to connect to my home instance when I'm away.

@DihelsonMendonca
@DihelsonMendonca - 05.07.2024 00:17

I can't use light mode anymore. I have a terrible illness in my eyes, in the retinas, and I can barely read anything on a white background; it simply hurts my eyes. I can still read white on dark, but I need lots of contrast. 😮

@robwin0072
@robwin0072 - 30.06.2024 14:40

I Liked and Subscribed.

Hello Matt, I was one of the first group of STS Space Shuttle programmers 40 years ago, while still in my early days of college. It's great to see how the brains of programmers from long ago and those of today use the same synapse pathways.

I have been with my hidratespark pro 32oz for three weeks now - love it. I plan to buy a small one (16oz?) to fit into my vehicle.

2. Which do you recommend, Anaconda or Docker?

3. And what are we to do with the Modelfiles section?

4. What controls in Open WebUI compare to OpenAI's Custom Instructions?

5. The '/' features appear pretty helpful; I'll have to rewatch your explanation.

6. Where can I find user manual instructions for all of the Open WebUI features and how-tos?

Thank you for the video.
Happy Hydration.

@lunarrevel8754
@lunarrevel8754 - 29.06.2024 03:33

Great video, thanks!

@greath2325
@greath2325 - 27.06.2024 19:03

First met you today. Subscribed now.

@bazwa6a
@bazwa6a - 24.06.2024 19:20

Your content is very high quality... thanks Matt.

@utuberay007
@utuberay007 - 23.06.2024 04:33

How do I connect Azure OpenAI embeddings?

@mahakleung6992
@mahakleung6992 - 23.06.2024 03:56

I like Oobabooga for chat and AnythingLLM + Ollama for RAG. But thank you. It was interesting, and you present very well.

@DiegoCrescenti70
@DiegoCrescenti70 - 22.06.2024 12:23

Thanks for the video. A question about the Ollama/Open WebUI/Docker combo: I have this configuration and all is OK. Now I have a goal to reach: I want to specialize a pre-trained LLM by training it on a large base of data about coding in a proprietary language that isn't popular; only about two hundred programmers use it.
My questions are:
- Which generic and lightweight LLM can I use?
- I use some Python scripts to train an LLM (in my case testing with Phi3:Mini), and I ran into a problem: when I try to load the model, something goes wrong. Python says it can't find the path of the model, usually ~/.ollama/models/…
- I noticed the models are stored as SHA256-named files! Perhaps that is the problem!

Can you help me do this training?

Can you give me some documentation or tutorials?

Thanks in advance. Have a good day.

Sorry for my bad English. I'm an Italian developer.
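
One note on the path problem above: the files under ~/.ollama/models are content-addressed blobs (hence the SHA256 names), not checkpoints a training script can load directly. The usual flow is to fine-tune outside Ollama (for example on the Hugging Face copy of the base model), export the result to GGUF, and then import it with a Modelfile; a minimal sketch of that import step only, with hypothetical file and model names:

    # package a fine-tuned GGUF export as an Ollama model
    cat > Modelfile <<'EOF'
    FROM ./phi3-mini-proprietary.gguf
    SYSTEM "You are an assistant for our in-house programming language."
    EOF

    ollama create phi3-proprietary -f Modelfile
    ollama run phi3-proprietary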

@LabiaLicker
@LabiaLicker - 21.06.2024 06:05

Looks excellent for self-hosting an LLM for friends and family to use, instead of continuing to feed user prompts to the beast (OpenAI).

@JavArButt
@JavArButt - 09.06.2024 02:43

dark mode should die? oh nooo, a terrible thing to say

@PMProut
@PMProut - 08.06.2024 00:49

I got addicted to Ollama last year and got to play around with Open WebUI when it was still called Ollama WebUI.
The name change messed up my Docker installs, not gonna lie.
But then we decided to try it as a corporate AI companion; since it was a testing phase, we didn't scale our cloud very high, so it was pretty slow.
On my machine, though, I wanted to try every bit of functionality, which led me to install and learn ComfyUI, and while the image generation options in Open WebUI are limited whatever backend you use, it's still usable.

@rayfellers
@rayfellers - 04.06.2024 14:04

I'm in agreement with Matt about dark mode. Hard to read. That's why I use Discord as little as possible.

@tardigr8
@tardigr8 - 03.06.2024 14:20

I keep getting this after signup: "Account Activation Pending. Contact Admin for WebUI Access. Your account status is currently pending activation. To access the WebUI, please reach out to the administrator. Admins can manage user statuses from the Admin Panel..."
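
That is Open WebUI's default behavior: the first account created becomes the admin, and later signups stay pending until an admin approves them from the Admin Panel. For a private instance where new signups should be active immediately, the docs describe a default-role setting; a hedged sketch, assuming the variable is still named DEFAULT_USER_ROLE:

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -e DEFAULT_USER_ROLE=user \
      -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:main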
