Comments:
Thank you for this awesome course, I'm enjoying it!
Very nice! Thank you.
This is amazing, super clear, thank you!
Love this. I think we've all been doing just-in-time learning to run and keep up to date with what's happening every couple of weeks. Great to take it back to the foundations, Matt.
Removing models is the most annoying part because you have to name them exactly. I wish they made it easier to just select and delete via a GUI, or list the models and remove one by number.
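There's no built-in "remove by number" today, but a sketch of a workaround is below. It assumes the usual `ollama list` output (a header row, with the model name in the first column); the chosen number is just an example.

```shell
# Hypothetical workaround: pick a model by number instead of typing its exact name.
list=$(ollama list | tail -n +2)            # drop the header row
echo "$list" | nl -w2 -s'. '                # print a numbered menu
n=3                                         # the number you picked from the menu
name=$(echo "$list" | sed -n "${n}p" | awk '{print $1}')
ollama rm "$name"
```

The only Ollama-specific assumption is the column layout of `ollama list`; the rest is standard shell text processing.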
Ah... the ollama serve... LOL, I wasted a week until I realized it was a user issue on Linux. I felt so stupid having duplicate models and things... This is a really good video; anyone new to ollama should watch it. If I had watched this earlier, I wouldn't have wasted a week just to realize it was a simple user issue.
More cool stuff please!
I wouldn't recommend creating models with the legacy Q4_0 quant types; they're deprecated and worse quality than K quants (or IQ quants if you're running with CUDA).
Hi Matt, thank you for more amazing content.
I'm working with ollama and other tools available from the community to develop some solutions for my company.
I need some help from a professional consultant for this job.
Could you work with me, or maybe recommend someone who can help me do it?
I love the way you explain. Thanks
Excellent content, Matt! Congrats! Keep on going.
Anyone wanna swap code for tokens?
From what location do I run that download-a-Hugging-Face-model command? And where does it download to? The same location as the other models? Where is that?
I'm really enjoying this series. Thanks.
🙏🙏🙏 Please, how do I add the vision model MiniCPM-V 2.6 to ollama?
openbmb/MiniCPM-V-2_6-gguf
Thank you!
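A minimal sketch of Ollama's documented GGUF import path, for reference: point a Modelfile's `FROM` line at the downloaded file and run `ollama create`. The filename and model name below are placeholders, and note that vision models like MiniCPM-V may need extra components (e.g. a projector) that Ollama supports only for certain models.

```shell
# Sketch: import a downloaded GGUF into Ollama (filename and tag are placeholders).
cat > Modelfile <<'EOF'
FROM ./MiniCPM-V-2_6-Q4_K_M.gguf
EOF
ollama create minicpm-v2.6 -f Modelfile
ollama run minicpm-v2.6
```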
I love your videos! Your explanations are amazing, thank you!
Wonderful video, Matt. Thanks so much for sharing this.
Man, I love it. I already subscribed. Something I'd really love to know is how to store my local ollama models on an external hard drive on a Mac. As you know, Macs don't have much space, so I bought a special hard drive that runs at 40G/sec to hold models and other stuff, and I'd love to keep the models there rather than on my internal drive. Thanks for the great content and explanations.
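Ollama reads the `OLLAMA_MODELS` environment variable to decide where the model store lives, so one way to do this is to point it at the external drive before starting the server. A sketch, where "/Volumes/FastSSD" is a placeholder volume name:

```shell
# Keep the Ollama model store on an external drive (macOS volume name is a placeholder).
mkdir -p /Volumes/FastSSD/ollama-models
export OLLAMA_MODELS=/Volumes/FastSSD/ollama-models
ollama serve    # the server now reads and writes models under that path
```

If Ollama runs as the menu-bar app rather than from a terminal, the variable has to be set where that app can see it, which is a separate setup step.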
Please share the link to the video about reducing the model size for specific tasks, for example weather only; it wouldn't need the whole context for that.
I have a noob question. If anybody can upload a model to Ollama, is it possible for a malicious user to upload malware disguised as a model? And are there measures to prevent such a scenario?
Looking forward to your next video.
Exactly what I was looking for! THANK YOU!
Thanks, m :)
I'm trying to use ollama serve to integrate my app with ollama, but a lot of functionality isn't working. I can list, pull, and rm models under serve, but when I try to load a model into memory via the API, or just run a model in the terminal, it crashes; the log only says ollama crashed and restarted. For now I'm using ollama app.exe to start the full application to unblock development, but I can't find any documentation to help troubleshoot the issue. I would really appreciate a video just about the ollama serve command and how you'd use it.
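One way to narrow down a crash like this is to hit the server's documented generate endpoint directly and watch the serve log. A minimal sketch, assuming a running `ollama serve` on the default port 11434; "llama3" is a placeholder for a model that appears in `ollama list`:

```shell
# Minimal load/generate request against a running `ollama serve`.
payload='{"model":"llama3","prompt":"why is the sky blue?","stream":false}'
curl -s http://localhost:11434/api/generate -d "$payload"
```

If this request alone reproduces the crash, the problem is in model loading rather than in your app's integration code.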
For those using a Mac: type nano Modelfile in the command line to create the Modelfile.
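Once the file exists, here's a sketch of minimal contents using Ollama's Modelfile syntax; the base model, temperature, and system prompt are just example values, not anything specific to the video.

```shell
# Write a minimal Modelfile (base model and settings are examples), then build from it.
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.3
SYSTEM """You are a concise assistant."""
EOF
ollama create my-assistant -f Modelfile
```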
Thanks