Comments:
Did you do this fine-tuning on a CPU or a GPU? Can you provide details? Thanks
How do you control the % of params that are being trained? Where are we specifying this? Also, can you please tell me how to choose r? What are these r values: 2, 4, 8, etc.?
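A rough back-of-the-envelope sketch (not code from the video) of why the rank r controls the trainable-parameter share in LoRA: each frozen weight matrix W of shape (d_out, d_in) gets two small trainable factors, B (d_out × r) and A (r × d_in), so the trainable count grows linearly with r. The 768×768 layer size below is just an illustrative example (BERT-base-like), not a value from the video.

```python
# Why LoRA trains so few parameters: the frozen matrix W has
# d_out * d_in params, while the trainable LoRA factors B and A
# together have only r * (d_out + d_in). The rank r (commonly
# 2, 4, 8, 16) trades adapter capacity against adapter size.

def lora_trainable_fraction(d_out: int, d_in: int, r: int) -> float:
    """Fraction of this layer's parameters that are trainable when
    W is frozen and only the LoRA factors B and A are updated."""
    frozen = d_out * d_in
    trainable = r * (d_out + d_in)
    return trainable / (frozen + trainable)

# Example: a 768x768 projection with r=8 -> about 2% trainable
frac = lora_trainable_fraction(768, 768, 8)
print(f"trainable share: {frac:.2%}")
```

In practice, libraries such as Hugging Face's `peft` expose r via `LoraConfig(r=...)`, and `model.print_trainable_parameters()` reports the resulting percentage, so you can see this number directly rather than computing it by hand.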
Even though this was high-level instruction, it was perfect. I can continue from here. Thanks Shahin jan!
this channel is going to hit 6 figure subscribers at this rate
Is there a way that DistilBERT or any other LLM can be trained for QA using a dataset that has only a text field, without any labels?
I'm trying to train the LLM for QA, but my dataset has only a text field, without any labels or questions and answers.
Excellent walkthrough
Can I use any open-source LLM to train on my healthcare dataset, for example, or should the LLM be one that was pre-trained on a healthcare dataset of my interest?
What is a large language model in the first place? Are BERT and DeBERTa also considered LLMs? When should we consider a model an LLM? I have a lot of confusion around this, could you clarify? Are only generative models considered LLMs, and what about all the stuff behind LLM training? @ShawhinTalebi
Great video Shaw! It was a good balance between details and concepts. Very unusual to see this so well done. Thank you.
Didn't even watch this, it's already irrelevant with the new version(s) about to come out.
Excellent... Thank you for sharing
Thanks Shaw, very helpful.
Sound (gain) is a bit low, but great vid bro!
Such a great video! Wondering how self-supervised fine-tuning works. Is there any video available on that?
Nicely done!
It all depends on the selection of the much smaller r parameter, like in PCA!
Such a nice video, thank you soooo much!!❤
You are the man! No BS, just good useful info
Honestly the most straightforward explanation I've ever watched. Super excellent work Shaw. Thank you. It's so rare to find good communicators like you!
thanks!
thanks
A very clear and straightforward video explaining fine-tuning.
Nice video. I need your help to clarify a doubt. When we do PEFT-based fine-tuning, the final fine-tuned model size (in KBs/GBs) increases by the additional parameters (base model size + additional parameter size), so the final fine-tuned model ends up larger than the base model. Deploying the final fine-tuned model on edge devices is then more difficult because of the limited edge-device resources. Is there any way adapters / LoRA can help reduce the final fine-tuned model's memory size so that we can easily deploy it on edge devices? Your insights would be helpful. Currently I am working on deploying a vision foundation model on an edge device, where I am finding it difficult because of the model's memory size and inference speed.
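One answer to the size concern above: because LoRA's update is just a low-rank matrix product, the adapter can be merged back into the base weights before deployment, so the shipped model is exactly the base-model size. A minimal NumPy sketch (illustrative, not the video's code; the dimensions and the alpha scaling value are assumptions):

```python
import numpy as np

# Merging a LoRA adapter into the base weights before deployment.
# After merging, the model has exactly as many parameters as the
# base model, so the adapter adds no memory cost at inference time.

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4
alpha = 8  # LoRA scaling hyperparameter (scale = alpha / r)

W = rng.standard_normal((d_out, d_in))  # frozen base weight
B = rng.standard_normal((d_out, r))     # trained LoRA factor
A = rng.standard_normal((r, d_in))      # trained LoRA factor

# Fold the low-rank update into the base matrix: W' = W + (alpha/r) * B @ A
W_merged = W + (alpha / r) * (B @ A)

# Same outputs whether the adapter is kept separate or merged in
x = rng.standard_normal(d_in)
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
print(np.allclose(y_adapter, y_merged))  # True
```

With Hugging Face's `peft`, this merge is done by `model.merge_and_unload()`. Note that merging removes the extra parameters but doesn't shrink the model *below* base size; for that you'd combine it with quantization or a smaller backbone.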
Hello! I'm trying to use a similar approach but for a different task. Given a paragraph, I want my model to be able to generate a set of tags associated with it for a specific use case. Not quite sure how the Auto Model would differ here and would love your thoughts on this!
Best video on LLM fine-tuning. Very concise and informative.
thank you so much
What to do when the content is web-based? The QnA chatbot has to answer questions based on the content present on a given website.
We are using Llama 2 7B and it is not giving accurate answers to the questions asked; the answers have to come from the website, but sometimes it gives additional information that is not part of the website.
How would we fine-tune and train? Should we use RAG, or what are the different APIs that can be called or trained?
It would be helpful if you could share some suggestions or links where I can find the information.
Very clear, thanks!
Can we get the weights of a Llama model?
This was one of the best videos on this topic, really nice man, keep going.
Wow dude, just you wait, this channel is gonna go viral! You explain everything so clearly, wish you led the courses at my university.
This is incredible, thank you for the clear tutorial. Please subscribe to this channel. One question: can we apply LoRA to fine-tune models used in image classification or other computer vision problems? Links to read or a short tutorial would be helpful.
amazing video, very well explained
Understood. The code was very helpful; it was not constantly scrolling and panning. But please display the full code and mention the Python version and system configuration, including folders, etc.
Hi, nice tutorial. I have a question: is it possible to have more than one output in a supervised setup? For example: {"input": "ddddddd", "output1": ["dddd", "eeee", "ffffff"], "output2": ["xxxx", "zzzzz"]}, etc. Thx
Shaw, terrific job explaining very complicated ideas in an approachable way! One question: are there downsides to combining some of the approaches you mentioned, say, prompt engineering + fine-tuning + RAG, to optimize output? How would that compare to using one of the larger OOTB LLMs with hundreds of billions of params?
Hi Shaw, amazing video - very nicely explained! Would be great if you could also do a video (with code examples) on Retrieval Augmented Generation as an alternative to fine-tuning :)
BERT is not a large language model
Thanks
I wonder why there is no GUI for this task?
Hey dude, nice video. I think I'll try to fine-tune Llama to detect phrases and subsequently classify tweets, but with multiclass classification. Hope it works; I guess I'll convert the CSV to the prompt format you mentioned, like Alpaca did, and see if it works.
Great video, Shawhin!
Is there any limit on GPU memory? I am just a student with only a 3050 GPU with only 4 GB of memory.
Thank you sooo much❤
Are you sure this is an LLM?
Amazing video Shawhin. It was quite easy to follow and everything was clearly explained. Thank you so much!
Random question: how do you edit your audio clips together to make them so seamless? Because idk where you join them. And great video by the way 👍
Can you recommend any course where I can learn to build an LLM from scratch and fine-tune it in depth?
nice video, thanks😁
Very good video and explanation!