Fine-tuning Tiny LLM on Your Data | Sentiment Analysis with TinyLlama and LoRA on a Single GPU

Venelin Valkov

15,303 views

Full text tutorial (requires MLExpert Pro): https://www.mlexpert.io/bootcamp/fine-tuning-tiny-llm-on-custom-dataset

Getting bad predictions from your Tiny LLM? Learn how to fine-tune a small LLM (e.g. Phi-2 or TinyLlama) and (possibly) improve its performance. You'll see how to set up the dataset, model, tokenizer, and LoRA adapter. We'll train the model (TinyLlama) on a single GPU with custom data and evaluate its predictions.
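
Here's a minimal sketch of the model, tokenizer, and LoRA adapter setup described above, using Hugging Face transformers and peft. This is not the exact code from the video: the checkpoint name, quantization settings, and LoRA hyperparameters are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # TinyLlama ships without a pad token

# 4-bit quantization so the 1.1B model fits comfortably on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter: train small low-rank update matrices instead of full weights
lora_config = LoraConfig(
    r=16,  # rank of the update matrices (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```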

AI Bootcamp (in preview): https://www.mlexpert.io/membership
Discord: https://discord.gg/UaNPxVD6tv
Subscribe: http://bit.ly/venelin-subscribe
GitHub repository: https://github.com/curiousily/Get-Things-Done-with-Prompt-Engineering-and-LangChain

00:00 - Intro
00:36 - Text tutorial on MLExpert
01:01 - Why fine-tuning Tiny LLM?
04:38 - Prepare the dataset
09:46 - Model & tokenizer setup
11:32 - Token counts
12:41 - Fine-tuning with LoRA
22:13 - Training results & saving the model
24:00 - Inference with the trained model
28:05 - Evaluation
30:46 - Conclusion
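
The fine-tuning, saving, and inference chapters above correspond roughly to the following hedged sketch. The prompt format, dataset variable (`train_dataset`, assumed to be already tokenized), and training hyperparameters are illustrative assumptions, not the video's exact code; `model` and `tokenizer` come from the setup sketch earlier.

```python
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

# Assumed: `train_dataset` holds tokenized prompt+label examples, e.g.
# "Classify the sentiment: <review text>\nSentiment: positive"
training_args = TrainingArguments(
    output_dir="tinyllama-sentiment-lora",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,  # common LoRA learning rate (assumed)
    num_train_epochs=2,
    logging_steps=10,
    fp16=True,
)
trainer = Trainer(
    model=model,  # the PEFT model from the setup sketch
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tinyllama-sentiment-lora")  # saves only the LoRA adapter

# Inference: generate the sentiment label for a new review
prompt = "Classify the sentiment: The battery died after a week.\nSentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
))
```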

Join this channel to get access to the perks and support my work:
https://www.youtube.com/channel/UCoW_WzQNJVAjxo4osNAxd_g/join

#artificialintelligence #sentimentanalysis #llm #llama2 #chatgpt #gpt4 #python #chatbot
