Getting bad predictions from your small LLM? Learn how to fine-tune a small language model (e.g. Phi-2, TinyLlama) and potentially improve its performance on your task. You'll understand how to set up a dataset, tokenizer, base model, and LoRA adapter. We'll then train TinyLlama on a single GPU with custom data and evaluate its predictions.
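Before diving into the full setup, here is a minimal sketch of the core idea behind LoRA: rather than updating the full weight matrix of a layer, you train two small low-rank matrices whose product forms the update, which drastically cuts the number of trainable parameters. The dimensions and scaling below are hypothetical, chosen only to illustrate the parameter savings; real adapters are attached per-layer by a library such as PEFT.

```python
import numpy as np

# LoRA in a nutshell: freeze the pretrained weight W (d_out x d_in)
# and learn a low-rank update B @ A instead. Dimensions here are
# hypothetical, picked just to show the parameter savings.
d_in, d_out, r = 2048, 2048, 8  # r is the LoRA rank
alpha = 16                      # scaling factor for the update

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Base layer output plus the scaled low-rank update (alpha / r).
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)  # identical to W @ x at init, since B is zero

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params: {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}% of full)")
```

With these toy dimensions, the adapter trains well under 1% of the parameters a full fine-tune would touch, which is what makes single-GPU fine-tuning of these models practical.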