Ajay Kommineni’s Post


Building Gen AI Solutions | ML Research | Open source Contributor

New Project Update: Fine-Tuning the Gemma 2B Instruct Model on an Indian History Dataset Using PEFT

Hardware used: a free single P100 GPU on Kaggle. The process is fast: this run took under 30 minutes, with time depending on the dataset's size. This project showcases the potential of small language models and PEFT methods. With free GPUs readily available online, there is immense scope for fine-tuning on domain-specific data in minutes. https://lnkd.in/dP9tzWDk
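A quick sketch of why a PEFT method like LoRA makes fine-tuning a 2B-parameter model feasible on a single free P100: instead of updating a full d_out x d_in weight matrix W, LoRA freezes W and trains two small matrices A (d_out x r) and B (r x d_in), cutting trainable parameters per layer from d_out*d_in to r*(d_out + d_in). The hidden size and rank below are illustrative assumptions, not Gemma's exact configuration.

```python
# Illustrative LoRA parameter-count arithmetic (assumed dimensions,
# not Gemma 2B's actual config).

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted linear layer:
    A is (d_out x r), B is (r x d_in)."""
    return r * (d_out + d_in)

d = 2048                                   # assumed hidden size
full = d * d                               # full-rank update for one layer
lora = lora_trainable_params(d, d, r=8)    # rank-8 LoRA adapter

print(full)                                # 4194304 params per layer
print(lora)                                # 32768 params per layer
print(f"{100 * lora / full:.2f}%")         # LoRA trains ~0.78% of the layer
```

With under 1% of each adapted layer's parameters receiving gradients, optimizer state and activation memory shrink accordingly, which is what lets the fine-tune fit on a single 16 GB P100.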

GitHub - AjayK47/Gemma-Model-Finetuning-Using-Lora: Fine tuning Domain Specific dataset (Personal Dataset) on Gemma 2B Model


What about the accuracy of the Gemma 2B fine-tuned model?

