This project uses BERT to build a QA system fine-tuned on the SQuAD dataset, improving accuracy and efficiency on question-answering tasks. We address challenges in contextual understanding and ambiguity handling to enhance user experience and system performance.
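The core of a SQuAD-style QA head is span extraction: the model scores each token as a possible answer start and end, and the best valid span is returned. A minimal sketch of that selection step, using illustrative tokens and logit values rather than real model output:

```python
# Hedged sketch: the answer-span selection step a BERT QA head performs.
# Token strings and logit values here are illustrative, not model output.

def extract_answer(tokens, start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair with the highest combined logit score,
    subject to start <= end and a maximum answer length."""
    best_score, best_span = float("-inf"), (0, 0)
    for i, s in enumerate(start_logits):
        for j in range(i, min(i + max_len, len(tokens))):
            score = s + end_logits[j]
            if score > best_score:
                best_score, best_span = score, (i, j)
    i, j = best_span
    return " ".join(tokens[i:j + 1])

tokens = ["the", "eiffel", "tower", "is", "in", "paris"]
start_logits = [0.1, 0.2, 0.1, 0.0, 0.3, 2.5]
end_logits = [0.0, 0.1, 0.2, 0.1, 0.2, 3.0]
print(extract_answer(tokens, start_logits, end_logits))  # → paris
```

In practice the logits come from a fine-tuned model's QA head, but the span-selection logic is the same.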
This repository contains code for fine-tuning the Llama 3 8B model using Alpaca prompts to generate Java code. The code is based on a Google Colab notebook.
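Alpaca-style fine-tuning wraps each training pair in a fixed instruction template before tokenization. A minimal sketch of that formatting step; the sample Java instruction/response pair is illustrative:

```python
# Hedged sketch of the Alpaca-style prompt template commonly used to build
# fine-tuning examples; the sample instruction and response are illustrative.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{response}"
)

def format_example(instruction, input_text, response):
    """Render one training example into the Alpaca prompt format."""
    return ALPACA_TEMPLATE.format(
        instruction=instruction, input=input_text, response=response
    )

prompt = format_example(
    "Write a Java method that returns the maximum of two integers.",
    "",
    "public static int max(int a, int b) { return a > b ? a : b; }",
)
print(prompt)
```

Every example in the training set is rendered through this template, so the model learns to produce the response section when it sees the instruction framing at inference time.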
(In progress) Fine-tuning OpenAI's GPT-3.5-Turbo as a base model on open-source data about the Tampa Bay region to create a chatbot specializing in information on the area!
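OpenAI's fine-tuning API for chat models takes a JSONL file where each line is one conversation in the chat-messages format. A minimal sketch of one such training record; the Tampa Bay Q&A content is an illustrative placeholder:

```python
import json

# Hedged sketch: one training record in the chat-format JSONL that OpenAI's
# fine-tuning API expects. The question/answer content is illustrative.

record = {
    "messages": [
        {"role": "system",
         "content": "You are a helpful guide to the Tampa Bay region."},
        {"role": "user",
         "content": "Which river flows through downtown Tampa?"},
        {"role": "assistant",
         "content": "The Hillsborough River flows through downtown Tampa."},
    ]
}

# Each line of the uploaded training file is one JSON object like this.
line = json.dumps(record)
print(line)
```

The full training file is many such lines, one per example conversation, uploaded before the fine-tuning job is created.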
This project enhances the LLaMA-2 model using Quantized Low-Rank Adaptation (QLoRA) and other parameter-efficient fine-tuning techniques to optimize its performance for specific NLP tasks. The improved model is demonstrated through a Streamlit application, showcasing its capabilities in real-time interactive settings.
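The parameter savings behind (Q)LoRA come from training two low-rank factors instead of a full weight matrix. A quick sketch of the arithmetic, assuming a 4096-dimensional projection (the hidden size of LLaMA-2 7B) and a rank of 8, which is a common default rather than a requirement:

```python
# Hedged sketch of why (Q)LoRA is parameter-efficient: instead of updating a
# full d_out x d_in weight matrix W, it trains two low-rank factors
# B (d_out x r) and A (r x d_in) so the update is W + B @ A.
# d=4096 matches a LLaMA-2 7B projection; r=8 is a common default.

def full_params(d_out, d_in):
    # Weights updated by full fine-tuning of one matrix.
    return d_out * d_in

def lora_params(d_out, d_in, r):
    # Trainable weights in the two LoRA factors for the same matrix.
    return d_out * r + r * d_in

d = 4096
print(full_params(d, d))       # 16777216 weights in the frozen matrix
print(lora_params(d, d, r=8))  # 65536 trainable adapter weights
print(lora_params(d, d, r=8) / full_params(d, d))  # 0.00390625 (~0.4%)
```

QLoRA additionally keeps the frozen base weights quantized to 4 bits, so both the trainable-parameter count and the memory footprint shrink, which is what makes single-GPU fine-tuning of 7B-class models practical.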
Unlock the potential of fine-tuning Large Language Models (LLMs). Learn from an industry expert, and discover when to apply fine-tuning, data preparation techniques, and how to effectively train and evaluate LLMs.