Finetuning coding LLM OpenCodeInterpreter-DS-6.7B for Text-to-SQL Code Generation on a Single A100 GPU in PyTorch.
Updated Jun 6, 2024 - Jupyter Notebook
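Text-to-SQL finetuning pairs a natural-language question (and usually the table schema) with the target SQL query in an instruction-style prompt. A minimal sketch of how one training prompt might be assembled — the function name and section headers here are illustrative assumptions, not the repo's actual format:

```python
def build_text2sql_prompt(question: str, schema: str) -> str:
    """Assemble an instruction-style prompt pairing a natural-language
    question with the table schema it should be answered against.
    The model is trained to complete the text after "### Response:"."""
    return (
        "### Instruction:\n"
        "Write a SQL query that answers the question below, "
        "using only the given schema.\n\n"
        f"### Schema:\n{schema}\n\n"
        f"### Question:\n{question}\n\n"
        "### Response:\n"
    )

prompt = build_text2sql_prompt(
    "How many employees joined after 2020?",
    "CREATE TABLE employees (id INT, name TEXT, join_year INT);",
)
print(prompt)
```

During supervised finetuning, the gold SQL query would be appended after the final header so the loss is computed on the response tokens.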
LOLA: LLM-Assisted Online Learning Algorithm for Content Experiments
The LARGE LANGUAGE MODEL FOR HYDROGEN STORAGE project uses advanced natural language processing to improve research efficiency. It offers concise summaries and answers questions about hydrogen storage research papers, helping users quickly understand key insights and the latest advancements.
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
(In progress) Finetuning OpenAI's GPT-3.5-Turbo as a base model on open-source data about the Tampa Bay region to create a chatbot specializing in information on the area!
A friendly neighborhood repository with diverse experiments and adventures in the world of LLMs
End-to-end generative AI industry projects built on LLMs, with deployment
This repository showcases Python scripts demonstrating interactions with various models using the LangChain library. From fine-tuning to custom runnables, explore examples with Gemini, Hugging Face, and Mistral AI models.
Qwen1.5-1.8B sentiment analysis with prompt optimization and QLoRA fine-tuning
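QLoRA freezes the 4-bit-quantized base weights and trains only small low-rank adapter matrices A and B, so the effective projection becomes W + (alpha/r)·A·B. The low-rank part can be sketched in plain Python with toy matrices (this is an illustration of the math, not the repo's training code):

```python
def matmul(a, b):
    """Naive matrix multiply over nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """Forward pass with a LoRA adapter: base projection x@W plus the
    low-rank update x@A@B, scaled by alpha/r as in the LoRA paper."""
    base = matmul(x, W)            # frozen base weights
    delta = matmul(matmul(x, A), B)  # trainable low-rank path
    s = alpha / r
    return [[b + s * d for b, d in zip(br, dr)] for br, dr in zip(base, delta)]

# With zero-initialized B the adapter contributes nothing, matching
# LoRA's initialization so training starts from the base model.
print(lora_forward([[1.0, 2.0]], [[1, 0], [0, 1]],
                   [[1, 0], [0, 1]], [[0, 0], [0, 0]]))
```

In practice the base weights are stored in 4-bit NF4 via bitsandbytes while A and B stay in higher precision; only A and B receive gradients.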
This repository contains code for fine-tuning the Llama 3 8B model using Alpaca prompts to generate Java code. The code is based on a Google Colab notebook.
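The widely used Alpaca template wraps each supervised example in fixed instruction/input/response sections. A small formatter for one Java-generation example (the helper name and the sample strings are illustrative, not taken from the repo):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{response}"
)

def format_example(instruction: str, input_text: str, response: str) -> str:
    """Fill the Alpaca template with one supervised training example."""
    return ALPACA_TEMPLATE.format(
        instruction=instruction, input=input_text, response=response
    )

print(format_example(
    "Write a Java method that reverses a string.",
    "",
    "public static String reverse(String s) {"
    " return new StringBuilder(s).reverse().toString(); }",
))
```

Alpaca also defines a shorter no-input variant; many finetuning scripts pick between the two depending on whether the `input` field is empty.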
Experiments with the Meta-Llama-3-8B
The repository contains the code used to create an instruct-style dataset of Telugu news articles.
Fine-tune Phi-2 for persona-grounded chat
Code Wizard is a coding companion and code-generation tool powered by CodeLlama-v2-34B that automatically generates and enhances code based on best practices found in your GitHub repository.
The small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Jupyter notebooks from "Finetune LLMs" course at deeplearning.ai
User-friendly WebUI for finetuning LLMs
This repo shows how to finetune Google's new Gemma LLM using a custom instruction dataset. I finetuned the Gemma 2B Instruct model on 20k Medium articles for 5 hours on a Kaggle P100 GPU.
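Gemma's instruct variants expect conversations wrapped in its chat-turn markup (`<start_of_turn>` / `<end_of_turn>` with `user` and `model` roles). A minimal helper for turning one instruction/response pair into that format — the function name is an illustrative assumption, and in practice the tokenizer's built-in chat template would handle this:

```python
def to_gemma_chat(instruction: str, response: str) -> str:
    """Wrap one instruction/response pair in Gemma's chat-turn markup,
    as used by the Gemma instruct models."""
    return (
        f"<start_of_turn>user\n{instruction}<end_of_turn>\n"
        f"<start_of_turn>model\n{response}<end_of_turn>\n"
    )

print(to_gemma_chat(
    "Summarize this article in one sentence.",
    "The article surveys recent advances in LLM finetuning.",
))
```

Using the model's expected turn markers during finetuning keeps the training distribution consistent with how the instruct model was originally aligned.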