a friendly neighborhood repository with diverse experiments and adventures in the world of LLMs
Updated May 12, 2024 · Jupyter Notebook
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Fine-tune Mistral 7B to generate fashion style suggestions
A Gradio web UI for Large Language Models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
the small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
End to End Generative AI Industry Projects on LLM Models with Deployment
qwen-1.5-1.8B sentiment analysis with prompt optimization and QLoRA fine-tuning
Finetuning Google's Gemma Model for Translating Natural Language into SQL
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
Code Wizard is a coding companion / code-generation tool powered by CodeLlama-v2-34B that automatically generates and enhances code based on best practices found in your GitHub repository.
Fine-tuning of language models and prompt engineering, using the problem setting of stock price prediction based on high-frequency OHLC stock price data for AAPL. Trains gpt-3.5-turbo on OHLC data to obtain raw-return and log-return predictions.
Code for fine-tuning the Llama 2 LLM on a custom text dataset to produce film-character-styled responses
Fine-tune Phi-2 for persona-grounded chat
Finetuning LLMs + Private Data (Video 1/10) Basic
An audio journaling app that provides AI analysis for your journal entries
This repository contains code for fine-tuning the Llama 3 8B model using Alpaca prompts to generate Java code. The code is based on a Google Colab notebook.
How to fine-tune Google's new Gemma LLM on your own instruction dataset. I fine-tuned the Gemma 2B Instruct model on 20k Medium articles for 5 hours on a Kaggle P100 GPU.
Fine-tuning the t5-base model to detoxify text.
Code used to create an instruct-style dataset of Telugu news articles.
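Many of the repositories above rely on LoRA/QLoRA adapters, whose core idea is adding a trainable low-rank update on top of a frozen pretrained weight matrix. As a minimal illustrative sketch (NumPy only, not the actual code of any repository listed; the function name `lora_forward` and the dimensions are made up for the example):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """LoRA forward pass: y = x W + (alpha / r) * x A B.

    W is the frozen pretrained weight; only the low-rank factors
    A (d_in x r) and B (r x d_out) are trained.
    """
    return x @ W + (alpha / r) * (x @ A @ B)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 2
W = rng.normal(size=(d_in, d_out))       # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01    # trainable down-projection
B = np.zeros((r, d_out))                 # trainable up-projection, zero-initialized
x = rng.normal(size=(1, d_in))

# With B zero-initialized, the adapter starts as a no-op:
# the model's output is unchanged until A and B are trained.
assert np.allclose(lora_forward(x, W, A, B, r=r), x @ W)
```

QLoRA applies the same adapter on top of a 4-bit-quantized base model, which is why several of the projects above can fine-tune 2B-8B models on a single consumer or Kaggle GPU.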