llm-training
Here are 166 public repositories matching this topic...
Explainable Hate Speech Detection using NLP Model Reasoning
Updated Feb 15, 2024 - Jupyter Notebook
Mindful Monk AI represents a revolutionary approach to AI that emphasizes mindfulness, presence, and compassion, assisting you with a diverse range of tasks using a step-by-step approach. MindfulMonk AI is designed to align seamlessly with your objectives.
Updated Mar 7, 2024
A study using LLMs to craft accurate, engaging headlines from Reddit posts
Updated May 5, 2024 - Jupyter Notebook
Updated May 7, 2023 - Python
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Updated Jan 4, 2024 - Python
4-Bit Finetuning of Large Language Models on One Consumer GPU
Updated May 11, 2023 - Python
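The entry above refers to 4-bit finetuning on a single consumer GPU, whose memory savings come from storing the frozen base weights in 4-bit precision. A minimal sketch of block-wise absmax 4-bit quantization, the core idea behind such schemes (e.g. QLoRA); the function names and block size here are illustrative, not taken from any particular library:

```python
def quantize_4bit(weights, block_size=4):
    """Quantize floats to 4-bit codes (0..15) per block, keeping one
    float scale per block (absmax quantization)."""
    blocks = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        scale = max(abs(w) for w in block) or 1.0
        # Map [-scale, scale] onto the signed levels -7..7, stored as 0..15.
        codes = [round(w / scale * 7) + 7 for w in block]
        blocks.append((scale, codes))
    return blocks

def dequantize_4bit(blocks):
    """Recover approximate float weights from (scale, codes) blocks."""
    out = []
    for scale, codes in blocks:
        out.extend((c - 7) / 7 * scale for c in codes)
    return out

weights = [0.12, -0.5, 0.33, 0.07, -0.9, 0.4]
restored = dequantize_4bit(quantize_4bit(weights))
# Each restored weight lies within half a quantization step of the original.
```

Real 4-bit schemes add refinements (e.g. non-uniform NF4 levels and double quantization of the scales), but the memory arithmetic is the same: 4 bits per weight plus one scale per block instead of 16 or 32 bits per weight.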
Tutorials on how to use language models
Updated Dec 2, 2023 - Jupyter Notebook
This repo contains the code for concepts and projects I learned from the courses "LangChain & Vector Databases in Production", "Training and Fine-tuning LLMs for Production", and "Retrieval Augmented Generation for Production with LangChain & LlamaIndex".
Updated Feb 27, 2024 - Jupyter Notebook
This repo is dedicated to providing open-source tutorials for Large Language Model experimentation.
Updated Dec 12, 2023 - Jupyter Notebook
This chatbot is a GPT-2 model fine-tuned to generate responses in the style of Shakespeare.
Updated Dec 18, 2023 - Jupyter Notebook
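Fine-tuning a causal LM like GPT-2 on a text corpus, as the entry above describes, comes down to training the model to predict the next token of the target text. As a toy illustration of that objective only (not the repo's actual GPT-2 code), here is a tiny character-level bigram model trained on an illustrative corpus line:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count next-character frequencies for each character in the corpus."""
    model = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        model[a][b] += 1
    return model

def most_likely_next(model, ch):
    """Greedy prediction: the most frequent character seen after `ch`."""
    return model[ch].most_common(1)[0][0]

corpus = "to be or not to be that is the question"
model = train_bigram(corpus)
# most_likely_next(model, 'q') predicts 'u', the only character
# that follows 'q' in this corpus.
```

A real fine-tune replaces the count table with GPT-2's parameters and the greedy lookup with sampling from the model's softmax over the vocabulary, but the training signal is the same next-token prediction.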
LLM for assisting with form inputs
Updated Oct 10, 2023 - JavaScript
LLMs can mutually compete with and enlighten each other to form a stronger group. Multiple agents can be fine-tuned (SFT) on other agents' positive samples and guided by their own experience via reinforcement learning, mirroring how humans evolved.
Updated May 6, 2024 - Python
Updated Jan 16, 2024 - Python
This project utilizes a machine learning model trained on consumer brand data. Initially, a preliminary model is developed, followed by a refined model produced through fine-tuning to improve results. Additionally, a comprehensive testing suite validates the accuracy and reliability of the model's predictions.
Updated Feb 8, 2024 - Jupyter Notebook
Pretrain, finetune, deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit, LoRA, and more.
Updated Apr 8, 2024 - Python
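Of the techniques the entry above lists, LoRA is the simplest to sketch: instead of updating a full weight matrix W, train a low-rank pair (A, B) and use W + B·A as the effective weight. Pure-Python matrices here are for illustration only; real implementations operate on framework tensors:

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, scaling=1.0):
    """Effective weight W + scaling * (B @ A); only A and B are trained,
    so a rank-r adapter stores r*(m+n) numbers instead of m*n."""
    delta = matmul(B, A)
    return [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Frozen 2x2 base weight with rank-1 adapters (B: 2x1, A: 1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [1.0]]
A = [[2.0, 3.0]]
W_eff = lora_weight(W, A, B)
# W_eff == [[2.0, 1.5], [2.0, 4.0]]
```

Combining this with the 4-bit storage above is what makes single-GPU finetuning of large models practical: the frozen W stays quantized while only the small A and B receive gradients.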
This project leverages LLaMA-2, a powerful and versatile language model, as the foundation for a fine-tuned Large Language Model (LLM) powering a medical AI chatbot. Fine-tuning on a dataset of 5,000 medical questions and answers involves several steps, each crucial for ensuring the model's accuracy, relevance, and safety.
Updated Apr 6, 2024