Gemini Pro LLM and Pinecone Vector Database for fast and performant Retrieval Augmented Generation (RAG) with LlamaIndex
Updated May 20, 2024 · Jupyter Notebook
GPT 3.5 Turbo LLM and MongoDB Atlas Vector Search for fast and performant Retrieval Augmented Generation (RAG) with LlamaIndex
Awesome LLM application repo
Enhance your knowledge of medical research with the help of LLMs and RAG.
🪢 Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
Improve large language model (LLM) retrieval with dynamic web search, driven by fast query generation on Groq hardware ⚡
Sample to envision intelligent apps with Microsoft's Copilot stack for AI-infused product experiences.
The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
Generative AI projects including concepts such as RAG, Fine-tuning, Conversation Retrieval, etc.
The framework for fast development and deployment of RAG systems.
A Blazing Fast AI Gateway. Route to 100+ LLMs with 1 fast & friendly API.
A platform to learn, experiment, and innovate with LLMs: diverse applications, research experiments, and projects powered by language models.
A custom chatbot using Chainlit and LlamaIndex
LlamaIndex is a data framework for your LLM applications
High quality resources & applications for LLMs, multi-modal models and VectorDBs
Testing Different RAG Applications
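Most of the repositories above implement the same core Retrieval Augmented Generation pattern: embed documents, retrieve the ones most similar to the query, and stuff them into the prompt sent to the LLM. As a rough illustration of that flow, here is a minimal, dependency-free sketch using toy bag-of-words embeddings and cosine similarity; production systems (e.g. those built on LlamaIndex with Pinecone or MongoDB Atlas) replace these with dense model embeddings and a vector database, and the helper names below are illustrative, not from any of the listed repos.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts.
    # Real RAG stacks use dense embeddings from a model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query; a vector
    # database performs this step at scale with approximate search.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # "Augmentation": inject the retrieved context into the LLM prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Pinecone is a managed vector database for similarity search.",
    "LlamaIndex is a data framework for LLM applications.",
    "Groq builds chips for fast LLM inference.",
]
print(build_prompt("What is LlamaIndex?", docs))
```

In a real pipeline the final prompt would be sent to an LLM such as Gemini Pro or GPT-3.5 Turbo; the sketch stops at prompt construction, which is the part the frameworks above automate.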