
This repository contains a question-answering model exposed as an interface that retrieves answers to a question from a vector database. Embeddings (tokenised vectors) are computed via the OpenAI API and inserted into ChromaDB for retrieval-augmented generation (RAG). An OpenAI API key is required to run this service.

navneet1083/qaml


Implementation of a Question-Answering Model through RAG

This project stores data as OpenAI embedding vectors computed through API calls. It requires the OPENAI_API_KEY environment variable to be set. It implements RAG in combination with LangChain for retrieval of embedding vectors from the vector database.
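As an illustration of the retrieval step, here is a minimal, self-contained sketch. Hand-made toy vectors stand in for OpenAI embeddings and a plain dict stands in for ChromaDB; none of this is the project's actual code.

```python
import math

# Toy in-memory "vector store" illustrating the retrieval step of RAG.
# In the actual project the embeddings come from the OpenAI API and are
# stored in ChromaDB; here we use hand-made 3-dimensional vectors.
store = {
    "Paris is the capital of France.":    [0.9, 0.1, 0.0],
    "The Eiffel Tower is in Paris.":      [0.8, 0.2, 0.1],
    "Python is a programming language.":  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vector, k=1):
    """Return the k stored documents closest to the query embedding."""
    ranked = sorted(store.items(),
                    key=lambda item: cosine(query_vector, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query embedding close to the "programming" document.
print(retrieve([0.1, 0.0, 0.95]))  # → ['Python is a programming language.']
```

The real service does the same thing conceptually, with ChromaDB handling the nearest-neighbour search and LangChain orchestrating the calls.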

Folder structure

notebooks:

  • Contains Jupyter notebooks
  • Different tests were tried with generative AI models (such as FLAN, QA, BERT)

resources:

  • Contains extra resources (such as templates and sample questions)
  • Also contains the ChromaDB on-disk files
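The on-disk files let the vector store survive restarts. ChromaDB's actual file format is internal to the library; the idea can be illustrated with a simplified JSON stand-in:

```python
import json
import os
import tempfile

# Simplified stand-in for on-disk vector storage. ChromaDB persists its own
# binary/index files under a directory; here a single JSON file plays that role.
def save_store(store, path):
    """Write the document-to-vector mapping to disk."""
    with open(path, "w") as f:
        json.dump(store, f)

def load_store(path):
    """Read the document-to-vector mapping back from disk."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "store.json")
save_store({"doc1": [0.1, 0.2, 0.3]}, path)
print(load_store(path))  # → {'doc1': [0.1, 0.2, 0.3]}
```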

configs:

  • Configuration files

main.py:

  • The main entry point of the service

Technology Stack

  • FastAPI serves the model as a microservice
  • LangChain builds the pipeline across the stages of the generative AI model
  • ChromaDB provides vector database storage
  • OpenAI API for computing embeddings
  • RAG for retrieval
  • FLAN-T5 as a fine-tuned model
  • BERT as a fine-tuned model (RoBERTa variants)
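The stack above composes into an embed → retrieve → generate pipeline. A stand-in sketch of that flow follows; every function name here is illustrative, not the project's actual API, and the bodies are trivial placeholders for the OpenAI, ChromaDB, and FLAN-T5 calls.

```python
# Hypothetical sketch of the RAG pipeline stages, with toy stand-ins
# for the real OpenAI / ChromaDB / LLM calls.

def embed(text):
    """Stand-in for an OpenAI embedding call: hash characters into a tiny vector."""
    vec = [0.0, 0.0, 0.0]
    for i, ch in enumerate(text):
        vec[i % 3] += ord(ch) / 1000.0
    return vec

def retrieve(question_vec, documents):
    """Stand-in for a ChromaDB similarity search: nearest by squared distance."""
    return min(documents,
               key=lambda d: sum((a - b) ** 2 for a, b in zip(d["vector"], question_vec)))

def generate(question, context):
    """Stand-in for the LLM call (e.g. FLAN-T5) that writes the final answer."""
    return f"Q: {question}\nContext: {context}"

def pipeline(question, documents):
    """Embed the question, retrieve the closest document, generate an answer."""
    best = retrieve(embed(question), documents)
    return generate(question, best["text"])
```

In the actual service, LangChain wires these stages together and FastAPI exposes the result as an HTTP endpoint.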

