QA with RAG using LLMs and Knowledge Bases for Amazon Bedrock
Updated
May 21, 2024 - Python
Dynamic Terraform module that creates an OpenSearch Serverless Collection and all related resources
Question-answering generative AI application built with Large Language Models (LLMs) and Amazon OpenSearch Serverless
Multimodal saree search app built using Amazon Titan Multimodal Embeddings model
An AWS CDK application built to explore OpenSearch Serverless
Typical use cases of OpenSearch Serverless: search, time series, Kinesis Data Firehose integration, securing with VPC
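Each of the use cases above begins with an OpenSearch Serverless collection of the matching type. As a minimal sketch (the collection names and use-case mapping are illustrative, not from any repo listed here), a collection can be created with boto3's `opensearchserverless` client:

```python
def build_collection_request(name: str, use_case: str) -> dict:
    """Map a use case to a create_collection request.

    OpenSearch Serverless supports SEARCH, TIMESERIES, and VECTORSEARCH
    collection types; the use-case-to-type mapping below is illustrative.
    """
    type_by_use_case = {
        "search": "SEARCH",
        "time-series": "TIMESERIES",
        "vector": "VECTORSEARCH",
    }
    return {"name": name, "type": type_by_use_case[use_case]}


def create_collection(name: str, use_case: str) -> dict:
    # boto3 is imported lazily so the request builder above can be used
    # without AWS credentials. Creating a collection also requires that
    # matching encryption and network security policies already exist.
    import boto3

    client = boto3.client("opensearchserverless")
    return client.create_collection(**build_collection_request(name, use_case))
```

For the time-series and Kinesis Firehose cases a `TIMESERIES` collection is the natural fit; securing with VPC is handled separately through a VPC endpoint and a network security policy, not through `create_collection` itself.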
A demo chatbot application developed using Amazon Bedrock Knowledge Bases, Agents, and other AWS serverless GenAI solutions.
Dynamic Terraform module that creates a Kinesis Firehose stream and other resources, such as CloudWatch log groups, IAM roles, and security groups, that integrate with Kinesis Firehose. Supports all destinations and all Kinesis Firehose features.
Fully managed RAG solution implemented using Knowledge Bases for Amazon Bedrock
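With Knowledge Bases for Amazon Bedrock, the managed RAG flow reduces to a single `retrieve_and_generate` call on the `bedrock-agent-runtime` client: retrieval from the knowledge base and answer generation happen in one request. A minimal sketch, assuming an existing knowledge base ID and model ARN (both placeholders here):

```python
def build_rag_request(question: str, kb_id: str, model_arn: str) -> dict:
    """Request body for retrieve_and_generate in KNOWLEDGE_BASE mode."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


def ask(question: str, kb_id: str, model_arn: str) -> str:
    # boto3 is imported lazily so the request builder stays usable
    # without AWS credentials; the call itself needs valid credentials
    # and an existing, synced knowledge base.
    import boto3

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        **build_rag_request(question, kb_id, model_arn)
    )
    return response["output"]["text"]
```

The response also carries `citations` pointing back to the retrieved source chunks, which is what makes the answers attributable without any custom retrieval code.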
Bedrock Knowledge Base and Agents for Retrieval Augmented Generation (RAG)
A modular and comprehensive solution to deploy a Multi-LLM and Multi-RAG powered chatbot (Amazon Bedrock, Anthropic, HuggingFace, OpenAI, Meta, AI21, Cohere) using AWS CDK on AWS