The AI-native database built for LLM applications, providing incredibly fast full-text and vector search
How to build a simplified Corrective RAG assistant with Amazon Bedrock using LLMs, an embeddings model, Knowledge Bases for Amazon Bedrock, and Agents for Amazon Bedrock.
Radient turns many data types (not just text) into vectors for similarity search, clustering, regression analysis, and more.
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
All-in-one infrastructure for building search, recommendations, and RAG. Trieve combines search and language models with tools for tuning ranking and relevance.
Minimalist web-searching app with an AI assistant that runs directly from your browser. Uses Web-LLM, Ratchet-ML, Wllama and SearXNG. Demo: https://felladrin-minisearch.hf.space
Providing enterprise-grade LLM-based development framework, tools, and fine-tuned models.
🔍 LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
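Orchestration frameworks like this one all wire the same retrieve-then-generate pattern: a retriever selects relevant documents, and a prompt builder packs them into the LLM's context. A minimal framework-agnostic sketch, using a toy keyword retriever in place of a real vector-DB component (the function names here are illustrative, not the framework's actual API):

```python
# Minimal retrieve-then-generate (RAG) pipeline sketch.
# A toy word-overlap retriever stands in for a real vector store.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by the number of words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Pack the retrieved passages into the prompt sent to the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "RAG grounds LLM answers in retrieved documents.",
    "Vector databases store embeddings for similarity search.",
    "Kubernetes schedules containers across a cluster.",
]
query = "How does RAG ground answers?"
prompt = build_prompt(query, retrieve(query, docs))
```

A production pipeline swaps the keyword retriever for embedding similarity and sends `prompt` to an actual model, but the component wiring stays the same.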
TutorAI is a RAG system that assists with learning academic subjects, grounding its answers in the course curriculum and citing it. The project builds an application that ingests a textbook in most common formats and facilitates efficient learning of the course material.
Backend library for conversational AI in biomedicine
LLM App templates for RAG, knowledge mining, and stream analytics. Ready to run with Docker,⚡in sync with your data sources.
Advanced RAG Pipelines
AWS Generative AI CDK Constructs are sample implementations of AWS CDK for common generative AI patterns.
An app to share your investment portfolio with your friends!
Empower Large Language Models (LLM) using Knowledge Graph based Retrieval-Augmented Generation (KG-RAG) for knowledge intensive tasks
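The KG-RAG idea above can be sketched simply: instead of retrieving text chunks, retrieve facts (subject–predicate–object triples) about entities mentioned in the query and serialize them as context for the LLM. The toy graph and substring-based entity matching below are illustrative assumptions, not the project's implementation:

```python
# Toy knowledge-graph retrieval for KG-RAG.
# A fact matches when its subject or object appears in the query.

TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "inhibits", "COX-1"),
    ("ibuprofen", "treats", "inflammation"),
]

def retrieve_facts(query: str) -> list[tuple[str, str, str]]:
    """Return every triple whose subject or object occurs in the query."""
    q = query.lower()
    return [t for t in TRIPLES if t[0].lower() in q or t[2].lower() in q]

def facts_to_context(facts) -> str:
    """Serialize triples into plain sentences for the LLM prompt."""
    return " ".join(f"{s} {p} {o}." for s, p, o in facts)

facts = retrieve_facts("What does aspirin do?")
context = facts_to_context(facts)
```

Real KG-RAG systems use entity linking and multi-hop graph traversal rather than substring matching, but the retrieval-as-facts framing is the same.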
Generative AI Application Builder on AWS facilitates the development, rapid experimentation, and deployment of generative artificial intelligence (AI) applications without requiring deep experience in AI. The solution includes integrations with Amazon Bedrock and its included LLMs, such as Amazon Titan, and pre-built connectors for third-party LLMs.
Chat with your PDF files for free, using Langchain, Groq, ChromaDB, and Jina AI embeddings.
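Chat-with-your-documents apps like the one above share the same ingestion step: splitting extracted text into overlapping chunks before embedding, so sentences that straddle a chunk boundary are not lost to retrieval. A minimal sketch (the chunk size and overlap values are illustrative):

```python
def chunk_text(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character chunks, with each chunk
    repeating the last `overlap` characters of the previous one."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 120, size=50, overlap=10)
```

Each chunk is then embedded and stored; at query time the question's embedding retrieves the nearest chunks as context.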
The creative suite for character-driven AI experiences.
⚡️Fast persistent storage of multiple document embeddings and their metadata into Pinecone for production-level RAG.
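The pattern behind batched embedding storage: upsert (id, vector, metadata) records in batches, then query by cosine similarity. A stdlib-only stand-in for a real vector index — the class and method names are illustrative, not Pinecone's client API:

```python
import math

class ToyIndex:
    """In-memory stand-in for a vector index with per-record metadata."""
    def __init__(self):
        self.records = {}  # id -> (vector, metadata)

    def upsert(self, batch):
        """Insert or overwrite a batch of (id, vector, metadata) records."""
        for rid, vec, meta in batch:
            self.records[rid] = (vec, meta)

    def query(self, vec, top_k=1):
        """Return the top_k (id, metadata) pairs by cosine similarity."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        ranked = sorted(self.records.items(),
                        key=lambda kv: cosine(vec, kv[1][0]),
                        reverse=True)
        return [(rid, meta) for rid, (v, meta) in ranked[:top_k]]

index = ToyIndex()
index.upsert([("d1", (1.0, 0.0), {"source": "doc1.pdf"}),
              ("d2", (0.0, 1.0), {"source": "doc2.pdf"})])
result = index.query((0.9, 0.1), top_k=1)
```

Production stores add persistence, approximate-nearest-neighbor search, and metadata filtering, but the upsert/query contract is the same.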
Build your own serverless AI Chat with Retrieval-Augmented-Generation using LangChain.js, TypeScript and Azure