AI Resources


✨ A curated repository of code recipes, demos, and resources for basic and advanced Redis use cases in the AI ecosystem. ✨

Table of Contents

  • Demos
  • Recipes
  • Integrations/Tools
  • Additional content

Demos

There's no faster way to get started than diving in and playing around with one of our demos.

| Demo | Description |
| --- | --- |
| ArxivChatGuru | Streamlit demo of RAG over arXiv documents with Redis & OpenAI |
| Redis VSS - Simple Streamlit Demo | Streamlit demo of Redis vector search |
| Vertex AI & Redis | A tutorial featuring Redis with Vertex AI |
| Agentic RAG | A tutorial focused on agentic RAG with LlamaIndex and Cohere |
| ArXiv Search | Full-stack implementation of Redis with a React frontend |
| Product Search | Vector search with Redis Stack and Redis Enterprise |

Recipes

Need specific sample code to help get started with Redis? Start here.

Getting started with RAG

Retrieval-Augmented Generation (RAG) is a technique for improving an LLM's responses to user queries. The retrieval step is backed by a vector database, which returns results semantically relevant to the user's query; those results serve as context to augment the LLM's generation.
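To make the flow concrete, here is a minimal sketch of the retrieval step using the Redis Vector Library (redisvl). The index name, schema, and stand-in embed() helper are illustrative assumptions, not taken from the recipes; it assumes Redis is running on localhost:6379.

```python
import numpy as np
from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

# Stand-in embedding function: returns a deterministic pseudo-random
# vector. Swap in a real model (sentence-transformers, OpenAI, etc.).
def embed(text: str) -> list:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384, dtype=np.float32).tolist()

# A small index schema: one text field plus a 384-dim vector field.
schema = {
    "index": {"name": "docs", "prefix": "doc"},
    "fields": [
        {"name": "content", "type": "text"},
        {"name": "embedding", "type": "vector",
         "attrs": {"dims": 384, "algorithm": "flat",
                   "distance_metric": "cosine", "datatype": "float32"}},
    ],
}

index = SearchIndex.from_dict(schema)
index.connect("redis://localhost:6379")
index.create(overwrite=True)

# Store documents alongside their embeddings (vectors as float32 bytes).
corpus = [
    "Redis supports vector similarity search.",
    "RAG augments an LLM prompt with retrieved context.",
]
index.load([
    {"content": text,
     "embedding": np.array(embed(text), dtype=np.float32).tobytes()}
    for text in corpus
])

# Retrieval step of RAG: fetch the most semantically similar documents,
# whose content would then be added to the LLM prompt as context.
query = VectorQuery(vector=embed("How does RAG work?"),
                    vector_field_name="embedding",
                    return_fields=["content"], num_results=2)
for doc in index.query(query):
    print(doc["content"])
```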

To get started with RAG, whether from scratch or with a popular framework like LlamaIndex or LangChain, start with these recipes:

| Recipe | Description |
| --- | --- |
| /00_intro_redispy | Introduction to vector search using the standard Redis Python client (redis-py) |
| /01_redisvl | RAG from scratch with the Redis Vector Library |
| /02_langchain | RAG using Redis and LangChain |
| /03_llamaindex | RAG using Redis and LlamaIndex |
| /04_advanced_redisvl | Advanced RAG with redisvl |
| /05_nvidia_ai_rag_redis | RAG using Redis and NVIDIA |

Semantic Cache

An estimated 31% of LLM queries are potentially redundant (source). Semantic caching with Redis cuts LLM costs by reusing responses to semantically similar queries; a minimal sketch follows the table.

| Recipe | Description |
| --- | --- |
| /semantic_caching_gemini | Build a semantic cache with Redis and Google Gemini |
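To show the pattern concretely, here is a minimal sketch using redisvl's SemanticCache. It assumes Redis on localhost:6379 and the default vectorizer's dependencies (e.g. sentence-transformers) installed; the threshold and prompts are illustrative.

```python
from redisvl.extensions.llmcache import SemanticCache

# distance_threshold controls how semantically close a new prompt must
# be to a cached one (cosine distance) to count as a hit.
llmcache = SemanticCache(
    name="llmcache",
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,
)

# After the first (paid) LLM call, store the prompt/response pair.
llmcache.store(
    prompt="What is the capital of France?",
    response="The capital of France is Paris.",
)

# A semantically similar prompt can now hit the cache instead of the LLM.
if hits := llmcache.check(prompt="Tell me the capital city of France."):
    print(hits[0]["response"])  # reuse the cached answer
else:
    pass  # cache miss: call the LLM, then store() the new pair
```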

Advanced RAG

Explore techniques for further improving RAG quality, such as dense content representations and query re-writing; a short sketch of query re-writing follows the table.

| Recipe | Description |
| --- | --- |
| /advanced_RAG | Notebook of additional tips and techniques to improve RAG quality |
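As one example of the idea, query re-writing asks an LLM to reformulate a raw user question into a cleaner, self-contained search query before the vector lookup runs. This is a minimal, hypothetical sketch; the complete() helper is a placeholder for whichever chat model you use.

```python
def complete(prompt: str) -> str:
    # Placeholder: send `prompt` to your chat model of choice
    # (OpenAI, Gemini, a local model, ...) and return its text reply.
    raise NotImplementedError("wire up your LLM provider here")

def rewrite_query(raw_question: str) -> str:
    # Stripping chit-chat and making the question self-contained up
    # front tends to produce better vector-search matches.
    prompt = (
        "Rewrite the user question below as a concise, self-contained "
        "search query. Return only the query.\n\n"
        f"Question: {raw_question}"
    )
    return complete(prompt).strip()

# "hey so how do i get redis to cache my LLM answers??" might become
# "How to use Redis as a semantic cache for LLM responses"
```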

Recommendation systems

Our collaboration with NVIDIA to build a state-of-the-art recommendation system is an exciting example of Redis powering production-ready systems.

Within this repository, you'll find three examples of increasing complexity that walk through building such a system.

Integrations/Tools

  • ⭐ RedisVL - a dedicated Python client library for using Redis as a vector database.
  • ⭐ AWS Bedrock - streamlines GenAI deployment by offering foundation models through a unified API.
  • ⭐ LangChain Python - popular Python library for building LLM applications, with Redis-powered integrations (see the sketch after this list).
  • ⭐ LangChain JS - popular JavaScript library for building LLM applications, with Redis-powered integrations.
  • ⭐ LlamaIndex - LlamaIndex integration for Redis as a vector database (formerly GPT Index).
  • Semantic Kernel - popular library from Microsoft for integrating LLMs with plugins.
  • RelevanceAI - platform to tag, search, and analyze unstructured data faster, built on Redis.
  • DocArray - DocArray integration of Redis as a vector database, by Jina AI.
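To give a feel for one of these integrations, below is a minimal LangChain sketch (referenced in the list above). It assumes the langchain-community and langchain-openai packages, an OPENAI_API_KEY in the environment, and Redis on localhost:6379; the texts and index name are illustrative.

```python
from langchain_community.vectorstores.redis import Redis
from langchain_openai import OpenAIEmbeddings

# Index a few documents in Redis, embedding them with OpenAI.
vectorstore = Redis.from_texts(
    texts=[
        "Redis is an in-memory data store.",
        "Vector search finds semantically similar documents.",
    ],
    embedding=OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="example-docs",
)

# Retrieve the closest match for a natural-language query.
docs = vectorstore.similarity_search("What does Redis do?", k=1)
print(docs[0].page_content)
```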

Additional content

Benchmarks

Documentation