A client-side vector search library that can embed, store, search, and cache vectors. Works in the browser and in Node.js. It outperforms OpenAI's text-embedding-ada-002 and is far faster than Pinecone and other vector databases.
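At its core, a client-side vector store like this keeps embeddings in memory and ranks them by cosine similarity against a query vector. The sketch below is illustrative only — the class name and API are assumptions, not the library's actual interface.

```javascript
// Cosine similarity between two equal-length numeric vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Minimal in-memory vector store: add vectors, then query for the
// k nearest entries by cosine similarity (hypothetical API).
class VectorStore {
  constructor() { this.entries = []; }
  add(id, vector) { this.entries.push({ id, vector }); }
  search(query, k = 3) {
    return this.entries
      .map(e => ({ id: e.id, score: cosineSimilarity(query, e.vector) }))
      .sort((x, y) => y.score - x.score)
      .slice(0, k);
  }
}
```

A real library would add persistence (e.g. IndexedDB in the browser) and an embedding step in front of `add`/`search`; the ranking logic stays the same.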
IntelliSearch is an advanced retrieval-based question-answering and recommendation system that leverages embeddings and a large language model (LLM) to provide accurate and relevant information to users.
Opus is a fast, modern LLM playground for embedding- and transformer-based models such as Gemini, GPT, Llama, and other community models from Hugging Face. The UI is built with React.js, and the backend is handled by the robust, cross-platform Node.js runtime.
A CLI chatbot that uses a RAG architecture to adapt an LLM to a specific context. Users can ask questions and get responses either directly from LLM providers (OpenAI, Mistral AI, etc.) or grounded in the content of a website supplied as context via RAG.
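The RAG flow described here boils down to: embed the question, retrieve the most similar context chunks, and prepend them to the LLM prompt. A hedged sketch of that retrieval-and-prompt step follows — `embed` is a toy character-frequency stand-in for a real embedding model, and the actual LLM call is left out.

```javascript
// Toy embedding: letter-frequency vector over a-z (illustration only;
// a real system would call an embedding model here).
function embed(text) {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i]++;
  }
  return v;
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Retrieve the k chunks most similar to the question and build the
// augmented prompt that would be sent to the LLM.
function buildRagPrompt(question, chunks, k = 2) {
  const q = embed(question);
  const context = chunks
    .map(c => ({ c, score: cosine(q, embed(c)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.c)
    .join('\n');
  return `Answer using only this context:\n${context}\n\nQuestion: ${question}`;
}
```

In the real chatbot the chunks would come from the scraped website and the prompt would be sent to the chosen provider; only the retrieval step is sketched here.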
An elegant hybrid search engine that improves search precision by combining keyword queries with semantically related results retrieved via embedding models. Built for experimenting with AI models and integrations.
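A common way to implement hybrid search is to blend a lexical (keyword-overlap) score with a semantic (embedding cosine) score via a weighted sum. The helpers and the weight below are assumptions for illustration, not this engine's actual implementation.

```javascript
// Fraction of document terms that exactly match a query term —
// a crude stand-in for a lexical scorer like BM25.
function keywordScore(query, doc) {
  const qTerms = new Set(query.toLowerCase().split(/\s+/));
  const dTerms = doc.toLowerCase().split(/\s+/);
  const hits = dTerms.filter(t => qTerms.has(t)).length;
  return dTerms.length ? hits / dTerms.length : 0;
}

// Blend lexical and semantic relevance. alpha = 1 is pure keyword search,
// alpha = 0 is pure embedding search; 0.5 is an arbitrary default.
function hybridScore(query, doc, semanticScore, alpha = 0.5) {
  return alpha * keywordScore(query, doc) + (1 - alpha) * semanticScore;
}
```

Documents would then be ranked by `hybridScore`, with `semanticScore` coming from a cosine comparison of query and document embeddings.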