Chain together LLMs for reasoning and orchestrate multiple large models to accomplish complex tasks
Updated Apr 11, 2023 · Python
👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials]
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" (Oral @ ICLR 2023)
A data discovery and manipulation toolset for unstructured data
Famous Vision Language Models and Their Architectures
Image captioning using Python and BLIP
FiveM script that lets civilians dial 911, sending dispatch their location, name, and reason for calling, and adding a blip to the map
BLIP image-captioning demo, with an accompanying Medium blog post
A project aimed at enhancing the visual experience of individuals with visual impairments. Leveraging machine learning and natural language processing, this repository houses the codebase for generating efficient and coherent natural-language descriptions of captured images, integrating with image recognition.
PyTorch code for the paper "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners"
Take Blip challenge: integration of the Take platform with a custom-developed API
Collection of OSS models that are containerized into a serving container