llava
Here are 94 public repositories matching this topic...
A Framework of Small-scale Large Multimodal Models
Updated May 18, 2024 - Python
FreeGenius AI, an advanced AI assistant that can talk and take multi-step actions. Supports numerous open-source LLMs via Llama.cpp, Ollama, or the Groq Cloud API, with optional integration with AutoGen agents, the OpenAI API, Google Gemini Pro, and unlimited plugins.
Updated May 18, 2024 - Python
RestAI is an open-source AIaaS (AI as a Service) platform built on top of LlamaIndex, Ollama, and HF Pipelines. Supports any public LLM supported by LlamaIndex and any local LLM supported by Ollama, with precise embeddings usage and tuning.
Updated May 17, 2024 - Python
Your all-in-one platform to build and use AI apps effortlessly on your own computer.
Updated May 17, 2024 - TypeScript
Tag manager and captioner for image datasets
Updated May 17, 2024 - Python
A one-stop data processing system to make data higher-quality, juicier, and more digestible for LLMs! 🍎 🍋 🌽 ➡️ ➡️ 🍸 🍹 🍷
Updated May 17, 2024 - Python
An efficient, flexible and full-featured toolkit for fine-tuning large models (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Updated May 17, 2024 - Python
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 40+ HF models, and 20+ benchmarks.
Updated May 18, 2024 - Python
Paddle Multimodal Integration and eXploration, supporting mainstream multimodal tasks, including end-to-end large-scale multimodal pretrained models and a diffusion-model toolbox, with high performance and flexibility.
Updated May 17, 2024 - Python
Pheye - a family of efficient small vision-language models
Updated May 16, 2024 - Python
jetson-examples: run AI models and applications on NVIDIA Jetson devices with a one-line command.
Updated May 16, 2024 - Shell
MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.
Updated May 15, 2024 - Python
⚗️ LLaVA 13B model repository, trained by liuhaotian and managed by DVC.
Updated May 15, 2024 - Python
⚗️ Zephyr 7B model repository, trained by HuggingFaceH4 and managed by DVC.
Updated May 15, 2024 - Python
SUPIR aims to develop practical algorithms for photo-realistic image restoration in the wild.
Updated May 15, 2024 - Python
A voice assistant built on multimodal LLMs: a fine-tuned LLaVA-NeXT (Mistral 7B) and PhoWhisper.
Updated May 15, 2024 - Python
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Updated May 15, 2024 - Python
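Many of the toolkits listed here fine-tune or serve LLaVA-style models, which interleave an image placeholder token with chat text; the vision encoder's output is spliced in where the placeholder appears. A minimal illustrative sketch, assuming the `"<image>"` token and `USER`/`ASSISTANT` template commonly used with LLaVA v1.5 (individual projects may use different templates):

```python
# Sketch of a LLaVA-style chat prompt. The "<image>" placeholder and
# USER/ASSISTANT framing follow the common LLaVA v1.5 convention; this
# is an assumption for illustration, not any one repo's exact template.
IMAGE_TOKEN = "<image>"

def build_prompt(question: str) -> str:
    # During preprocessing, the model replaces the placeholder token
    # with the projected vision-encoder features for the input image.
    return f"USER: {IMAGE_TOKEN}\n{question} ASSISTANT:"

print(build_prompt("What is shown in this image?"))
```

The model then generates its answer after the trailing `ASSISTANT:` marker.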
Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters".
Updated May 15, 2024 - Python