Static builds of llama.cpp (Currently only amd64 server builds are available)
A Genshin Impact question-answering project powered by Qwen1.5-14B-Chat
Unofficial Gradio repo for the ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
Ask LLaMA about the image in your clipboard
Some useful apps, containerized.
Presentation on Artificial Intelligence for the Free Drawing and Print Graphics class of the Muthesius Academy of Art.
Repo to download, save, and run quantised LLMs using llama.cpp and benchmark the results (private use)
A chatbot that can respond vocally (TTS), powered by LLaMA
PowerShell automation to download large language models (LLMs) from Git repositories and quantize them with llama.cpp into the GGUF format.
LLM content classification using only prompt engineering
Llama 2 on an Apple Mac using the GPU
A little single-file frontend for llama.cpp/examples/server, created with Vue, Tailwind CSS, and Flask
A lightweight implementation of the OpenAI API on top of local models