Node-RED Flow (and web page example) for the WizardLM AI model
Updated Jul 27, 2023 - HTML
Craft custom Large Language Models (LLMs) effortlessly using Flock. Build LLMs for specific domains like a pro, with support for WizardLM, BLOOM, Falcon, and LLaMA. Extract insights from text and images seamlessly. Powered by Python, pdfminer, LangChain, and Streamlit. Unlock domain-specific intelligence with Flock! 🚀
One-click install for WizardLM-13B-Uncensored with oobabooga webui
Use any LLM from the command line.
A guide on using GPTQ models with LangChain
Open-source implementation of WizardLM to turn documents into Q&A pairs for LLM fine-tuning
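To make the "documents into Q&A pairs" idea concrete, here is a minimal sketch of the data shape such a pipeline produces: instruction/response records serialized as JSONL, the format most fine-tuning tools accept. The function names and the fixed question template are hypothetical; a real WizardLM-style pipeline would use an LLM (e.g. via Evol-Instruct) to generate the questions and answers.

```python
import json

def make_qa_pairs(paragraphs):
    """Turn document paragraphs into instruction/input/output records.

    Hypothetical sketch: a real pipeline would prompt an LLM to write
    varied questions and answers; here a fixed template stands in.
    """
    records = []
    for i, para in enumerate(paragraphs):
        records.append({
            "instruction": f"Summarize the following passage (#{i + 1}).",
            "input": para,
            "output": "",  # to be filled by a teacher model or annotator
        })
    return records

def to_jsonl(records):
    # One JSON object per line -- the layout fine-tuning loaders expect.
    return "\n".join(json.dumps(r) for r in records)

pairs = make_qa_pairs([
    "WizardLM is a fine-tuned LLaMA model.",
    "GPTQ quantizes model weights to low bit widths.",
])
print(to_jsonl(pairs))
```

Each line of the output can then be streamed into a fine-tuning job without further transformation.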
The simplest way to serve AI/ML models in production
WebUI for Fine-Tuning and Self-hosting of Open-Source Large Language Models for Coding
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
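The "single line of code" claim refers to pointing an OpenAI-compatible client at a different base URL. A minimal stdlib-only sketch of that idea, building the same chat-completions request against two endpoints: the `localhost:9997` port and the model name are assumptions for illustration, not guaranteed Xinference defaults.

```python
import json
from urllib.request import Request

def chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat-completions HTTP request.

    Sketch only: the /chat/completions path and payload follow the
    OpenAI-compatible API shape that servers like Xinference expose.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# Hosted OpenAI endpoint:
req_openai = chat_request("https://api.openai.com/v1", "gpt-3.5-turbo", "Hi")
# Local open-source model: only the base URL (and model name) change.
req_local = chat_request("http://localhost:9997/v1", "wizardlm-v1.0", "Hi")
print(req_openai.full_url)
print(req_local.full_url)
```

Because the request shape is identical, application code needs no other changes when switching backends.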