sdnext-remote

CC BY-NC-SA 4.0

SD.Next extension to send compute tasks to remote inference servers. It aims to be universal across providers; feel free to request support for additional ones.

Note

This project is still a work in progress; please report issues.

Providers

SD.Next API, ComfyUI API, StableHorde, NovitaAI, ComfyICU

Features

Model browsing
  • Checkpoints browser 🆗
  • Loras browser 🆗
  • Embeddings browser 🆗
Generation
  • Txt2img 🆗+
  • Second pass (hires) 🆗+ 🆗 🆗
  • Second pass (refiner) 🆗 🆗 🆗+ 🆗
  • Img2img 🆗+ 🆗+
  • Inpainting 🆗+ 🆗+ 🆗+
  • Outpainting 🆗 🆗 🆗 🆗+ 🆗
  • Upscale & Postprocess 🆗 🆗 🆗 🆗
  • AnimateDiff 🆗 🆗 🆗
Generation control
  • Loras and TIs 🆗 🆗
  • ControlNet 🆗 🆗 ⚠️ ⚠️ 🆗
  • IpAdapter 🆗 🆗 🆗+ 🆗
User
  • Balance (credits/kudos)
  • Generation cost estimation 🆗 🆗

Legend:
  • ✅ functional
  • ⚠️ partial functionality
  • 🆗+ work in progress
  • 🆗 roadmap
  • ⭕ not needed
  • ❌ not supported

Additional features

  • StableHorde worker settings
  • Dynamic samplers/upscalers lists
  • API calls caching
  • Hide NSFW networks option

Why yet another extension?

There are already plenty of integrations of AI Horde. The point of this extension is to bring all remote providers into the same familiar UI instead of relying on other websites. Eventually I'd also like to add support for other SD.Next extensions such as dynamic prompts, deforum, tiled diffusion, adetailer and regional prompter (UI extensions such as aspect ratio, image browser, canvas zoom or openpose editor should already be supported).

Installation & usage

  1. Installation
    1. Go to extensions > manual install > paste https://github.com/BinaryQuantumSoul/sdnext-remote > install
    2. Go to extensions > manage extensions > apply changes & restart server
    3. Go to system > settings > remote inference > set the right API endpoints & keys
  2. Usage
    1. Select the desired remote inference service in the dropdown, refresh the model list and select a model
    2. Set generation parameters as usual and click generate

Note

You can launch SD.Next with --debug to follow API requests.
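For reference, a typical debug launch on Linux might look like the sketch below; it assumes the standard webui.sh launcher from a default SD.Next checkout (use webui.bat on Windows).

```shell
# Start SD.Next with verbose logging; the API requests issued by the
# remote-inference extension are then printed to the console.
./webui.sh --debug
```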

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

CC BY-NC-SA 4.0