ChatGPT Next Website

One-Click to get a well-designed cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.


Web App / Desktop App / Discord / Twitter

Deploy with Vercel

Deploy on Zeabur

Open in Gitpod


Features

  • Deploy for free with one click on Vercel in under 1 minute
  • Compact client (~5 MB) on Linux/Windows/macOS, download it now
  • Fully compatible with self-deployed LLMs, recommended for use with RWKV-Runner or LocalAI
  • Privacy first: all data is stored locally in the browser
  • Markdown support: LaTeX, Mermaid, code highlighting, etc.
  • Responsive design, dark mode and PWA
  • Fast first-screen loading (~100 kB), with streaming response support
  • New in v2: create, share and debug your chat tools with prompt templates (mask)
  • Awesome prompts powered by awesome-chatgpt-prompts-zh and awesome-chatgpt-prompts
  • Automatically compresses chat history to support long conversations while saving your tokens
  • I18n: English, 简体中文, 繁体中文, 日本語, Français, Español, Italiano, Türkçe, Deutsch, Tiếng Việt, Русский, Čeština, 한국어, Indonesia

Roadmap

  • System Prompt: pin a user-defined prompt as the system prompt #138
  • User Prompt: users can edit and save custom prompts to the prompt list
  • Prompt Template: create a new chat with pre-defined in-context prompts #993
  • Share as image, share to ShareGPT #1741
  • Desktop app with Tauri
  • Self-hosted models: fully compatible with RWKV-Runner, as well as server deployments of LocalAI: llama/gpt4all/rwkv/vicuna/koala/gpt4all-j/cerebras/falcon/dolly etc.
  • Plugins: support network search, calculator, and other APIs #165

What's New

  • 🚀 v2.10.1: support for the Google Gemini Pro model.
  • 🚀 v2.9.11: you can now use an Azure endpoint.
  • 🚀 v2.8: we now have a client that runs on all platforms!
  • 🚀 v2.7: share conversations as images, or share to ShareGPT!
  • 🚀 v2.0 is released; now you can create prompt templates and turn your ideas into reality! Read this: ChatGPT Prompt Engineering Tips: Zero, One and Few Shot Prompting.

Get Started

  1. Get an OpenAI API key;
  2. Click Deploy with Vercel; remember that CODE is your page password;
  3. Enjoy :)

FAQ

English > FAQ

Keep Updated

If you have deployed your own project with just one click following the steps above, you may find that "Updates Available" keeps showing up. This is because Vercel creates a new project for you by default instead of forking this one, which prevents updates from being detected correctly.

We recommend that you follow the steps below to re-deploy:

  • Delete the original repository;
  • Use the fork button in the upper right corner of the page to fork this project;
  • Choose and deploy in Vercel again; please see the detailed tutorial.

Enable Automatic Updates

If the Upstream Sync action fails, manually sync your fork once.

After forking the project, due to limitations imposed by GitHub, you need to manually enable Workflows and the Upstream Sync Action on the Actions page of the forked project. Once enabled, automatic updates will be scheduled to run every hour.

Manually Updating Code

If you want to update instantly, you can check out the GitHub documentation to learn how to synchronize a forked project with upstream code.

You can star or watch this project, or follow the author, to get release notifications in time.

Access Password

This project provides limited access control. Please add an environment variable named CODE on the Vercel environment variables page. The value should be passwords separated by commas, like this:

code1,code2,code3

After adding or modifying this environment variable, please redeploy the project for the changes to take effect.
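
If you self-host instead of deploying on Vercel, the same variable can be set in your .env.local file or passed to docker run; a minimal sketch, with the placeholder passwords from the example above:

CODE=code1,code2,code3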

Environment Variables

CODE (optional)

Access passwords, separated by commas.

OPENAI_API_KEY (required)

Your OpenAI API key; join multiple API keys with commas.
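
For example, with two keys joined by a comma (both keys below are placeholders):

OPENAI_API_KEY=sk-aaaa,sk-bbbb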

BASE_URL (optional)

Default: https://api.openai.com. Example: http://your-openai-proxy.com

Overrides the base URL for OpenAI API requests.
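
For instance, to send requests through a self-hosted proxy instead of api.openai.com (the hostname is the placeholder from the example above):

BASE_URL=http://your-openai-proxy.com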

OPENAI_ORG_ID (optional)

Specify OpenAI organization ID.

AZURE_URL (optional)

Example: https://{azure-resource-url}/openai/deployments/{deploy-name}

Azure deployment URL.

AZURE_API_KEY (optional)

Azure API key.

AZURE_API_VERSION (optional)

Azure API version; you can find it in the Azure documentation.
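
Putting the three Azure variables together, a sketch of an Azure setup might look like this (the resource URL, deployment name, key, and API version below are placeholders; use the values from your own Azure resource and the Azure documentation):

AZURE_URL=https://{azure-resource-url}/openai/deployments/{deploy-name}
AZURE_API_KEY=your-azure-api-key
AZURE_API_VERSION=2023-05-15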

GOOGLE_API_KEY (optional)

Google Gemini Pro API key.

GOOGLE_URL (optional)

Google Gemini Pro API URL.
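
A sketch of a Gemini Pro setup (the key is a placeholder, and GOOGLE_URL is only needed if you route requests through your own proxy):

GOOGLE_API_KEY=your-gemini-api-key
# only set this if you proxy the Gemini API; the value below is a placeholder
GOOGLE_URL=http://your-gemini-proxy.com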

HIDE_USER_API_KEY (optional)

Default: Empty

If you do not want users to input their own API key, set this value to 1.

DISABLE_GPT4 (optional)

Default: Empty

If you do not want users to use GPT-4, set this value to 1.

ENABLE_BALANCE_QUERY (optional)

Default: Empty

If you want users to be able to query their balance, set this value to 1.

DISABLE_FAST_LINK (optional)

Default: Empty

If you want to disable parsing settings from the URL, set this value to 1.
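
Taken together, a deployment that hides user API key input, disables GPT-4, enables balance queries, and disables fast links would set:

HIDE_USER_API_KEY=1
DISABLE_GPT4=1
ENABLE_BALANCE_QUERY=1
DISABLE_FAST_LINK=1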

CUSTOM_MODELS (optional)

Default: Empty. Example: +llama,+claude-2,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo means add llama and claude-2 to the model list, remove gpt-3.5-turbo from the list, and display gpt-4-1106-preview as gpt-4-turbo.

To control custom models: use + to add a custom model, - to hide a model, and name=displayName to customize a model's display name; separate entries with commas.

Use -all to disable all default models, or +all to enable all default models.
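
Written as an environment variable, the example above becomes:

CUSTOM_MODELS=+llama,+claude-2,-gpt-3.5-turbo,gpt-4-1106-preview=gpt-4-turbo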

Requirements

Node.js >= 18, Docker >= 20

Development

Open in Gitpod

Before starting development, you must create a new .env.local file at the project root and place your API key in it:

OPENAI_API_KEY=<your api key here>

# if you cannot access the OpenAI service directly, use this BASE_URL
BASE_URL=https://chatgpt1.nextweb.fun/api/proxy

Local Development

# 1. install Node.js and yarn first
# 2. configure local env vars in `.env.local`
# 3. run
yarn install
yarn dev
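
Once yarn dev is running, the app should be reachable at http://localhost:3000 by default (the standard Next.js dev port); the same port is used by the Docker examples below.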

Deployment

Docker (Recommended)

docker pull yidadaa/chatgpt-next-web

docker run -d -p 3000:3000 \
   -e OPENAI_API_KEY=sk-xxxx \
   -e CODE=your-password \
   yidadaa/chatgpt-next-web

You can start the service behind a proxy:

docker run -d -p 3000:3000 \
   -e OPENAI_API_KEY=sk-xxxx \
   -e CODE=your-password \
   -e PROXY_URL=http://localhost:7890 \
   yidadaa/chatgpt-next-web

If your proxy requires a password, use:

-e PROXY_URL="http://127.0.0.1:7890 user pass"
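
For example, a complete run command with an authenticated proxy might look like this (the API key, page password, and proxy credentials are placeholders):

docker run -d -p 3000:3000 \
   -e OPENAI_API_KEY=sk-xxxx \
   -e CODE=your-password \
   -e PROXY_URL="http://127.0.0.1:7890 user pass" \
   yidadaa/chatgpt-next-web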

Shell

bash <(curl -s https://raw.githubusercontent.com/Yidadaa/ChatGPT-Next-Web/main/scripts/setup.sh)

Synchronizing Chat Records (UpStash)

简体中文 | English | Italiano | 日本語 | 한국어

Documentation

Please see the docs directory for more documentation.

Screenshots

Settings

More

Translation

If you want to add a new translation, read this document.

LICENSE

ChatGPT Next Web is licensed under GNU General Public License v3.0.