
'Internal Server Error' is returned when calling the API on port 1337 #1930

Closed · 1xiongyuwen1 opened this issue May 8, 2024 · 19 comments
Labels: bug (Something isn't working)

@1xiongyuwen1

The API endpoint is: http://127.0.0.1:1337/v1/chat/completions
The request body is:
{ "model": "gpt-3.5-turbo-16k", "stream": "False", "messages": [ {"role": "assistant", "content": "Hello"} ] }
The API call returns: Internal Server Error

1xiongyuwen1 added the bug (Something isn't working) label May 8, 2024
@hlohaus
Collaborator

hlohaus commented May 8, 2024

Use stream: false,
not:
stream: "False"

Also, your JSON is not valid, or it has too many spaces or line breaks.
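
For reference, here is a minimal sketch of a working request from Python (an illustration only; it assumes the API is running locally on the default port 1337, and the requests library plus the "user" role are just example choices):

```python
# Minimal sketch: send "stream" as a JSON boolean, not the string "False".
# Assumes the g4f API is listening on 127.0.0.1:1337.
import requests

payload = {
    "model": "gpt-3.5-turbo-16k",
    "stream": False,  # boolean, not "False"
    "messages": [{"role": "user", "content": "Hello"}],
}
resp = requests.post("http://127.0.0.1:1337/v1/chat/completions", json=payload)
print(resp.status_code)
print(resp.json())
```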

@1xiongyuwen1
Author

{"model": "gpt-3.5-turbo-16k","stream": "false","messages":[{"role": "assistant", "content": "Hello"}]}
Whether it's stream: "False" or stream: "false", with or without the extra spaces, the result is the same; the JSON itself isn't affected by whitespace. I have tried all of these variants, but none of them worked. Has anyone else encountered this issue?

@xannanov

Likely, the model you want to use is not functional

@MockArch

Same issue! Any update on this?


@hlohaus
Collaborator

hlohaus commented May 12, 2024

What do you see in the terminal / error logs? Try gpt-4 or another provider/model, and leave the api_key blank.
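
For example, something like this (a sketch only; gpt-4 stands in for whatever model or provider currently works, and no api_key is sent):

```python
# Sketch: same request against the local API, but with gpt-4 and no api_key.
import requests

payload = {
    "model": "gpt-4",
    "stream": False,
    "messages": [{"role": "user", "content": "Hello"}],
}
resp = requests.post("http://127.0.0.1:1337/v1/chat/completions", json=payload)
print(resp.status_code)
print(resp.text[:500])  # inspect the raw body if it is not valid JSON
```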

@MockArch

@hlohaus Hey, I tried gpt-4 but got the same error.

I found some backend errors, as shown below:

(screenshot of the backend error)

@hlohaus
Collaborator

hlohaus commented May 12, 2024

Do you use workers? Can you uninstall uvloop?
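
A quick way to confirm uvloop is really gone from the environment that serves the API (a minimal sketch; run it inside the same interpreter or container):

```python
# Sketch: check whether uvloop is still importable in this environment.
import importlib.util

print("uvloop installed:", importlib.util.find_spec("uvloop") is not None)
```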

@MockArch

MockArch commented May 12, 2024

No luck!

Let me explain how I'm using it.

I'm running g4f in Docker; below is the Dockerfile:

```dockerfile

# Use the official Python image
FROM python:3.9

# Install g4f with all dependencies
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install g4f[all]
RUN echo y | pip uninstall uvloop


# Command to run the g4f API
#CMD ["python", "-m", "g4f.api.run"]


# The g4f API listens on port 1337 inside the container
EXPOSE 1337

# Start the g4f API via an ENTRYPOINT (for development only)
ENTRYPOINT ["/bin/sh", "-c", "python -m g4f.api.run"]
```


Build and run:

```
docker build -t g4f-api .
docker run -p 80:1337 -d g4f-api
```

@MockArch

The same is reported here: #1928

@embarce

embarce commented May 12, 2024

Same issue. Maybe an async loop version problem?

@hlohaus
Collaborator

hlohaus commented May 12, 2024

Hey @MockArch, why aren't you using our image?

@asiryan

asiryan commented May 12, 2024

I have the same problem

@rafaeluriarte

The same here.

@embarce

embarce commented May 13, 2024

requirements.txt

Here is the requirements file exported from the Docker image I used recently. This should help you. I have already rebuilt my own image.

@theDaRkMaN1984

theDaRkMaN1984 commented May 14, 2024

I have the same problem. After a while I get an Internal Server Error (500) back from the API.
Ubuntu 24.04
Running via Docker

At first some requests work, but after a while it stops.

@edferr

edferr commented May 14, 2024

> @hlohaus Hey, I tried gpt-4 but got the same error. I found some backend errors, as shown below.

Same here: I just grabbed a new image, and it all broke. I suspect async. What is the official solution?

Here is my error text:

```
2024-05-14 13:21:52.854 INFO: HTTP Request: POST http://localhost:1337/v1/chat/completions "HTTP/1.1 200 OK"
2024-05-14 13:21:52.860 ERROR: Error while streaming response
Traceback (most recent call last):
  File "C:\Users\JohnWick3\IdeaProjects\discord-llm-chatbot\llmcord.py", line 322, in on_message
    async for chunk in await acompletion(**kwargs):
  File "C:\Users\JohnWick3\.conda\envs\DiscoBot\Lib\site-packages\litellm\utils.py", line 9973, in __anext__
    raise e
  File "C:\Users\JohnWick3\.conda\envs\DiscoBot\Lib\site-packages\litellm\utils.py", line 9857, in __anext__
    async for chunk in self.completion_stream:
  File "C:\Users\JohnWick3\AppData\Roaming\Python\Python311\site-packages\openai\_streaming.py", line 150, in __aiter__
    async for item in self._iterator:
  File "C:\Users\JohnWick3\AppData\Roaming\Python\Python311\site-packages\openai\_streaming.py", line 181, in __stream__
    raise APIError(
openai.APIError: RetryProviderError: RetryProvider failed:
OpenaiChat: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
ChatgptNext: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Feedough: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
You: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Aichatos: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Koala: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
FreeGpt: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
Cnote: ValueError: Can't patch loop of type <class 'uvloop.Loop'>
```

Any help is definitely appreciated!

@hlohaus
Collaborator

hlohaus commented May 14, 2024

Can you uninstall uvloop or use a provider directly?
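
For example, calling a provider directly with the g4f Python package looks roughly like this (a sketch only; FreeGpt is just one of the providers from your traceback, and any working provider will do):

```python
# Sketch: bypass the HTTP API and call one provider directly via g4f.
import g4f
from g4f.Provider import FreeGpt  # example provider

response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=FreeGpt,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```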

@edferr

edferr commented May 14, 2024

I can use the g4f web chat on port 8080 with no problem. Also, an old image (quite old, 2.7.1) works fine; that was the only other Docker build I had handy. I think it was fine up until the last version or two, though. I'm pretty sure I need async, but I'm not well versed enough yet to know the underlying details of uvloop, etc.

Other than this, I've been using this project very well. Zero other problems.

UPDATE: I found another image, 3.0.7, and that one also still works.

FURTHER UPDATE: I am using Docker Desktop on a Windows machine. Here is a copy of the docker run command for the WORKING g4f version 3.0.7, if useful:

docker run --hostname=e466ba3ab3fa --user=1000 --env=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/g4f/.local/bin --env=DEBIAN_FRONTEND=noninteractive --env=DEBCONF_NONINTERACTIVE_SEEN=true --env=SEL_USER=g4f --env=SEL_UID=1000 --env=SEL_GID=1000 --env=HOME=/home/g4f --env=TZ=UTC --env=SEL_DOWNLOAD_DIR=/Downloads --env=SE_BIND_HOST=false --env=SE_REJECT_UNSUPPORTED_CAPS=false --env=SE_OTEL_JAVA_GLOBAL_AUTOCONFIGURE_ENABLED=true --env=SE_OTEL_TRACES_EXPORTER=otlp --env=LANG_WHICH=en --env=LANG_WHERE=US --env=ENCODING=UTF-8 --env=LANGUAGE=en_US.UTF-8 --env=LANG= --env=SE_ENABLE_BROWSER_LEFTOVERS_CLEANUP=false --env=SE_BROWSER_LEFTOVERS_INTERVAL_SECS=3600 --env=SE_BROWSER_LEFTOVERS_PROCESSES_SECS=7200 --env=SE_BROWSER_LEFTOVERS_TEMPFILES_DAYS=1 --env=SE_DRAIN_AFTER_SESSION_COUNT=0 --env=SE_NODE_MAX_SESSIONS=1 --env=SE_NODE_SESSION_TIMEOUT=300 --env=SE_NODE_OVERRIDE_MAX_SESSIONS=false --env=SE_NODE_HEARTBEAT_PERIOD=30 --env=SE_OTEL_SERVICE_NAME=selenium-node --env=SE_OFFLINE=true --env=SE_SCREEN_WIDTH=1850 --env=SE_SCREEN_HEIGHT=1020 --env=SE_SCREEN_DEPTH=24 --env=SE_SCREEN_DPI=96 --env=SE_START_XVFB=true --env=SE_START_VNC=true --env=SE_START_NO_VNC=true --env=SE_NO_VNC_PORT=7900 --env=SE_VNC_PORT=5900 --env=DISPLAY=:99.0 --env=DISPLAY_NUM=99 --env=CONFIG_FILE=/opt/selenium/config.toml --env=GENERATE_CONFIG=true --env=DBUS_SESSION_BUS_ADDRESS=/dev/null --env=G4F_VERSION= --env=G4F_USER=g4f --env=G4F_USER_ID=1000 --env=G4F_NO_GUI= --env=PYTHONUNBUFFERED=1 --env=G4F_DIR=/app --env=G4F_LOGIN_URL=http://localhost:7900/?autoconnect=1&resize=scale&password=secret --env=SE_DOWNLOAD_DIR=/home/g4f/Downloads --volume=/mnt/c/gpt4free2:/app:rw --network=gpt4free2_default --workdir=/app -p 1337:1337 -p 7900:7900 -p 8080:8080 --restart=no --label='authors=' --label='com.docker.compose.config-hash=3f07d5e95a66f9d2c685d3a54b8dee7f91e653f1960fa8792830c137000a5f94' --label='com.docker.compose.container-number=1' --label='com.docker.compose.depends_on=' --label='com.docker.compose.image=sha256:57e2a18f46603015825882402d1a17929ebb5d1e13464991ede771ab4fce4211' --label='com.docker.compose.oneoff=False' --label='com.docker.compose.project=gpt4free2' --label='com.docker.compose.project.config_files=/mnt/c/gpt4free2/docker-compose.yml' --label='com.docker.compose.project.working_dir=/mnt/c/gpt4free2' --label='com.docker.compose.replace=963c48a32e503da8fb0aea7bdff6fcd0c31cd7bc4e673a1f14a91f603140ec0c' --label='com.docker.compose.service=gpt4free' --label='com.docker.compose.version=2.26.1' --label='desktop.docker.io/wsl-distro=Ubuntu' --runtime=runc -d hlohaus789/g4f:latest

@hlohaus
Collaborator

hlohaus commented May 19, 2024

I fixed the uvloop issue.

@hlohaus hlohaus closed this as completed May 19, 2024