Error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\client\client.py", line 114, in create
    return response if stream else next(response)
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\client\client.py", line 53, in iter_append_model_and_provider
    for chunk in response:
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\client\client.py", line 28, in iter_response
    for idx, chunk in enumerate(response):
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\providers\base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
    return future.result()
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\providers\base_provider.py", line 45, in await_callback
    return await callback()
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\Provider\needs_auth\Openai.py", line 56, in create_async_generator
    await raise_for_status(response)
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\requests\raise_for_status.py", line 28, in raise_for_status_async
    raise ResponseStatusError(f"Response {response.status}: {message}")
g4f.errors.ResponseStatusError: Response 403: {"detail":{"error":"Not authenticated"}}
```
Hey there! It seems like the free GPT model might not be accessible in your region. Have you tried using the Llama model? Let me know if that works for you!
My code:

```python
client = Client(provider='DeepInfra')
response_q = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    extra_body={"provider": "DeepInfra"},
    messages=[{"role": "user", "content": prompt}],
)
```
Error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\client\client.py", line 114, in create
    return response if stream else next(response)
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\client\client.py", line 53, in iter_append_model_and_provider
    for chunk in response:
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\client\client.py", line 28, in iter_response
    for idx, chunk in enumerate(response):
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\providers\base_provider.py", line 216, in create_completion
    yield loop.run_until_complete(await_callback(gen.__anext__))
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
    return future.result()
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\providers\base_provider.py", line 45, in await_callback
    return await callback()
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\Provider\needs_auth\Openai.py", line 56, in create_async_generator
    await raise_for_status(response)
  File "c:\Users\lenovo\Desktop\Projects\GTP4FREE\venv\lib\site-packages\g4f\requests\raise_for_status.py", line 28, in raise_for_status_async
    raise ResponseStatusError(f"Response {response.status}: {message}")
g4f.errors.ResponseStatusError: Response 403: {"detail":{"error":"Not authenticated"}}
```