Response needs to process functions [panic: Unrecognized schema: map[]] #2223
I apologize for the inconvenience. It seems like there might be an issue with the schema processing. I suggest trying to update LocalAI to the latest version or reinstalling it to resolve this issue. If the problem persists, please open a new ticket with specific details about the debug logs and the steps to reproduce the issue, so we can look into this further. Thank you for bringing this to our attention, and I assure you that we'll work on solving this problem as quickly as possible. |
Same issue there, |
Unfortunately, it is still an issue with the 2024-05-07T11:38:35.997Z |
Same issue,follow the example of function . |
Also facing this issue. It occurs regardless of the model used; I've tried several that are supposed to work with functions. It should be noted that when interfacing with OpenAI's official API instead, it works as expected. |
@mudler can you take a look? It seems something common to all the backends and models. |
Is this happening only with transformers models, I suppose? Function calls are automatically tested by the CI (however, only with llama.cpp, as it runs easily on the runners). Function calls map automatically to grammars, which are currently supported only by llama.cpp. However, I think you should be able to disable that behavior by turning off grammars and extracting the tool arguments from the LLM responses, by specifying in the YAML file:

```yaml
function:
  no_grammar: true
  response_regex: "..."
```

The response regex has to be a regex with named parameters to allow scanning for the function name and the arguments. For instance, consider:
will catch
Update: I've updated the docs now to mention this specific setting here: https://localai.io/features/openai-functions/#use-functions-without-grammars |
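To make the named-parameter idea concrete, here is a small sketch of how such a `response_regex` extracts the function name and arguments from a raw completion. The group names, the regex itself, and the sample LLM output are illustrative assumptions, not taken from this thread:

```python
import json
import re

# Hypothetical response_regex with named capture groups ("function" and
# "arguments"), in the style described in the LocalAI docs linked above.
response_regex = r'(?P<function>\w+)\s*\((?P<arguments>.*)\)'

# Example raw completion from a model running with grammars disabled
llm_output = 'execute_services({"domain": "light", "service": "turn_on"})'

match = re.search(response_regex, llm_output)
assert match is not None

# The named groups give us the tool name and a JSON argument payload
name = match.group("function")
arguments = json.loads(match.group("arguments"))
print(name, arguments)
```

If the model hallucinates and produces output that doesn't match the regex (or invalid JSON), this extraction fails, which is the trade-off of running without grammars.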
Anyone know how to apply this "no_grammar" fix to the OpenAI extended addon for HA? I added the YAML lines to my model.yaml with no luck:
Edit1: I tried with this one as well with no luck:
Setting functions to "0" works, but setting it to 1 makes it crash with the MAP error. |
If it helps, this is the request body sent by jekalmin's integration:
And from the LocalAI Log:
Edited because I was figuring out GitHub's Markdown. |
Okay, I did some more digging. I found evidence of this working as recently as February in a blog post, so I pulled 2.8.2 and implemented it as described here: https://theawesomegarage.com/blog/configure-a-local-llm-to-control-home-assistant-instead-of-chatgpt . Requests work fine in that release, while the same model and config does not work on latest. Some change between then and now seems to have broken the functionality. |
I found that removing function_call from the request can avoid this issue. |
Well yeah, but then that kind of defeats the purpose. You need the function call to actually control ha. |
It seems to work with this older version: 2.8.2 |
Several reports are with llama.cpp, which is why I doubt it's a transformers issue. |
2.8.2 seems to work fine |
No luck for me on the proposed fix. I'm working to pull 2.8.2 and give that a try. |
Are you able to get the functions to work at all? For me it doesn't throw the error, but I don't think the functions are actually called. |
I just tested the function example; below are the two responses:
Second response:
seems to work fine. |
I see! What about the "execute_services" one? I tried something simple like "Turn off the office lights" which works with OpenAI with the same configuration. This is my function:
|
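For context, the `execute_services` function in the Extended OpenAI Conversation integration is typically defined along these lines. This is a paraphrased sketch of the integration's default spec, written from memory; check jekalmin's repository for the exact version:

```yaml
- spec:
    name: execute_services
    description: Use this function to execute services of devices in Home Assistant.
    parameters:
      type: object
      properties:
        list:
          type: array
          items:
            type: object
            properties:
              domain:
                type: string
                description: The domain of the service
              service:
                type: string
                description: The service to be called
              service_data:
                type: object
                description: The service data object indicating what to control
  function:
    type: native
    name: execute_service
```

The nested `list`-of-objects schema here is notable, since the panic in this issue occurs while LocalAI is converting such a function schema into a grammar.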
Whether or not the functions work depends heavily on the model and whether you have the entity exposed. |
Hi, I'm the author of the blog post mentioned above (theawesomegarage). I had the Extended OpenAI Conversation integration working back in February. I installed LocalAI just now on a new server, with the latest version, and I get the same panic error, and my LocalAI Docker container crashes. The Home Assistant integration works as expected if I instead use the actual OpenAI API as my back-end, or the old version of LocalAI. If I disable function calls in the Extended OpenAI Conversation integration, local-ai doesn't crash anymore, but I can't interact with Home Assistant either - I just have a pleasant conversation with the AI. |
Good to see you here! Have to thank you for that post, cause it got me started on the concept of local chatgpt integration with home assistant. I have found an alternative to localai for the moment... Using Ollama with the Llama Conversation custom integration by acon96. I do hope the Local AI compatibility gets sorted though, as I prefer it over Ollama |
I can't reproduce it here - however, in the new LocalAI version I focused on enhancing tool support: now you can also disable grammars entirely if the model supports it. I've tested with Hermes; if this issue persists when using grammars, you can disable them as follows (but note that if the LLM hallucinates and fails to reply with valid JSON, it will break):

```yaml
name: nous-hermes
mmap: true
parameters:
  model: huggingface://NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF/Hermes-2-Pro-Llama-3-8B-Q4_K_M.gguf
context_size: 8192
stopwords:
- "<|im_end|>"
- "<dummy32000>"
- "</tool_call>"
- "<|eot_id|>"
- "<|end_of_text|>"
function:
  # disable injecting the "answer" tool
  disable_no_action: true
  grammar:
    # This allows the grammar to also return messages
    #mixed_mode: true
    disable: true
    # Suffix to add to the grammar
    #prefix: '<tool_call>\n'
    # Force parallel calls in the grammar
    # parallel_calls: true
  return_name_in_function_response: true
  # Without grammar uncomment the lines below
  # Warning: this is relying only on the capability of the
  # LLM model to generate the correct function call.
  json_regex_match:
  - "(?s)<tool_call>(.*?)</tool_call>"
  - "(?s)<tool_call>(.*?)"
  replace_llm_results:
  # Drop the scratchpad content from responses
  - key: "(?s)<scratchpad>.*</scratchpad>"
    value: ""
  replace_function_results:
  # Replace everything that is not JSON array or object
  - key: '(?s)^[^{\[]*'
    value: ""
  - key: '(?s)[^}\]]*$'
    value: ""
  - key: "'([^']*?)'"
    value: "_DQUOTE_${1}_DQUOTE_"
  - key: '\\"'
    value: "__TEMP_QUOTE__"
  - key: "\'"
    value: "'"
  - key: "_DQUOTE_"
    value: '"'
  - key: "__TEMP_QUOTE__"
    value: '"'
  # Drop the scratchpad content from responses
  - key: "(?s)<scratchpad>.*</scratchpad>"
    value: ""
template:
  chat: |
    {{.Input -}}
    <|im_start|>assistant
  chat_message: |
    <|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}}
    {{- if .FunctionCall }}
    <tool_call>
    {{- else if eq .RoleName "tool" }}
    <tool_response>
    {{- end }}
    {{- if .Content}}
    {{.Content }}
    {{- end }}
    {{- if .FunctionCall}}
    {{toJson .FunctionCall}}
    {{- end }}
    {{- if .FunctionCall }}
    </tool_call>
    {{- else if eq .RoleName "tool" }}
    </tool_response>
    {{- end }}<|im_end|>
  completion: |
    {{.Input}}
  function: |-
    <|im_start|>system
    You are a function calling AI model.
    Here are the available tools:
    <tools>
    {{range .Functions}}
    {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }}
    {{end}}
    </tools>
    You should call the tools provided to you sequentially
    Please use <scratchpad> XML tags to record your reasoning and planning before you call the functions as follows:
    <scratchpad>
    {step-by-step reasoning and plan in bullet points}
    </scratchpad>
    For each function call return a json object with function name and arguments within <tool_call> XML tags as follows:
    <tool_call>
    {"arguments": <args-dict>, "name": <function-name>}
    </tool_call><|im_end|>
    {{.Input -}}
    <|im_start|>assistant
```

To note: I've updated the Hermes models in the gallery with the mixed JSON grammar support that was introduced in 2.16.0, and feedback is welcome. You can also now find two new models in the model gallery that are fine-tuned to fully leverage LocalAI's JSON grammar support: |
LocalAI version:
quay.io/go-skynet/local-ai:master-sycl-f16-ffmpeg
Environment, CPU architecture, OS, and Version:
Docker on Proxmox LXC with iGPU pass-through; i3-N300, 32GB RAM; LXC: 6 cores, 16GB RAM
Describe the bug
Using the Extended OpenAI Conversation integration by @jekalmin, LocalAI crashes when generating a function call.
Model used: fakezeta/Phi3-openvino-int8 (thank you @fakezeta!!)
NOTE: with the function removed completely, the assistant works well for conversation.
To Reproduce
Once the entities list and state are specified, asking to turn on a light makes LocalAI crash.
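A minimal request body of the kind the integration sends - a chat completion with a function attached - looks roughly like this. The function spec below is a simplified illustration, not the exact payload from the logs; only the model name is taken from this report:

```json
{
  "model": "fakezeta/Phi3-openvino-int8",
  "messages": [
    {"role": "user", "content": "Turn on the office light"}
  ],
  "functions": [
    {
      "name": "execute_services",
      "description": "Execute a Home Assistant service call",
      "parameters": {
        "type": "object",
        "properties": {
          "domain": {"type": "string"},
          "service": {"type": "string"},
          "service_data": {"type": "object"}
        },
        "required": ["domain", "service"]
      }
    }
  ],
  "function_call": "auto"
}
```

Posting a body like this to LocalAI's OpenAI-compatible `/v1/chat/completions` endpoint exercises the same function-to-grammar code path that panics.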
Expected behavior
Home Assistant, using the Extended OpenAI Conversation integration, produces HA-compatible function calls.
Logs
Additional context