
Templates cannot be outside models directory, models can #2201

Open
Prowler1000 opened this issue Apr 30, 2024 · 2 comments
Labels
bug Something isn't working unconfirmed

Comments

@Prowler1000

LocalAI version:
v2.13.0

Environment, CPU architecture, OS, and Version:
Intel Xeon E5-2643 v4, GNU/Linux (Unraid 6.12.10)

Describe the bug
Templates fail to load when located outside the models directory (specified with the LOCALAI_MODELS_PATH environment variable); models, however, can be loaded just fine.

To Reproduce

  • Create 3 directories in any directory (for instance, in my specific case: /models/config, /models/models, /models/templates).
  • Set LOCALAI_MODELS_PATH to /models/config
  • Create a yaml config in /models/config pointing to a model file in /models/models and a template in /models/templates (see the sketch below).
  • Try to use the model
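
For illustration, a minimal config of the kind I mean might look like this (all names here are placeholders, not my actual files):

name: example-model
parameters:
  model: ../models/example/example.Q8_0.gguf
template:
  chat: ../templates/example/chat
  chat_message: ../templates/example/chat_message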

Expected behavior
Either no errors or total failure, not a combination where the model loads but its templates do not.

Additional context
This also seems to be an undocumented change that breaks existing installs, depending on their file structure.

@Prowler1000 added the bug (Something isn't working) and unconfirmed labels on Apr 30, 2024
@localai-bot
Contributor

Hello Prowler1000,

Thank you for bringing this issue to our attention in the LocalAI project. We are sorry for any inconvenience this might be causing.

From your description and provided details, it seems that templates cannot be loaded when placed outside the models directory, despite the fact that models can be loaded just fine when located in different directories. This seems to have introduced an unexpected behavior change and might be affecting existing installations that rely on a certain file structure.

To further investigate this issue, we would like to ask you to provide more details on the specific configurations you are using and any error messages you are encountering when trying to use the model. This information will be crucial in helping us understand the root cause of the problem and come up with an appropriate solution.

In the meantime, as a workaround, we recommend keeping the templates within the models directory, which restores the previous behavior.
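
A minimal sketch of that workaround (paths are placeholders; this assumes the .tmpl files are copied into the directory that LOCALAI_MODELS_PATH points at and are referenced relative to it):

template:
  chat: my-templates/chat
  chat_message: my-templates/chat_message
  completion: my-templates/completion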

Please update the issue with the additional information, and we will review it as soon as possible. So that we can address this issue efficiently, please include 'resolved' in your ticket title once it is fixed.

As a bot, this is just one of many experiments @mudler is conducting, so please bear with us as we make improvements.

Thanks for your patience and cooperation.

Best regards,
The LocalAI team 🤖


  • LocalAI version: v2.13.0
  • Environment, CPU architecture, OS, and Version: Intel Xeon E5-2643 v4, GNU/Linux (Unraid 6.12.10)
  • GitHub issue: #2201
  • Labels: [bug unconfirmed]

@Prowler1000
Author

This is one of my config yamls from before the update

Llama-3-HOAS.yaml:

name: Llama3-HOAS
mmap: false
parameters:
  model: ../models/Llama3/Meta-Llama-3-8B-Instruct.Q8_0.gguf
  
debug: true
prompt_cache_path: "../cache/Llama3-8b.cache"
prompt_cache_all: true
prompt_cache_ro: false
backend: llama-cpp

threads: 16
embeddings: true

mlock: true

template:
  chat_message: ../templates/Llama3-HOAS/chat_message
  chat: ../templates/Llama3-HOAS/chat
  function: ../templates/Llama3-HOAS/function
  completion: ../templates/Llama3-HOAS/completion
context_size: 8192
stopwords:
- <|eot_id|>

With the file structure as follows:

/
  models/
      config/
          Llama-3-HOAS.yaml
      models/
          Llama3/
              Meta-Llama-3-8B-Instruct.Q8_0.gguf
      templates/
          Llama3-HOAS/
              chat.tmpl
              chat_message.tmpl
              completion.tmpl
              function.tmpl
