Replies: 1 comment 1 reply
@matttrent thanks for bringing this up. There was a recent feature request about this, which can be found here: #559. Please make sure to comment on that feature request so that I can better prioritize it! Thanks for participating in the Smart Connections community.
I came across the Smart Connections plugin last week and have been enjoying exploring my vault in a new way. I upgraded to v2.1, connected it to my local Ollama server, and tested local chat with the new Llama 3 model. It's quite impressive.
With the new local server support in v2.1 for chat, would it be possible to use the same local server for generating embeddings? I'd prefer to keep everything local, and the default CPU-based embedding generation is pretty slow and locks up my Obsidian instance for some time.
I'd love to use the nomic-embed-text model for embeddings and have it run fast via GPU inference in Ollama. It seems like all the parts are implemented to enable this, but they aren't connected in code or exposed in the settings. Any chance this could be added as an option?
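For reference, here is a minimal sketch of what calling Ollama's local embeddings endpoint looks like. The endpoint path and request fields (`model`, `prompt`) follow Ollama's documented REST API; the `OLLAMA_URL` constant and the `embed`/`buildEmbeddingRequest` helpers are hypothetical names for illustration, not Smart Connections code:

```typescript
// Hedged sketch: requesting an embedding from a local Ollama server.
// Default Ollama port is 11434; adjust if your server runs elsewhere.
const OLLAMA_URL = "http://localhost:11434/api/embeddings";

// Build the fetch options for an embeddings request (pure, easy to test).
function buildEmbeddingRequest(model: string, prompt: string) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt }),
  };
}

// Send the request and return the embedding vector from the response.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(
    OLLAMA_URL,
    buildEmbeddingRequest("nomic-embed-text", text),
  );
  const { embedding } = await res.json();
  return embedding;
}
```

Since the v2.1 chat feature already speaks to a local server, wiring a setting that points embedding generation at this endpoint seems like it would reuse most of the existing plumbing.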