
Run as daemon / background process #362

Open
JLopeDeB opened this issue Apr 24, 2024 · 1 comment

Comments

@JLopeDeB
Hello, and thanks for this awesome project.

I successfully run llamafile models on a server, but the process is tied to the SSH session I start it from. I'd like to know whether the models can be executed as background processes, and if so, what a good option for doing that would be. I browsed the issues and docs but couldn't find an option for it.
The goal is to have an LLM service running on a server and consume it via a REST API, but I'm afraid the project might not be intended for that purpose.

Hope you can help me, thanks!

@cjpais
Contributor

cjpais commented Apr 24, 2024

There are many ways to do this. You could use a terminal multiplexer like screen or tmux, or run the command as normal with an & appended to send it to the background: ./llamafile <params> &
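As a concrete sketch of the backgrounding approach: combining & with nohup lets the process survive the SSH session ending. The flags below (--server, --nobrowser, --port) are assumptions based on typical llamafile server usage; substitute your own model file and parameters.

```shell
# Sketch: detach llamafile from the SSH session so it survives logout.
# Assumed invocation -- replace ./llamafile and the flags with your own setup.
nohup ./llamafile --server --nobrowser --port 8080 > llamafile.log 2>&1 &
echo $! > llamafile.pid   # record the PID so the server can be stopped later

# To stop the server later:
# kill "$(cat llamafile.pid)"
```

For a long-lived service it may be cleaner to wrap the same command in a systemd unit (or a screen/tmux session you can reattach to), so the server restarts on boot and logs go to the journal.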
