# rLLM for llama.cpp

This is similar to the CUDA-based rLLM (rllm-cuda), but built on top of llama.cpp.

## Building

If you're not using the supplied Docker container, follow the build setup instructions first.

To build and start aicirt and then the rllm server, run:

```
./server.sh phi2
```

Run `./server.sh --help` for more options.

You can also try passing `--cuda` before `phi2`, which enables cuBLAS in llama.cpp. Note that this is different from rllm-cuda, which may give you better performance for batched inference.
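For example, a cuBLAS-enabled invocation based on the flag described above would look like this:

```
./server.sh --cuda phi2
```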