llama_new_context_with_model: n_ctx = 2048
llama_new_context_with_model: n_batch = 2048
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 384.00 MiB
llama_new_context_with_model: KV self size = 384.00 MiB, K (f16): 192.00 MiB, V (f16): 192.00 MiB
llama_new_context_with_model: CPU output buffer size = 0.20 MiB
llama_new_context_with_model: CPU compute buffer size = 160.01 MiB
llama_new_context_with_model: graph nodes = 921
llama_new_context_with_model: graph splits = 1
fish: Job 1, './moondream2-q8.llamafile -ngl…' terminated by signal SIGILL (Illegal instruction)
And here's what I got for my CPU and the AI model:
system_info":"AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LAMMAFILE = 1 | ","tid":"9430432","timestamp":1714936052,"total_threads":8}
{"function":"load_model","level":"INFO","line":432,"msg":"Multi Modal Mode Enabled","tid":"9430432","timestamp":1714936052}
clip_model_load: model name: vikhyatk/moondream2
clip_model_load: description: image encoder for vikhyatk/moondream2
clip_model_load: GGUF version: 3
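The `system_info` line above points at a likely cause: AVX2, FMA, and F16C are all reported as 0, and a SIGILL usually means the binary executed an instruction the CPU does not support. As a minimal sketch (plain Python; the function name is mine, not part of llamafile), one can parse that flag list and print which features this CPU is missing:

```python
# Parse a llamafile/llama.cpp system_info flag string such as the one in
# this issue, and list the SIMD features reported as unavailable (= 0).
def parse_system_info(line: str) -> dict:
    feats = {}
    for item in line.split("|"):
        if "=" in item:
            key, _, val = item.partition("=")
            feats[key.strip()] = val.strip() == "1"
    return feats

# Abbreviated excerpt of the system_info string from the log above.
info = "AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | FMA = 0 | F16C = 0 | SSE3 = 1 | SSSE3 = 1"
missing = [k for k, v in parse_system_info(info).items() if not v]
print(missing)  # → ['AVX_VNNI', 'AVX2', 'FMA', 'F16C']
```

If the binary was built assuming AVX2/FMA, running it on this CPU (AVX only) would produce exactly this kind of illegal-instruction crash.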