
ERROR: UtilGetPpid:1293: Failed to parse #396

Open
MicahZoltu opened this issue May 4, 2024 · 2 comments

Comments


MicahZoltu commented May 4, 2024

Environment:
Docker 24.0.7 on Ubuntu in WSL 2.1.5.0 on Windows 11

  1. Create Dockerfile.llama:
    FROM ubuntu:24.04@sha256:3f85b7caad41a95462cf5b787d8a04604c8262cdcdf9a472b8c52ef83375fe15
    
    RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
    
    WORKDIR /workspace
    RUN curl -L -o Meta-Llama-3-70B-Instruct.Q4_0.llamafile 'https://huggingface.co/jartine/Meta-Llama-3-70B-Instruct-llamafile/resolve/main/Meta-Llama-3-70B-Instruct.Q4_0.llamafile?download=true'
    RUN chmod 755 Meta-Llama-3-70B-Instruct.Q4_0.llamafile
    
    ENTRYPOINT [ "./Meta-Llama-3-70B-Instruct.Q4_0.llamafile" ]
    
  2. docker image build --file Dockerfile.llama --tag llama .
  3. docker container run --rm -it llama
  4. Notice it exits with an error and nothing is logged (odd).
  5. docker container run --rm -it --entrypoint /bin/bash llama
  6. ./Meta-Llama-3-70B-Instruct.Q4_0.llamafile from inside the docker container.
  7. Notice the following error:

    <3>WSL (9) ERROR: UtilGetPpid:1293: Failed to parse: /proc/1/stat, content: 1 (bash) S 0 1 1 34816 9 4194560 641 230 13 3 1 0 0 0 20 0 1 0 2551832 4698112 959 18446744073709551615 94031756591104 94031757566765 140730947372672 0 0 0 65536 3686404 1266761467 1 0 0 17 3 0 0 0 0 0 94031757789616 94031757838256 94031777792000 140730947374966 140730947374976 140730947374976 140730947375086 0
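For reference, the quoted line does follow the proc(5) stat format: field 1 is the pid, field 2 (comm) is parenthesized and may itself contain spaces, field 3 is the state, and field 4 is the ppid. A minimal illustrative parser (my own sketch, not WSL's code) shows the content is well-formed:

```python
# Minimal /proc/[pid]/stat parser per proc(5). The comm field is wrapped in
# parentheses and can contain spaces or parentheses, so a naive whitespace
# split is wrong; split around the LAST ')' instead.
def parse_stat(line: str) -> dict:
    lparen = line.index("(")
    rparen = line.rindex(")")
    pid = int(line[:lparen].strip())
    comm = line[lparen + 1:rparen]
    rest = line[rparen + 1:].split()
    return {
        "pid": pid,
        "comm": comm,
        "state": rest[0],       # field 3: process state
        "ppid": int(rest[1]),   # field 4: parent pid
    }

# First few fields of the content quoted in the error above:
stat_line = "1 (bash) S 0 1 1 34816 9 4194560"
print(parse_stat(stat_line))
# -> {'pid': 1, 'comm': 'bash', 'state': 'S', 'ppid': 0}
```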

I get the same error if I use alpine as the base image. The error seems to come from inside the container, yet it is prefixed with WSL, which the container shouldn't have any awareness of. I couldn't find any information online about UtilGetPpid either. The problem could be with any of the following, and I am open to someone helping me figure out which is at fault!

  • Docker
  • WSL
  • Llamafile
  • Meta-Llama-3-70B-Instruct-Q4 Llamafile
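One hypothesis worth testing (an assumption on my part, not something confirmed in this thread): llamafiles are APE (Actually Portable Executable) polyglots whose leading bytes double as both a DOS MZ header and a shell script, so launching one through /bin/sh would keep any MZ-matching binfmt_misc handler from ever seeing the file. A hypothetical Dockerfile change to try:

```dockerfile
# Assumed workaround (untested hypothesis): run the llamafile through
# /bin/sh so binfmt_misc never matches its MZ header; APE binaries are
# also valid shell scripts that re-exec themselves.
ENTRYPOINT [ "/bin/sh", "./Meta-Llama-3-70B-Instruct.Q4_0.llamafile" ]
```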
MicahZoltu (Author)

I get the same error with TinyLlama-1.1B:

<3>WSL (9) ERROR: UtilGetPpid:1293: Failed to parse: /proc/1/stat, content: 1 (bash) S 0 1 1 34816 9 4194560 741 235 19 4 1 0 0 0 20 0 1 0 2632973 4698112 962 18446744073709551615 94463164870656 94463165846317 140734858901344 0 0 0 65536 3686404 1266761467 1 0 0 17 1 0 0 0 0 0 94463166069168 94463166117808 94463196512256 140734858903387 140734858903397 140734858903397 140734858903534 0

This suggests the problem isn't specific to any one llamafile, but rather a more general problem with running llamafiles in Docker, possibly involving WSL.

MicahZoltu (Author)

Possibly related, and the only other place I can find useful details about this error:
microsoft/WSL#10073

Perhaps someone who better understands how these llamafiles work can glean insight from the discussion in that issue, which touches on how binaries are launched within Docker.
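The binfmt_misc angle from that discussion can be made concrete: the kernel matches a registered entry's `magic` bytes at `offset` against a binary's header, and WSL is known to register an interop handler keyed on the Windows MZ magic, which APE-format llamafiles also begin with. A small sketch of that matching logic (the WSLInterop entry text below is an illustrative assumption modeled on that discussion, not dumped from this system):

```python
def binfmt_matches(entry: str, header: bytes) -> bool:
    """Check whether a /proc/sys/fs/binfmt_misc-style entry's magic/offset
    would claim a binary with the given leading bytes."""
    fields = dict(
        line.split(" ", 1)
        for line in entry.splitlines()
        if " " in line
    )
    magic = bytes.fromhex(fields.get("magic", ""))
    offset = int(fields.get("offset", "0"))
    return header[offset:offset + len(magic)] == magic

# Illustrative WSLInterop-style entry (assumed shape): magic 4d5a is "MZ".
wsl_interop = "interpreter /init\nflags: PF\noffset 0\nmagic 4d5a\n"
# APE llamafiles start with the DOS 'MZ' magic, so such a rule would
# claim them; an ELF binary would not match.
print(binfmt_matches(wsl_interop, b"MZqFpD"))   # -> True
print(binfmt_matches(wsl_interop, b"\x7fELF"))  # -> False
```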
