I briefly changed my Hugging Face token (via huggingface-cli login) to one that accidentally lacked permission to download the Llama 2 model. Model downloads stopped working, BUT a successful download was falsely reported:
$ tune download meta-llama/Llama-2-7b-hf --output-dir /tmp/checkpoints2
Ignoring files matching the following patterns: *.safetensors
Successfully downloaded model repo and wrote to the following locations:
That was the entire output (no locations were listed), and running tree /tmp/checkpoints2 showed the directory was still empty.
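For what it's worth, a defensive post-download check along these lines would have caught this failure mode. This is just a sketch; the helper name is illustrative and not part of torchtune:

```python
import os
import sys


def check_download_wrote_files(output_dir):
    # Illustrative sanity check: if the output directory is missing or
    # empty after a "successful" download, report an error and exit with
    # a non-zero status instead of claiming success.
    if not os.path.isdir(output_dir) or not os.listdir(output_dir):
        print(f"error: no files were written to {output_dir}", file=sys.stderr)
        sys.exit(1)
```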
Expected behavior:
Report an error and exit with a non-zero status code when the download fails due to Hugging Face token permissions.
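A minimal sketch of what that could look like, assuming the CLI wraps huggingface_hub's snapshot_download. The function name download_or_fail and the injected downloader argument are hypothetical, not torchtune's actual API:

```python
import sys

from huggingface_hub import snapshot_download
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError


def download_or_fail(repo_id, output_dir, downloader=snapshot_download):
    # Hypothetical wrapper: convert permission / missing-repo errors from
    # huggingface_hub into an error message on stderr and a non-zero exit
    # code, rather than falling through to a success message.
    try:
        downloader(repo_id, local_dir=output_dir, ignore_patterns="*.safetensors")
    except (GatedRepoError, RepositoryNotFoundError) as e:
        print(f"error: could not download {repo_id}: {e}", file=sys.stderr)
        sys.exit(1)
```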
It looks like Torchtune defers to huggingface_hub to download checkpoints and handles errors from huggingface_hub.snapshot_download. Would it be worth trying the method directly to see what happens? I've included an example below for the command you shared, which assumes you've run huggingface-cli login. I haven't managed to reproduce your error, unfortunately.
import os

from huggingface_hub import snapshot_download

snapshot_download(
    "meta-llama/Llama-2-7b-hf",
    local_dir="/tmp/checkpoints2",
    local_dir_use_symlinks="auto",
    ignore_patterns="*.safetensors",
    token=os.getenv("HF_TOKEN", None),
)
For me, trying to access meta-llama/Meta-Llama-3-8B, which I don't have access to, outputs:
Cannot access gated repo for url https://huggingface.co/api/models/meta-llama/Meta-Llama-3-8B/revision/main.
Access to model meta-llama/Meta-Llama-3-8B is restricted and you are not in the authorized list. Visit https://huggingface.co/meta-llama/Meta-Llama-3-8B to ask for access.
EDIT: One more thing worth trying: run huggingface-cli logout and then log in again, just in case something odd is cached there.