
checkpoint str has no attribute 'get' #3444

Open
antmikinka opened this issue May 1, 2024 · 6 comments
Labels
module: coreml Issues related to Apple's Core ML delegation

Comments

@antmikinka

I was following the Llama 2 7B guide, but the consensus was that I didn't have enough RAM, among other issues.
I then tried the stories110M guide, which worked all the way until I went to test it.
I seem to remember lm_eval not being installed (that's what my terminal said); I'm not sure if that could be causing anything.
I am trying to evaluate model accuracy, and that is where this error is stemming from.

File I am using to save the .pte:

```python
import torch

with open("llama2_coreml_all.pte", 'wb') as file:
    torch.save('f', "llama2_coreml_all.pte", _use_new_zipfile_serialization=True)
```
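Note that the snippet above passes the literal string `'f'` as the object to save, so no model weights end up in the file. As a point of reference (my note, not part of the original report), a minimal sketch of how `torch.save`/`torch.load` normally round-trip a dict-style checkpoint, so that later `checkpoint.get(...)` calls work, assuming a plain state dict is what the eval script expects (the key name below is illustrative):

```python
import torch

# Save a dict-style checkpoint (a state dict), not a bare string.
state_dict = {"tok_embeddings.weight": torch.zeros(4, 8)}
torch.save(state_dict, "checkpoint.pth", _use_new_zipfile_serialization=True)

# Loading it back yields a dict, so .get() works as model.py expects.
checkpoint = torch.load("checkpoint.pth")
print(type(checkpoint).__name__)  # dict
print(checkpoint.get("tok_embeddings.weight").shape)
```

This is separate from the `.pte` file: a `.pte` is a serialized ExecuTorch program, not a PyTorch checkpoint, which is likely why passing it where a checkpoint is expected misbehaves.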

Script and terminal info:

```
❯ python -m examples.models.llama2.eval_llama -c llama2_coreml_all.pte -p params.json -t tokenizer.model -d fp32 --max_seq_len 512 --limit 100
Could not import fairseq2 modules.
2024-04-30:23:20:24,518 INFO     [builder.py:80] Loading model with checkpoint=llama2_coreml_all.pte, params=params.json, use_kv_cache=False, weight_type=WeightType.LLAMA
Traceback (most recent call last):
  File "/opt/anaconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/anaconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama.py", line 29, in <module>
    main()  # pragma: no cover
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama.py", line 25, in main
    eval_llama(modelname, args)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama_lib.py", line 261, in eval_llama
    eval_wrapper = gen_eval_wrapper(model_name, args)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/eval_llama_lib.py", line 209, in gen_eval_wrapper
    manager: LlamaEdgeManager = _prepare_for_llama_export(model_name, args)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/export_llama_lib.py", line 629, in _prepare_for_llama_export
    load_llama_model(
  File "/Users/anthonymikinka/executorch/examples/models/llama2/builder.py", line 83, in load_llama_model
    model, example_inputs, _ = EagerModelFactory.create_model(
  File "/Users/anthonymikinka/executorch/examples/models/model_factory.py", line 44, in create_model
    model = model_class(**kwargs)
  File "/Users/anthonymikinka/executorch/examples/models/llama2/model.py", line 84, in __init__
    if (not fairseq2_checkpoint) and checkpoint.get(
AttributeError: 'str' object has no attribute 'get'
```
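For context (my note, not part of the original report): the log line `checkpoint=llama2_coreml_all.pte` suggests the path string passed via `-c` reached `model.py` unparsed, where `checkpoint.get(...)` expects a dict-like state dict. Calling `.get` on a `str` reproduces the exact error:

```python
# The checkpoint variable holds a path string instead of a loaded state dict.
checkpoint = "llama2_coreml_all.pte"

try:
    checkpoint.get("output.weight")  # .get() exists on dicts, not on str
except AttributeError as err:
    print(err)  # e.g. 'str' object has no attribute 'get'
```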
@cccclai
Contributor

cccclai commented May 1, 2024

@Jack-Khuu is the on-device evaluation ready?

Edit: Actually, Core ML should be able to run on Mac too. @antmikinka, are you looking for on-device evaluation, or just to evaluate the Core ML model either on Mac or iPhone?

@antmikinka
Author

antmikinka commented May 2, 2024

@cccclai

Yes, I'm trying to see an evaluation for the model on the Mac. I would like to put the model on my iPhone (iPhone 13 Pro) as well.

I was trying to determine what hardware (cpu/gpu/ane) was being utilized to compute the model.

@cccclai cccclai added the module: coreml Issues related to Apple's Core ML delegation label May 2, 2024
@DawerG

DawerG commented May 2, 2024

> Could not import fairseq2 modules.

This seems like an issue with the executorch setup.

@Jack-Khuu
Contributor

> @Jack-Khuu is the on-device evaluation ready?

Eval is ready, but this error doesn't seem to be related to eval; it fails during load_llama_model, prior to eval.
I'll try to narrow it down and loop in core.

@cccclai
Contributor

cccclai commented May 3, 2024

I think it's related to how we expect eval to work with a delegated model, in this case Core ML.

@Jack-Khuu
Contributor

Just as an update so this doesn't go stale: investigating Core ML eval is on our plate.

Will update as things flesh out.
