CLAP Fine-tuning has run into a problem #30795
Comments
Hey @ScottishFold007, in your data collator, the line `#is_longer_batch = self.processor.tokenizer.pad(is_longer_features, return_tensors="pt")` should probably not be commented!
When this line of code is not commented out, this is the error that appears:
```
/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
/tmp/ipykernel_1338324/128232537.py in __call__(self, features)
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py in pad(self, encoded_inputs, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, verbose)
AttributeError: 'list' object has no attribute 'keys'
```
According to the CLAP docs, what you can do is probably something like this:

```python
is_longer_features = [feature["is_longer"] for feature in features]
is_longer_features = torch.tensor(is_longer_features)[..., None]
```

which basically creates a tensor and adds an extra dimension!
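Put together, the suggestion above can be sketched as follows (the `features` batch here is a hypothetical example, not the original dataset; the `is_longer` flags come from `ClapFeatureExtractor` when `enable_fusion=True`):

```python
import torch

# Hypothetical batch of dataset examples; each carries the boolean
# "is_longer" flag produced by the fused CLAP feature extractor.
features = [{"is_longer": True}, {"is_longer": False}]

# Collect the per-example flags and add a trailing dimension, so the
# collator hands the model a (batch_size, 1) bool tensor instead of a list.
is_longer_features = [feature["is_longer"] for feature in features]
is_longer = torch.tensor(is_longer_features)[..., None]

print(is_longer.shape)  # torch.Size([2, 1])
```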
The following error was reported again:

```
RuntimeError: Caught RuntimeError in replica 0 on device 0.
```
Hey @ScottishFold007, could you use
System Info
`transformers` version: 4.39.3

Who can help?
@sanchit-gandhi @ylacombe @younesbelkada
Information

Tasks

- An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
I'm trying to fine-tune CLAP, but I'm running into some problems with it; I've previously referenced the solution in #26864.
Here is my code:
load data
load model
process data:
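The data-processing code itself was elided above; a minimal sketch of a collator in the spirit of the one being discussed (the class name and field handling are my assumptions, not the original code — the field names follow `ClapProcessor` output):

```python
import torch
from dataclasses import dataclass


@dataclass
class ClapDataCollator:
    """Hypothetical collator that stacks pre-computed CLAP features.

    Assumes each example already holds fixed-size arrays produced by
    ClapProcessor (input_features, input_ids, attention_mask) plus the
    boolean is_longer flag from the fused feature extractor.
    """

    def __call__(self, features):
        return {
            "input_features": torch.stack(
                [torch.as_tensor(f["input_features"]) for f in features]
            ),
            "input_ids": torch.stack(
                [torch.as_tensor(f["input_ids"]) for f in features]
            ),
            "attention_mask": torch.stack(
                [torch.as_tensor(f["attention_mask"]) for f in features]
            ),
            # Keep is_longer as a (batch_size, 1) bool tensor, not a list,
            # so the model's `.to(device)` call on it succeeds.
            "is_longer": torch.tensor(
                [f["is_longer"] for f in features]
            )[..., None],
        }
```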
Then the following error occurs:
```
AttributeError: Caught AttributeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/parallel/parallel_apply.py", line 83, in _worker
    output = module(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clap/modeling_clap.py", line 2094, in forward
    audio_outputs = self.audio_model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clap/modeling_clap.py", line 1742, in forward
    return self.audio_encoder(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clap/modeling_clap.py", line 913, in forward
    is_longer_list = is_longer.to(input_features.device)
AttributeError: 'list' object has no attribute 'to'
```
I'm having a lot of problems with the `enable_fusion=True` mode, and I don't seem to have a good grasp of how the `is_longer` input should be handled, so I'd appreciate your pointers on this piece, thanks!
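The traceback above boils down to `is_longer` arriving at the model as a plain Python list; the failure mode and its fix can be reproduced in isolation, without loading the model (a sketch; the `.to(...)` call mirrors what `modeling_clap.py` does with `is_longer`):

```python
import torch

is_longer = [True, False]  # what a collator that skips conversion passes through

# The CLAP audio encoder effectively calls `is_longer.to(input_features.device)`,
# which fails on a plain Python list:
err = None
try:
    is_longer.to(torch.device("cpu"))
except AttributeError as exc:
    err = exc
print(err)  # 'list' object has no attribute 'to'

# Converting to a tensor in the collator makes the same call succeed:
is_longer_tensor = torch.tensor(is_longer)[..., None]
moved = is_longer_tensor.to(torch.device("cpu"))
print(moved.shape)  # torch.Size([2, 1])
```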
Expected behavior
The model should train normally.