
WIP: add support for byt5 and mt5 #1680

Draft
wants to merge 1 commit into main

Conversation

sam-writer

closes #1655

Hi, sorry to open a WIP PR, but I needed to run byt5 in TRT, which meant supporting newer versions of transformers. Most of the changes are in the notebook, and I am not sure if the style is in line with your vision. My goal was to support all versions of t5, including mt5 and byt5 and community-made fine-tuned checkpoints. That made it impossible to pre-register all the model names and sizes.

Happy to make any requested modifications.

@sam-writer sam-writer marked this pull request as draft December 22, 2021 22:10
Signed-off-by: sam-writer <sam.havens@writer.com>
@sam-writer
Author

In the section Full model inference benchmark, if I set use_cuda=True I sometimes get errors like:

[2021-12-22 21:57:39,889][OSS][WARNING] Unable to execute program using cuda compatible device: The expanded size of the tensor (63) must match the existing size (64) at non-singleton dimension 0. Target sizes: [63]. Tensor sizes: [64]
[2021-12-22 21:57:39,890][OSS][WARNING] Retrying using CPU only.

I am not sure whether these errors were already present, since use_cuda was previously False.

@sam-writer
Author

Also, AFAICT, the TRT implementation of T5's decoder doesn't use past_key_values, which according to the HuggingFace docs speed up decoding.

I assume this is because it would make the decoder variadic, and like ORT, TRT does not like this. If that is the case, could we do something like fastT5 and have 2 decoders: one for the first step, which computes past_key_values but does not use them, and then a decoder for all subsequent steps, which takes and returns past_key_values?
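The two-decoder split described above can be sketched in toy form: a "first step" decoder that produces `past_key_values` without consuming them, and a "subsequent step" decoder with a fixed signature that both takes and returns the cache, so neither engine is variadic. All names and the "network" are illustrative stand-ins, not the actual TRT or fastT5 API:

```python
# Toy sketch of the fastT5-style two-decoder pattern. Real engines would
# operate on tensors; here lists stand in for the cached key/value projections.

def decoder_init(token):
    """First decoding step: no cache in, cache out."""
    past_key_values = [token]   # cache the step-1 projections
    logits = token * 2          # stand-in for the real network
    return logits, past_key_values

def decoder_with_past(token, past_key_values):
    """All later steps: fixed signature, cache in and cache out."""
    past_key_values = past_key_values + [token]
    logits = token * 2
    return logits, past_key_values

# Greedy-style loop dispatching between the two "engines"
logits, cache = decoder_init(1)
for step in range(2, 5):
    logits, cache = decoder_with_past(step, cache)
print(cache)  # [1, 2, 3, 4]
```

The payoff is that each later step only processes the newest token instead of re-running attention over the whole prefix.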

@sam-writer
Author

@ttyio are you a good person to ping about this?

@ttyio ttyio requested a review from rajeevsrao January 7, 2022 09:36
@ttyio
Collaborator

ttyio commented Jan 7, 2022

@ttyio are you a good person to ping about this?

Thanks for the contribution, but sorry, I am not familiar with this sample. Adding @rajeevsrao to review, thanks!

Development

Successfully merging this pull request may close these issues.

Huggingface demos use an older version (4.6.1) of transformers, which has a bug in the implementation of t5
2 participants