
Support for subquadratic attention methods such as Linear Attention #107

Closed
kabachuha opened this issue Mar 18, 2024 · 1 comment
Labels: enhancement (New feature or request)

Comments

@kabachuha

Hello!

As you probably know, there are developments proposing to move away from the traditional transformer attention architecture because of its quadratic context cost. While approaches such as Mamba are rather exotic and may be too complicated to fit into existing pipelines such as ControlNet-Transformer, other sub-quadratic alternatives have been proposed recently. One example is ReBased (Linear Transformers with Learnable Kernels, https://github.com/corl-team/rebased), which seems to fare better than Mamba.
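For context, here is a minimal, non-causal sketch of the generic kernelized linear-attention idea (feature map φ(x) = elu(x) + 1, as in Katharopoulos et al. 2020). This is only an illustration of how the O(N²) softmax matrix is avoided, not ReBased's learnable kernel and not the project's actual attention module:

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized (linear) attention sketch.

    q, k: (batch, heads, seq_len, dim), v: (batch, heads, seq_len, dim_v).
    Cost is O(N * d * d_v) because keys/values are aggregated before
    interacting with the queries, instead of forming the N x N matrix.
    """
    # Positive feature map phi(x) = elu(x) + 1
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    # Aggregate keys and values first: (b, h, d, d_v)
    kv = torch.einsum('bhnd,bhne->bhde', k, v)
    # Normalizer: q . sum_n(k), shape (b, h, n)
    z = 1.0 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps)
    # Combine: (b, h, n, d_v)
    return torch.einsum('bhnd,bhde,bhn->bhne', q, kv, z)

# Hypothetical usage for illustration only
q = torch.randn(1, 8, 1024, 64)
k = torch.randn(1, 8, 1024, 64)
v = torch.randn(1, 8, 1024, 64)
out = linear_attention(q, k, v)  # (1, 8, 1024, 64)
```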

It may also be worth taking a look at the Large World Model's ring attention (https://github.com/lucidrains/ring-attention-pytorch), which lets it extend its context window to millions of tokens while reliably passing the needle-in-the-haystack test.

Here's my implementation for Latte: Vchitect/Latte#51

@zhengzangw
Collaborator

We will look into this, but not soon, as there are tasks with higher priority.
