Issues: facebookresearch/xformers
memory_efficient_attention: torch.compile compatibility
#920 opened Nov 9, 2023 by achalddave · Open · 3 comments
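For context, a minimal sketch of the pattern this issue is about: calling xops.memory_efficient_attention inside a torch.compile'd function. The shapes and the toy function are illustrative assumptions; running it requires a CUDA GPU with xformers installed.

```python
# Sketch of the usage under discussion: xformers attention inside a
# torch.compile'd function. Shapes are illustrative assumptions.
import torch
import xformers.ops as xops

def attention(q, k, v):
    # xformers expects [batch, seqlen, heads, head_dim] (BMHK) layout
    return xops.memory_efficient_attention(q, k, v)

compiled = torch.compile(attention)

q, k, v = (torch.randn(2, 128, 8, 64, device="cuda", dtype=torch.float16)
           for _ in range(3))
out = compiled(q, k, v)  # the compatibility question in this issue arises here
print(out.shape)  # torch.Size([2, 128, 8, 64])
```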
Error installing xformers on Google Colab connected to a GPU runtime
#1049 opened May 20, 2024 by UsamaSaddiqu
Request for a tutorial on how to modify an attention processor into its xformers version
#1047 opened May 16, 2024 by JWargrave
Bucketing strategy in Triton kernels of sequence-parallel fused operators
#1037 opened Apr 26, 2024 by Fanoid
Poor performance of sequence-parallel fused kernels in real model training
#1036 opened Apr 26, 2024 by Fanoid
Importing xformers.ops implicitly initializes the CUDA context
#1030 opened Apr 20, 2024 by function2-llx
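The reported side effect can be observed with torch.cuda.is_initialized(); a minimal sketch, assuming it is run as a fresh process so nothing else has touched CUDA before the import:

```python
import torch

print(torch.cuda.is_initialized())  # False in a fresh process

import xformers.ops  # noqa: F401  # the import under discussion

# The issue reports that this now prints True, i.e. merely importing
# xformers.ops spins up a CUDA context.
print(torch.cuda.is_initialized())
```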
BlockDiagonalAttention computes NaN gradients when using bfloat16 and deterministic torch
#1025 opened Apr 11, 2024 by nimia
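A sketch of the setup the title describes, assuming "BlockDiagonalAttention" refers to memory_efficient_attention with an fmha.BlockDiagonalMask; the shapes and sequence lengths are illustrative assumptions, and a CUDA GPU is required.

```python
# Sketch of the reported configuration: bfloat16 + deterministic torch
# + block-diagonal attention bias. Shapes are illustrative assumptions.
import torch
import xformers.ops as xops
from xformers.ops import fmha

torch.use_deterministic_algorithms(True)

# BlockDiagonalMask packs variable-length sequences along the sequence dim
# of a batch of size 1: here two sequences of lengths 3 and 5 (total 8).
mask = fmha.BlockDiagonalMask.from_seqlens([3, 5])
q, k, v = (torch.randn(1, 8, 4, 32, device="cuda", dtype=torch.bfloat16,
                       requires_grad=True) for _ in range(3))

out = xops.memory_efficient_attention(q, k, v, attn_bias=mask)
out.sum().backward()
# The issue reports NaNs appearing in the gradients under this configuration.
print(torch.isnan(q.grad).any())
```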
Output from memory_efficient_attention is not exactly the same as the equivalent PyTorch implementation
#1024 opened Apr 11, 2024 by wangh09
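A minimal sketch of such a comparison against torch.nn.functional.scaled_dot_product_attention (tensor sizes are illustrative assumptions; a CUDA GPU is assumed). Note the layout difference: xformers takes [batch, seqlen, heads, head_dim], while SDPA takes [batch, heads, seqlen, head_dim]. Bitwise equality between different kernels is generally not expected; differences should sit within normal floating-point tolerance.

```python
import torch
import torch.nn.functional as F
import xformers.ops as xops

q, k, v = (torch.randn(2, 128, 8, 64, device="cuda", dtype=torch.float16)
           for _ in range(3))

out_xf = xops.memory_efficient_attention(q, k, v)
# Transpose to SDPA's [B, H, M, K] layout and back
out_pt = F.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
).transpose(1, 2)

print((out_xf - out_pt).abs().max())                    # small but nonzero
print(torch.allclose(out_xf, out_pt, atol=1e-3, rtol=1e-3))
```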
ERROR: Could not build wheels for xformers, which is required to install pyproject.toml-based projects
#1023 opened Apr 9, 2024 by greasebig
Implementation ideas for an equivalent replacement of xformers with pure PyTorch
#1021 opened Apr 9, 2024 by tzayuan
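One such replacement, sketched for the common no-bias case: a wrapper around torch.nn.functional.scaled_dot_product_attention (PyTorch >= 2.0) that keeps xformers' tensor layout. The function name is hypothetical, and attn_bias variants would need separate translation.

```python
import torch
import torch.nn.functional as F

def memory_efficient_attention_pt(q, k, v, p=0.0):
    # Hypothetical pure-PyTorch stand-in for the no-bias case:
    # xformers layout [B, M, H, K] -> SDPA layout [B, H, M, K] and back.
    out = F.scaled_dot_product_attention(
        q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2),
        dropout_p=p,
    )
    return out.transpose(1, 2)

q, k, v = (torch.randn(2, 16, 4, 32) for _ in range(3))
print(memory_efficient_attention_pt(q, k, v).shape)  # torch.Size([2, 16, 4, 32])
```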