
Why does the vllm flash attention build only support CUDA 12.1 (not 11.8)? #4801

thangld201 started this conversation in General
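The discussion received no replies, so no answer is recorded here. As a hypothetical first diagnostic step for a question like this (not taken from the discussion), one can check which CUDA toolkit the local PyTorch build targets, since prebuilt CUDA extension wheels generally have to match it. A minimal sketch:

```python
# Hypothetical diagnostic sketch (not from the discussion): print which
# CUDA toolkit the installed PyTorch build was compiled against, since
# prebuilt CUDA extension wheels generally have to match this version.
import torch

print("PyTorch version:", torch.__version__)
print("Built with CUDA:", torch.version.cuda)       # e.g. "12.1" or "11.8"
print("CUDA available:", torch.cuda.is_available())
```

If this prints a CUDA version other than the one a wheel was built for, that mismatch is a common reason a prebuilt binary cannot be used on that system.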
Replies: 0
