ComfyUI/comfy/ldm/modules/attention.py
FeepingCreature 7aceb9f91c Add --use-flash-attention flag. (#7223)
* Add --use-flash-attention flag. This is useful on AMD systems, as Flash Attention (FA) builds are still 10% faster than PyTorch cross-attention.
2025-03-14 03:22:41 -04:00
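
The flag switches ComfyUI's attention helpers over to a flash-attention backend. As a rough illustration of what such a backend involves, here is a minimal sketch built on the flash-attn package's flash_attn_func, assuming q, k, and v arrive as (batch, seq_len, heads * dim_head) tensors together with a heads count, in the style of the other attention_* helpers in attention.py; the name attention_flash, the availability check, and the PyTorch fallback are illustrative assumptions, not the repository's exact code.

# Hedged sketch: a flash-attention backend with a PyTorch fallback.
# Assumes q, k, v are (batch, seq_len, heads * dim_head) tensors, as in the
# other attention_* helpers; all names here are illustrative.
import torch

try:
    from flash_attn import flash_attn_func  # provided by the flash-attn package
    FLASH_ATTN_AVAILABLE = True
except ImportError:
    FLASH_ATTN_AVAILABLE = False


def attention_flash(q, k, v, heads, mask=None):
    """Scaled dot-product attention via flash-attn, falling back to PyTorch."""
    b, seq_q, inner_dim = q.shape
    dim_head = inner_dim // heads

    # flash-attn only handles fp16/bf16 tensors and (in this sketch) the unmasked case.
    if FLASH_ATTN_AVAILABLE and mask is None and q.dtype in (torch.float16, torch.bfloat16):
        # flash_attn_func expects and returns (batch, seq_len, heads, dim_head).
        q_, k_, v_ = (t.reshape(t.shape[0], t.shape[1], heads, dim_head) for t in (q, k, v))
        out = flash_attn_func(q_, k_, v_)
        return out.reshape(b, seq_q, inner_dim)

    # Fallback: PyTorch's built-in scaled_dot_product_attention,
    # which expects (batch, heads, seq_len, dim_head).
    q_, k_, v_ = (t.reshape(t.shape[0], t.shape[1], heads, dim_head).transpose(1, 2)
                  for t in (q, k, v))
    out = torch.nn.functional.scaled_dot_product_attention(q_, k_, v_, attn_mask=mask)
    return out.transpose(1, 2).reshape(b, seq_q, inner_dim)

In ComfyUI the attention backend is selected from command-line flags such as --use-flash-attention; a backend along these lines would be the one dispatched to when that flag is set and the flash-attn package is installed (again, a sketch rather than the actual dispatch logic).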
