optimizing-attention-flash

Optimizes transformer attention with Flash Attention

Category: AI & Machine Learning
Author: davila7
Updated: Jan 2026
Tags: 6
Install Command: claude skill add davila7/claude-code-templates

Description

Optimizes transformer attention with Flash Attention for a 2-4x speedup and 10-20x memory reduction. Use when training or running transformers with long sequences (>512 tokens), when hitting GPU memory limits in the attention layers, or when you need faster inference. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention.

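As a rough illustration of the PyTorch-native SDPA path the description mentions (this sketch is not taken from the skill itself, and the tensor shapes are arbitrary assumptions), a single fused call replaces a hand-written softmax(QK^T/sqrt(d))V attention and lets PyTorch dispatch to a Flash Attention kernel when the hardware and dtype allow it:

```python
# Minimal sketch of the PyTorch-native SDPA path (illustrative, not the skill's own code).
# On supported GPUs with fp16/bf16 inputs, F.scaled_dot_product_attention dispatches to a
# Flash Attention kernel; on CPU it falls back to the math backend, so this still runs anywhere.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 2, 8, 1024, 64  # illustrative shapes
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Fused causal attention: with the flash/mem-efficient GPU backends the full
# (seq_len x seq_len) score matrix is never materialized, which is where the
# memory savings for long sequences come from.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```

The flash-attn library, H100 FP8, and sliding-window variants listed in the description require the separate flash-attn package and suitable hardware; the SDPA call above is only the dependency-free baseline path.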
Tags

transformers, Flash Attention, GPU, inference, memory optimization, PyTorch

Information

Developer: davila7
Category: AI & Machine Learning
Created: Jan 15, 2026
Updated: Jan 15, 2026
