Cross-platform FlashAttention-2 Triton implementation for Turing+ GPUs with custom configuration mode
FlashAttention for sliding window attention in Triton (fwd + bwd pass)
This repository contains multiple implementations of Flash Attention optimized with Triton kernels, showcasing progressive performance improvements through hardware-aware optimizations. The implementations range from basic block-wise processing to advanced techniques like FP8 quantization and prefetching.
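As a point of reference for what block-wise processing means here, the following is a minimal, single-head PyTorch sketch of a tiled attention forward pass with an online softmax; the function name, block size, and tensor shapes are illustrative and not taken from the repository.

```python
import torch

def blockwise_attention(q, k, v, block_size=64):
    """Tiled attention forward with an online (streaming) softmax (illustrative sketch)."""
    seq_len, head_dim = q.shape
    scale = head_dim ** -0.5
    out = torch.zeros_like(q)                         # running unnormalized output
    row_max = torch.full((seq_len, 1), float("-inf")) # running row-wise max of scores
    row_sum = torch.zeros(seq_len, 1)                 # running softmax denominator
    for start in range(0, seq_len, block_size):
        kb = k[start:start + block_size]              # current key block
        vb = v[start:start + block_size]              # current value block
        scores = (q @ kb.T) * scale                   # (seq_len, block_size)
        new_max = torch.maximum(row_max, scores.max(dim=-1, keepdim=True).values)
        correction = torch.exp(row_max - new_max)     # rescale previous accumulators
        p = torch.exp(scores - new_max)
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ vb
        row_max = new_max
    return out / row_sum                              # normalize once at the end

q, k, v = (torch.randn(128, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(blockwise_attention(q, k, v), ref, atol=1e-5)
```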
CUDA 12-first backend inference for Unsloth on Kaggle — Optimized for small GGUF models (1B-5B) on dual Tesla T4 GPUs (15GB each, SM 7.5)
HRM-sMoE LLM training toolkit.
An easy, naive Flash Attention implementation without optimizations, based on the original paper
PyTorch implementation of YOLOv12 with Scaled Dot-Product Attention (SDPA) optimized by FlashAttention for fast and efficient object detection.
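For context on the SDPA path, PyTorch's torch.nn.functional.scaled_dot_product_attention dispatches to a FlashAttention kernel on supported GPUs; the shapes below are illustrative and not taken from the YOLOv12 code.

```python
import torch
import torch.nn.functional as F

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

batch, heads, seq_len, head_dim = 2, 8, 256, 64       # illustrative shapes
q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Fused scaled dot-product attention; on supported GPUs PyTorch selects a
# FlashAttention backend, otherwise it falls back to other kernels.
out = F.scaled_dot_product_attention(q, k, v, is_causal=False)
print(out.shape)  # torch.Size([2, 8, 256, 64])
```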
Flash Attention (forward pass only) in 200 lines of CUDA.
FlashAttention2 Analysis in Triton
A minimal CUDA implementation of FlashAttention v1 and v2
A high-performance kernel implementation of multi-head attention using Triton. Focused on minimizing memory overhead and maximizing throughput for large-scale transformer layers. Includes clean tensor layouts, head-grouping optimisations, and ready-to-benchmark code you can plug into custom models.
A minimal, educational implementation of Ring Attention logic using custom OpenAI Triton kernels. Supports blockwise computation and online softmax merging.
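The online softmax merging such blockwise schemes rely on can be written in a few lines; this is a hedged plain-PyTorch sketch (function names and shapes are illustrative, not the repository's API): each block returns an unnormalized output plus its running row max and row sum, and two partial results merge exactly.

```python
import torch

def partial_attention(q, k_block, v_block):
    """Attention of q against one key/value block: unnormalized output, row max, row sum."""
    scores = q @ k_block.T * q.shape[-1] ** -0.5
    m = scores.max(dim=-1, keepdim=True).values
    p = torch.exp(scores - m)
    return p @ v_block, m, p.sum(dim=-1, keepdim=True)

def merge(out_a, m_a, s_a, out_b, m_b, s_b):
    """Exactly combine two partial softmax results by rescaling to a shared max."""
    m = torch.maximum(m_a, m_b)
    a, b = torch.exp(m_a - m), torch.exp(m_b - m)
    return out_a * a + out_b * b, m, s_a * a + s_b * b

q, k, v = torch.randn(32, 64), torch.randn(128, 64), torch.randn(128, 64)
o1, m1, s1 = partial_attention(q, k[:64], v[:64])
o2, m2, s2 = partial_attention(q, k[64:], v[64:])
out, _, s = merge(o1, m1, s1, o2, m2, s2)
ref = torch.softmax(q @ k.T * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(out / s, ref, atol=1e-5)
```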
PyTorch implementation of the paper FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness