writing kernels for fun! (but need a job right now)
xiaomingchinafun@outlook.com
BUPT, Beijing
Popular repositories
- flash_attn_cuda (Public)
  A simple, naive flash attention implementation without optimizations, based on the original paper.
  Cuda · 5 stars
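The repo description points at the core trick of the flash attention paper: computing softmax attention over K/V blocks with an online (rescaled) softmax, so the full N×N score matrix is never materialized. Below is a minimal NumPy sketch of that idea for one head, without masking or batching. This is an illustration of the algorithm, not code from the repo, and all names (`flash_attention`, `naive_attention`, `block`) are made up for the example:

```python
import numpy as np

def naive_attention(Q, K, V):
    # Reference: full softmax(Q K^T / sqrt(d)) V, materializing all scores.
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)
    P = np.exp(S - S.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

def flash_attention(Q, K, V, block=32):
    # Stream K/V in blocks, keeping a running row max `m` and softmax
    # denominator `l`; old accumulators are rescaled whenever the max
    # grows, so only block-sized score tiles ever exist.
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((N, d))
    m = np.full(N, -np.inf)   # running row-wise max of the scores
    l = np.zeros(N)           # running softmax denominator
    for j in range(0, K.shape[0], block):
        Kj, Vj = K[j:j + block], V[j:j + block]
        S = Q @ Kj.T * scale                    # scores for this tile only
        m_new = np.maximum(m, S.max(axis=-1))
        alpha = np.exp(m - m_new)               # rescale old partial sums
        P = np.exp(S - m_new[:, None])
        l = l * alpha + P.sum(axis=-1)
        O = O * alpha[:, None] + P @ Vj
        m = m_new
    return O / l[:, None]
```

In a real CUDA version each tile would live in shared memory and the running max/denominator in registers, but the rescaling arithmetic is the same.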
- pytorch (Public, forked from pytorch/pytorch)
  Tensors and Dynamic neural networks in Python with strong GPU acceleration
  Python
