
Refactor turbomind attention by precomputing rotary embed #2801

Open
wants to merge 17 commits into base: main

Conversation

@irexyc (Collaborator) commented on Nov 25, 2024

Motivation

Calculate the cos/sin values in advance and reduce the number of parameters passed to the prefill/decode kernels; a minimal sketch of the idea follows.
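Below is a minimal CUDA sketch of the idea, not the actual turbomind code: the cos/sin table is filled once, so the prefill/decode attention kernels can index into it instead of recomputing rotary angles and receiving the full set of RoPE parameters. The kernel name `precompute_rope_cos_sin`, the `[seq_len, dim/2, 2]` layout, and the block/thread mapping are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// One block per token position; each thread handles a strided subset of
// the rotary channel pairs. The table is laid out as [seq_len, dim/2, 2]
// with cos at offset 0 and sin at offset 1 (an assumed layout).
__global__ void precompute_rope_cos_sin(float* cos_sin,
                                        int   seq_len,
                                        int   dim,    // rotary dimension
                                        float base)   // e.g. 10000.f
{
    const int pos = blockIdx.x;
    if (pos >= seq_len) {
        return;
    }
    for (int i = threadIdx.x; i < dim / 2; i += blockDim.x) {
        const float inv_freq = powf(base, -2.f * i / dim);  // per-pair inverse frequency
        float s, c;
        sincosf(pos * inv_freq, &s, &c);
        cos_sin[(pos * (dim / 2) + i) * 2 + 0] = c;
        cos_sin[(pos * (dim / 2) + i) * 2 + 1] = s;
    }
}

// Host-side helper: build the table once so that both the prefill and the
// decode attention kernels can take a single pointer instead of the full
// set of RoPE parameters.
void build_rope_table(float* d_cos_sin, int seq_len, int dim, float base, cudaStream_t stream)
{
    precompute_rope_cos_sin<<<seq_len, 128, 0, stream>>>(d_cos_sin, seq_len, dim, base);
}
```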

@lvhan028 changed the title from "Use precomputed cos/sin" to "Refactor turbomind attention by precomputing cos/sin" on Nov 27, 2024
@lvhan028 (Collaborator) commented on Dec 3, 2024

"internlm/internlm2_5-7b-chat-1m" long-context test failed

@lzhangzz (Collaborator) commented on Dec 3, 2024

Do we really need float for RoPE? Most models store them as half or bf16.
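For reference, a hedged sketch of what a half-precision table could look like, computing the angles in float and only rounding the final cos/sin values to half; the kernel name and `__half2` layout are assumptions, not the PR's actual choice.

```cuda
#include <cuda_fp16.h>
#include <math.h>

// Same table as the float sketch above, but stored as __half2 (cos in .x,
// sin in .y). Angles are still computed in float; only the final values
// are rounded to half precision on store.
__global__ void precompute_rope_cos_sin_half(__half2* cos_sin,  // [seq_len, dim/2]
                                             int seq_len, int dim, float base)
{
    const int pos = blockIdx.x;
    if (pos >= seq_len) {
        return;
    }
    for (int i = threadIdx.x; i < dim / 2; i += blockDim.x) {
        float s, c;
        sincosf(pos * powf(base, -2.f * i / dim), &s, &c);
        cos_sin[pos * (dim / 2) + i] = __floats2half2_rn(c, s);  // round-to-nearest half pair
    }
}
```

Storing the table as half halves its memory traffic; the trade-off is the rounding of the stored cos/sin values, which is what the question above is probing.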

@lvhan028 changed the title from "Refactor turbomind attention by precomputing cos/sin" to "Refactor turbomind attention by precomputing rotary embed" on Dec 4, 2024
@zhulinJulia24 (Collaborator) commented

@irexyc Running the generation benchmark with

CUDA_VISIBLE_DEVICES=4,5 python3 benchmark/profile_generation.py /nvme/qa_test_models/mistralai/Mixtral-8x7B-Instruct-v0.1 --backend pytorch -c 8 256 -ct 128 128 2048 128 -pt 1 128 128 2048 --tp 2 --cache-max-entry-count 0.8

uses only 1 GPU and the process hangs.
