Hello everyone.
In my conda env I have transformers, torch, and icecream installed.
The code that produces this error is the following:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("q-future/one-align", trust_remote_code=True)
model.score(r'C:\path\to\image.png')  # raw string so the backslashes are not treated as escapes
```
I don't know if it's relevant, but my laptop has no dedicated GPU.
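For reference, a CPU-only load would look roughly like this (a minimal sketch; `torch_dtype` and `device_map` are just the standard `from_pretrained` keywords, nothing specific to this model):

```python
import torch
from transformers import AutoModel

# CPU-only load: keep full precision, since fp16 matmuls are poorly supported on CPU.
model = AutoModel.from_pretrained(
    "q-future/one-align",
    trust_remote_code=True,
    torch_dtype=torch.float32,
    device_map="cpu",
)
```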
Additional logs:
Instantiating LlamaAttention without passing `layer_idx` is not recommended and will lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` when creating this class.
`LlamaRotaryEmbedding` can now be fully parameterized by passing the model config through the `config` argument. All other arguments will be removed in v4.45.
Is there something I'm doing wrong?
To be compatible with the latest transformers, the following modifications need to be made (a combined sketch follows the list):
1. Set `mlp_bias` to `False` during the initialization of `LlamaConfig`.
2. In `q_align/model/modeling_llama2.py`, add the import `from transformers.modeling_attn_mask_utils import _prepare_4d_attention_mask_for_sdpa, _prepare_4d_causal_attention_mask_for_sdpa`.
3. Also in `q_align/model/modeling_llama2.py`, in the several attention methods change `cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)` to `cos, sin = self.rotary_emb(value_states, position_ids)`.
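Putting the three changes together, a rough sketch (the surrounding context of `modeling_llama2.py` is assumed, so treat this as illustrative rather than a drop-in patch):

```python
# (2) new import used by the SDPA attention-mask helpers in recent transformers
from transformers.modeling_attn_mask_utils import (
    _prepare_4d_attention_mask_for_sdpa,
    _prepare_4d_causal_attention_mask_for_sdpa,
)
from transformers import LlamaConfig

# (1) pass mlp_bias=False when the config is built, so the bias-free
#     original checkpoints still load with newer transformers
config = LlamaConfig(mlp_bias=False)

# (3) inside each attention forward(), the rotary embedding is now called with
#     position_ids instead of seq_len:
#       old: cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
#       new: cos, sin = self.rotary_emb(value_states, position_ids)
```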