Commit: Change the name of the Infinity-Instruct-7M-0729-Models to Infinity-Instruct-7M-Gen-Models (#387)

* Change the name of the Infinity-Instruct-7M-0729-Models to Infinity-Instruct-7M-Gen-Models
* Delete Infinity-Instruct-7M-0729-Models

Co-authored-by: Yuhui Zheng <cs.yuhuizheng@my.cityu.edu.hk>
Showing 14 changed files with 4,856 additions and 4,856 deletions.
...t-7M-0729-Llama3_1-70B/model_outputs.json → ...ct-7M-Gen-Llama3_1-70B/model_outputs.json
1,610 changes: 805 additions & 805 deletions (large diff not rendered by default)
...d_alpaca_eval_gpt4_turbo/annotations.json → ...d_alpaca_eval_gpt4_turbo/annotations.json
1,610 changes: 805 additions & 805 deletions (large diff not rendered by default)
...ct-7M-0729-Llama3_1-8B/model_outputs.json → ...uct-7M-Gen-Llama3_1-8B/model_outputs.json
1,610 changes: 805 additions & 805 deletions (large diff not rendered by default)
...d_alpaca_eval_gpt4_turbo/annotations.json → ...d_alpaca_eval_gpt4_turbo/annotations.json
1,610 changes: 805 additions & 805 deletions (large diff not rendered by default)
...uct-7M-0729-mistral-7B/model_outputs.json → ...ruct-7M-Gen-mistral-7B/model_outputs.json
1,610 changes: 805 additions & 805 deletions (large diff not rendered by default)
...d_alpaca_eval_gpt4_turbo/annotations.json → ...d_alpaca_eval_gpt4_turbo/annotations.json
1,610 changes: 805 additions & 805 deletions (large diff not rendered by default)
src/alpaca_eval/models_configs/Infinity-Instruct-7M-0729-Llama3_1-70B/configs.yaml
13 changes: 0 additions & 13 deletions (file deleted)
...Instruct-7M-0729-Llama3_1-8B/configs.yaml → ...Instruct-7M-Gen-Llama3_1-70B/configs.yaml
10 changes: 5 additions & 5 deletions
@@ -1,13 +1,13 @@
-Infinity-Instruct-7M-0729-Llama3_1-8B: # this should be the same as the name as the current directory
-  prompt_template: "Infinity-Instruct-7M-0729-Llama3_1-8B/prompt.txt" # what prompt should be used for this model
+Infinity-Instruct-7M-Gen-Llama3_1-70B: # this should be the same as the name as the current directory
+  prompt_template: "Infinity-Instruct-7M-Gen-Llama3_1-70B/prompt.txt" # what prompt should be used for this model
   fn_completions: "openai_completions" # what function should be used to generate completions. See `src/alpaca_eval/decoders` for options
   completions_kwargs: # parameters to the completion function
-    model_name: "baai/Infinity-Instruct-7M-0729-Llama3_1-8B"
+    model_name: "baai/Infinity-Instruct-7M-Gen-Llama3_1-70B"
     model_kwargs:
       torch_dtype: 'bfloat16'
       trust_remote_code: True
     max_new_tokens: 2048
     temperature: 0.7
     do_sample: True
-  pretty_name: "Infinity-Instruct-7M-0729-Llama3_1-8B" # name in the leaderboard
-  link: "https://huggingface.co/BAAI/Infinity-Instruct-7M-0729-Llama3_1-8B" # link to the model's repo/information in the leaderboard
+  pretty_name: "Infinity-Instruct-7M-Gen-Llama3_1-70B" # name in the leaderboard
+  link: "https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B" # link to the model's repo/information in the leaderboard
File renamed without changes.
...-Instruct-7M-0729-mistral-7B/configs.yaml → ...-Instruct-7M-Gen-Llama3_1-8B/configs.yaml
10 changes: 5 additions & 5 deletions
@@ -1,13 +1,13 @@
-Infinity-Instruct-7M-0729-mistral-7B: # this should be the same as the name as the current directory
-  prompt_template: "Infinity-Instruct-7M-0729-mistral-7B/prompt.txt" # what prompt should be used for this model
+Infinity-Instruct-7M-Gen-Llama3_1-8B: # this should be the same as the name as the current directory
+  prompt_template: "Infinity-Instruct-7M-Gen-Llama3_1-8B/prompt.txt" # what prompt should be used for this model
   fn_completions: "openai_completions" # what function should be used to generate completions. See `src/alpaca_eval/decoders` for options
   completions_kwargs: # parameters to the completion function
-    model_name: "baai/Infinity-Instruct-7M-0729-mistral-7B"
+    model_name: "baai/Infinity-Instruct-7M-Gen-Llama3_1-8B"
     model_kwargs:
       torch_dtype: 'bfloat16'
       trust_remote_code: True
     max_new_tokens: 2048
     temperature: 0.7
     do_sample: True
-  pretty_name: "Infinity-Instruct-7M-0729-mistral-7B" # name in the leaderboard
-  link: "https://huggingface.co/BAAI/Infinity-Instruct-7M-0729-mistral-7B" # link to the model's repo/information in the leaderboard
+  pretty_name: "Infinity-Instruct-7M-Gen-Llama3_1-8B" # name in the leaderboard
+  link: "https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B" # link to the model's repo/information in the leaderboard
File renamed without changes.
src/alpaca_eval/models_configs/Infinity-Instruct-7M-Gen-mistral-7B/configs.yaml
13 changes: 13 additions & 0 deletions (new file)
@@ -0,0 +1,13 @@
+Infinity-Instruct-7M-Gen-mistral-7B: # this should be the same as the name as the current directory
+  prompt_template: "Infinity-Instruct-7M-Gen-mistral-7B/prompt.txt" # what prompt should be used for this model
+  fn_completions: "openai_completions" # what function should be used to generate completions. See `src/alpaca_eval/decoders` for options
+  completions_kwargs: # parameters to the completion function
+    model_name: "baai/Infinity-Instruct-7M-Gen-mistral-7B"
+    model_kwargs:
+      torch_dtype: 'bfloat16'
+      trust_remote_code: True
+    max_new_tokens: 2048
+    temperature: 0.7
+    do_sample: True
+  pretty_name: "Infinity-Instruct-7M-Gen-mistral-7B" # name in the leaderboard
+  link: "https://huggingface.co/BAAI/Infinity-Instruct-7M-Gen-mistral-7B" # link to the model's repo/information in the leaderboard
File renamed without changes.
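For context (not part of the diff above): the completions_kwargs in these configs mirror standard Hugging Face transformers generation arguments, so the renamed Infinity-Instruct-7M-Gen-mistral-7B entry would, roughly, drive a generation call like the sketch below. This is a hedged illustration under that assumption, not alpaca_eval's actual code path (which dispatches through the decoder functions in src/alpaca_eval/decoders); the prompt string is a hypothetical stand-in for the rendered prompt.txt template.

# Illustrative sketch only: an assumed mapping of the config's completions_kwargs
# onto a plain Hugging Face generate() call; not the project's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Checkpoint name taken from the config's link field (BAAI capitalization).
model_name = "BAAI/Infinity-Instruct-7M-Gen-mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # model_kwargs -> torch_dtype: 'bfloat16'
    trust_remote_code=True,      # model_kwargs -> trust_remote_code: True
)

prompt = "Explain what AlpacaEval measures."  # hypothetical stand-in for prompt.txt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=2048,  # completions_kwargs -> max_new_tokens
    temperature=0.7,      # completions_kwargs -> temperature
    do_sample=True,       # completions_kwargs -> do_sample
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))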