add gemini-exp-1206, gemini-2.0-flash-thinking-exp-1219 #5958

Closed
wants to merge 2 commits into from
2 changes: 1 addition & 1 deletion app/client/platforms/openai.ts
@@ -240,7 +240,7 @@ export class ChatGPTApi implements LLMApi {
}

// add max_tokens to vision model
-    if (visionModel) {
+    if (visionModel && modelConfig.model !== "glm-4v-flash") {
requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
}
}
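The exclusion above is needed because vision detection is substring-based: the "glm-4v" keyword (added in app/utils.ts further down) also matches "glm-4v-flash". A minimal sketch of that interaction, with simplified names — `maxTokensFor` is a hypothetical helper, and the real code mutates `requestPayload` instead of returning a value:

```typescript
// Abbreviated keyword list; the real one lives in app/utils.ts.
const visionKeywords = ["vision", "glm-4v"];

function isVisionModel(model: string): boolean {
  // Substring matching: "glm-4v" also matches "glm-4v-flash".
  return visionKeywords.some((k) => model.includes(k));
}

// glm-4v-flash is detected as a vision model by substring match,
// but must not receive the max_tokens override (hence the diff's guard).
function maxTokensFor(model: string, configuredMax: number): number | undefined {
  if (isVisionModel(model) && model !== "glm-4v-flash") {
    return Math.max(configuredMax, 4000);
  }
  return undefined;
}
```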
9 changes: 9 additions & 0 deletions app/constant.ts
@@ -325,6 +325,15 @@ const googleModels = [
"gemini-exp-1121",
"learnlm-1.5-pro-experimental",
"gemini-2.0-flash-exp",
+  "gemini-1.5-flash",
+  "gemini-1.5-flash-002",
+  "gemini-1.5-flash-8b",
+  "gemini-1.5-flash-exp-0827",
+  "gemini-1.5-pro",
+  "gemini-1.5-pro-002",
+  "gemini-1.5-pro-exp-0827",
+  "gemini-2.0-flash-thinking-exp-1219",
+  "gemini-exp-1206",
Comment on lines +328 to +336
Contributor
⚠️ Potential issue

Add knowledge cutoff dates for new Gemini models

The new Gemini models have been added, but their knowledge cutoff dates are not defined in the KnowledgeCutOffDate object. This could lead to incorrect cutoff date information being displayed to users.

Add the following entries to the KnowledgeCutOffDate object:

export const KnowledgeCutOffDate: Record<string, string> = {
  // ... existing entries ...
+ "gemini-1.5-flash": "2024-04",
+ "gemini-1.5-flash-002": "2024-04",
+ "gemini-1.5-flash-8b": "2024-04",
+ "gemini-1.5-flash-exp-0827": "2024-04",
+ "gemini-1.5-pro": "2024-04",
+ "gemini-1.5-pro-002": "2024-04",
+ "gemini-1.5-pro-exp-0827": "2024-04",
+ "gemini-2.0-flash-thinking-exp-1219": "2024-04",
+ "gemini-exp-1206": "2024-04",
};

Committable suggestion skipped: line range outside the PR's diff.
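For context, the cutoff table is consulted per model name when rendering the system prompt. A minimal sketch of the lookup the suggestion targets, assuming an exact-match table with a "default" fallback (entries abbreviated; `cutoffFor` is a hypothetical helper, not the project's actual function name):

```typescript
// Abbreviated stand-in for the KnowledgeCutOffDate object in app/constant.ts.
const KnowledgeCutOffDate: Record<string, string> = {
  default: "2021-09",
  // Hypothetical entry per the review suggestion: without it, new Gemini
  // models silently fall through to the default cutoff.
  "gemini-exp-1206": "2024-04",
};

function cutoffFor(model: string): string {
  // Exact-match lookup with a default fallback.
  return KnowledgeCutOffDate[model] ?? KnowledgeCutOffDate["default"];
}
```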

];

const anthropicModels = [
1 change: 1 addition & 0 deletions app/utils.ts
@@ -265,6 +265,7 @@ export function isVisionModel(model: string) {
"learnlm",
"qwen-vl",
"qwen2-vl",
+    "glm-4v",
Contributor

πŸ’‘ Codebase verification

GLM-4V model is not properly integrated in the codebase

The codebase shows inconsistencies in GLM-4V support:

  • "glm-4v" is listed in vision keywords but not in the chatglmModels array
  • There's a reference to "glm-4v-flash" in OpenAI platform handling, but it's not properly defined
  • The vision model detection doesn't properly account for GLM-4V variants
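The list/keyword mismatch flagged above can be illustrated with a hypothetical consistency check — the model array below is a stand-in, not the real contents of app/constant.ts:

```typescript
// Stand-in model list; the real chatglmModels array lives in app/constant.ts
// and, per the review, contains no glm-4v entry.
const chatglmModels = ["glm-4-plus", "glm-4-flash"];

// Find vision keywords that no registered model actually matches —
// the kind of inconsistency the review flags for "glm-4v".
function danglingVisionKeywords(models: string[], keywords: string[]): string[] {
  return keywords.filter((k) => !models.some((m) => m.includes(k)));
}
```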
πŸ”— Analysis chain

LGTM! Verify GLM-4V vision capabilities.

The addition of "glm-4v" to the vision keywords list is consistent with the model's capabilities.

Let's verify the GLM-4V model's vision capabilities and its usage in the codebase:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for GLM-4V model usage and configuration
rg -A 5 "glm-4v" --type ts --type tsx

Length of output: 65


Script:

#!/bin/bash
# Search for GLM-4V model usage and configuration with correct file type
rg -A 5 "glm-4v"

# Also check for any configuration or constant files
fd -e ts -e js "config|const" --exec rg -l "glm"

# Check for vision-related functionality
rg -i "vision.*model|image.*model"

Length of output: 2209


Script:

#!/bin/bash
# Check the vision keywords and GLM model configuration
cat app/constant.ts

# Check the vision model implementation
cat app/utils.ts | grep -A 10 "isVisionModel"

# Check GLM-4V specific handling
cat app/client/platforms/openai.ts | grep -A 10 "glm-4v"

Length of output: 15225

];
const isGpt4Turbo =
model.includes("gpt-4-turbo") && !model.includes("preview");