feat: Implement Vertex AI support for Anthropic and Google models #5794
base: main
Conversation
@ryanhex53 is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough
The changes in this pull request introduce enhancements to the API's handling of authorization and request processing, particularly for integration with Google Cloud's Vertex AI. Modifications include the addition of a Google Cloud token retrieval function, updates to authorization header logic, and adjustments to request body handling to accommodate specific parameters for Vertex AI. The server configuration is also updated to support these changes, allowing for more flexible interactions with various AI services.
Actionable comments posted: 12
🧹 Outside diff range and nitpick comments (10)
app/api/common.ts (2)
Line range hint 11-188: Consider renaming the function for better maintainability.
As the codebase now supports multiple AI providers (OpenAI, Azure, and Vertex AI), the name requestOpenai might be misleading. Consider a more generic name like requestAIProvider or handleAIRequest to better reflect its evolving purpose.
190-190: Consider potential concurrency implications.
The module-level gTokenClient variable could face race conditions in a serverless environment where multiple requests might try to initialize it simultaneously. Consider implementing a proper synchronization mechanism or moving the token client to a request-scoped context.
app/client/api.ts (2)
320-327: Consider refactoring provider authentication configuration
To improve maintainability and type safety, consider:
- Creating an enum or const object for auth header names
- Moving provider-specific auth logic to their respective platform files
- Adding TypeScript interfaces for provider auth configurations
Example refactor:
// Define auth header types
enum AuthHeaderType {
  AUTHORIZATION = "Authorization",
  API_KEY = "api-key",
  X_API_KEY = "x-api-key",
  X_GOOG_API_KEY = "x-goog-api-key",
}

// Provider auth config interface
interface ProviderAuthConfig {
  headerType: AuthHeaderType;
  requiresBearer: boolean;
  // Add Vertex AI specific fields if needed
}

// Move to a central config file
const PROVIDER_AUTH_CONFIG: Record<ServiceProvider, ProviderAuthConfig> = {
  [ServiceProvider.Google]: {
    headerType: AuthHeaderType.X_GOOG_API_KEY,
    requiresBearer: false,
  },
  // ... other providers
};
320-327: Simplify provider configuration logic
The current implementation in getConfig has multiple boolean flags and complex conditional logic. Consider simplifying it with a more data-driven approach.
Example refactor:
interface ProviderConfig {
  apiKeyGetter: (store: typeof accessStore) => string;
  // Add other provider-specific config
}

const PROVIDER_CONFIGS: Record<ServiceProvider, ProviderConfig> = {
  [ServiceProvider.Google]: {
    apiKeyGetter: (store) => store.googleApiKey,
  },
  [ServiceProvider.Iflytek]: {
    apiKeyGetter: (store) =>
      store.iflytekApiKey && store.iflytekApiSecret
        ? `${store.iflytekApiKey}:${store.iflytekApiSecret}`
        : "",
  },
  // ... other providers
};

function getConfig() {
  const modelConfig = chatStore.currentSession().mask.modelConfig;
  const providerName = modelConfig.providerName;
  const config = PROVIDER_CONFIGS[providerName];
  return {
    apiKey: config.apiKeyGetter(accessStore),
    isEnabledAccessControl: accessStore.enabledAccessControl(),
  };
}
app/utils/chat.ts (3)
170-170: Consider refactoring the stream function for better maintainability.
The function has many parameters and handles multiple responsibilities (streaming, tool execution, animation). Consider breaking it down into smaller, focused functions using a builder or command pattern.
Example refactor approach:
interface StreamOptions {
  chatPath: string;
  requestPayload: any;
  headers: any;
  tools: any[];
  funcs: Record<string, Function>;
  controller: AbortController;
  callbacks: {
    parseSSE: (text: string, runTools: any[]) => string | undefined;
    processToolMessage: (
      requestPayload: any,
      toolCallMessage: any,
      toolCallResult: any[],
    ) => void;
  };
  chatOptions: ChatOptions;
}

class ChatStreamBuilder {
  private options: Partial<StreamOptions> = {};
  withEndpoint(chatPath: string) {
    this.options.chatPath = chatPath;
    return this;
  }
  withPayload(payload: any) {
    this.options.requestPayload = payload;
    return this;
  }
  // ... other builder methods
  build() {
    return new ChatStream(this.options as StreamOptions);
  }
}

class ChatStream {
  constructor(private options: StreamOptions) {}
  async start() {
    // Existing stream logic split into smaller methods
  }
  private handleToolExecution() {
    // Tool execution logic
  }
  private handleAnimation() {
    // Animation logic
  }
}
Line range hint 279-309: Enhance error handling for Vertex AI integration.
The error handling should be enhanced to cover Vertex AI specific error cases and provide more informative error messages to users.
Consider adding specific error handling:
async onopen(res) {
  clearTimeout(requestTimeoutId);
  const contentType = res.headers.get("content-type");
  console.log("[Request] response content type: ", contentType);
  responseRes = res;
  if (contentType?.startsWith("text/plain")) {
    responseText = await res.clone().text();
    return finish();
  }
  if (!res.ok) {
    const responseTexts = [responseText];
    let extraInfo = await res.clone().text();
    try {
      const resJson = await res.clone().json();
      extraInfo = prettyObject(resJson);
      // Add Vertex AI specific error handling
      if (resJson.error?.code === 'UNAUTHENTICATED') {
        responseTexts.push(Locale.Error.VertexAIAuth);
      } else if (resJson.error?.status === 'PERMISSION_DENIED') {
        responseTexts.push(Locale.Error.VertexAIPermission);
      } else if (resJson.error?.details?.includes('quota')) {
        responseTexts.push(Locale.Error.VertexAIQuota);
      }
    } catch {}
    if (res.status === 401) {
      responseTexts.push(Locale.Error.Unauthorized);
    }
    if (extraInfo) {
      responseTexts.push(extraInfo);
    }
    responseText = responseTexts.join("\n\n");
    return finish();
  }
}
Line range hint 266-269: Consider adjusting the request timeout for Vertex AI.
The current timeout might need adjustment for Vertex AI latency characteristics. Consider making it configurable based on the model provider.
Example implementation:
const getTimeoutMs = (options: ChatOptions) => {
  if (options.isVertexAI) {
    return 120000; // 2 minutes for Vertex AI
  }
  return REQUEST_TIMEOUT_MS; // default timeout
};

const requestTimeoutId = setTimeout(
  () => controller.abort(),
  getTimeoutMs(options),
);
app/api/anthropic.ts (1)
72-76: Consider caching the Google Cloud token to improve performance
Currently, getGCloudToken() is called on every request when serverConfig.isVertexAI is true. Since the token may have a valid duration, consider caching it to reduce unnecessary token retrieval calls and improve performance.
app/utils/gtoken.ts (2)
262-267: Consider proxy support limitations with 'fetch'
The usage of fetch in #requestToken and #revokeTokenAsync may not handle HTTP proxies (e.g., the HTTPS_PROXY environment variable), which could lead to issues in environments that require proxy support.
Consider using a library or custom implementation that supports HTTP proxies in the edge runtime environment, or handle proxy settings explicitly to ensure the code functions correctly in all deployment environments.
Also applies to: 197-197
275-283: Preserve the original error stack when rethrowing errors
When catching an error and rethrowing it as a new Error, the original stack trace is lost. This can make debugging more difficult.
Consider wrapping the original error to preserve the stack trace:
  if (e instanceof Error) {
-   throw Error("failed to get token: " + e.message);
+   throw new Error(`Failed to get token: ${e.message}`, { cause: e });
  } else {
-   throw Error("failed to get token: " + String(e));
+   throw new Error(`Failed to get token: ${String(e)}`);
  }
Or rethrow the original error after augmenting its message:
  if (e instanceof Error) {
+   e.message = `Failed to get token: ${e.message}`;
+   throw e;
  } else {
    throw new Error(`Failed to get token: ${String(e)}`);
  }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (10)
app/api/anthropic.ts (4 hunks)
app/api/common.ts (2 hunks)
app/api/google.ts (4 hunks)
app/client/api.ts (1 hunk)
app/config/server.ts (3 hunks)
app/utils/chat.ts (2 hunks)
app/utils/gtoken.ts (1 hunk)
app/utils/model.ts (1 hunk)
package.json (1 hunk)
test/model-provider.test.ts (2 hunks)
🧰 Additional context used
🪛 Biome
app/api/anthropic.ts
[error] 158-158: Avoid the delete operator which can impact performance.
Unsafe fix: Use an undefined assignment instead.
(lint/performance/noDelete)
app/utils/gtoken.ts
[error] 126-126: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
[error] 153-153: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
🔇 Additional comments (13)
test/model-provider.test.ts (1)
1-1: LGTM: Import statement correctly updated
The import statement has been properly updated to include the new isModelAvailableInServer function.
package.json (2)
36-36
: LGTM! The jose library addition supports Google Cloud authentication.
The addition of the jose library (^5.9.6) is appropriate for implementing JWT-based authentication with Google Cloud Service Accounts, which is required for the Vertex AI integration.
36-36
: Verify jose library compatibility with the project.
Since jose is a security-critical dependency for handling JWTs, let's verify its compatibility with the project's Node.js version and other dependencies.
✅ Verification successful
jose v5.9.6 is compatible with the project
The verification shows that:
- The jose library is used correctly in app/utils/gtoken.ts for JWT operations, using modern ES modules and the recommended APIs (SignJWT, importPKCS8).
- The project's TypeScript configuration targets ES2015+ and uses modern module resolution.
- jose v5.9.6 is properly resolved in yarn.lock without any conflicting versions.
- The implementation follows jose's best practices for JWT signing and key imports.
The library is being used appropriately for its intended purpose of handling Google OAuth tokens, and there are no compatibility issues detected.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential compatibility issues with jose library
# Test 1: Check if there are any direct dependencies on other JWT libraries
echo "Checking for other JWT libraries..."
rg -i "\"jsonwebtoken\":|\"jwt-|\"jose\":" package.json
# Test 2: Check for potential version conflicts in yarn.lock
echo "Checking yarn.lock for jose versions..."
rg "jose@" yarn.lock
# Test 3: Check minimum Node.js version requirement
echo "Checking jose's Node.js version requirement..."
rg "\"node\":" package.json
Length of output: 412
Script:
#!/bin/bash
# Let's gather more specific information about Node.js requirements and usage
# Check package.json for engines field and all its content
echo "Checking package.json engines and related fields..."
rg -A 5 '"engines":' package.json
# Check for actual jose usage in the codebase
echo "Checking jose usage patterns..."
rg -l "from ['|\"]jose['|\"]"
# Look for Node.js version in .nvmrc or similar files
echo "Checking for .nvmrc or similar version files..."
fd -e nvmrc -e node-version
# Check jose's actual implementation to ensure compatibility
echo "Checking jose implementation patterns..."
ast-grep --pattern 'import { $_ } from "jose"'
Length of output: 552
Script:
#!/bin/bash
# Let's check the specific implementation in gtoken.ts and verify Node.js version requirements
# Check the content of gtoken.ts to understand jose usage
echo "Checking jose implementation in gtoken.ts..."
cat app/utils/gtoken.ts
# Check for any TypeScript configuration that might affect compatibility
echo "Checking TypeScript configuration..."
fd -e json -e yaml -e yml -E "node_modules" | grep -i "tsconfig"
cat $(fd -e json -e yaml -e yml -E "node_modules" | grep -i "tsconfig") 2>/dev/null
# Check for any deployment configuration files that might specify Node.js version
echo "Checking deployment configuration files..."
fd -e yaml -e yml -e json -E "node_modules" | grep -i "docker\|deploy\|workflow"
Length of output: 9175
Script:
#!/bin/bash
# Let's check the Docker configuration and Node.js version requirements
# Check Docker configuration for Node.js version
echo "Checking Docker configuration..."
cat docker-compose.yml
# Check for any GitHub workflow files that might specify Node.js version
echo "Checking GitHub workflows..."
fd -e yml -e yaml -p .github/workflows/
# Let's also check package-lock.json or yarn.lock for jose's peer dependencies
echo "Checking jose peer dependencies..."
rg -A 5 "jose@\^5.9.6" yarn.lock
Length of output: 1756
app/utils/model.ts (2)
Line range hint 17-22: Consider enhancing provider handling for Vertex AI integration
The customProvider function currently uses a generic "custom" provider type. For better maintainability and type safety with the new Vertex AI integration, consider:
- Adding specific provider types for Vertex AI
- Including provider-specific validation logic
- Adding documentation for Vertex AI URL format requirements
Consider this enhancement:
+ type ProviderType = "custom" | "vertex_ai_google" | "vertex_ai_anthropic";
+
const customProvider = (providerName: string) => ({
id: providerName.toLowerCase(),
providerName: providerName,
- providerType: "custom",
+ providerType: providerName.toLowerCase().includes("vertex") ?
+ `vertex_ai_${providerName.toLowerCase().split("_")[1]}` as ProviderType :
+ "custom" as ProviderType,
sorted: CustomSeq.next(providerName),
});
Let's check current provider handling:
#!/bin/bash
# Search for provider type usage to understand impact
rg "providerType.*=.*"
203-203: Critical: Fix inverted logic and case sensitivity in the model availability check
The isModelAvailableInServer function has multiple issues that need to be addressed:
- The function returns true only when available is strictly false, which inverts the intended logic
- The case sensitivity issue with providerName prevents matching providers
Apply this fix to correct both issues:
- return modelTable[fullName]?.available === false;
+ const normalizedFullName = `${modelName}@${providerName.toLowerCase()}`;
+ return modelTable[normalizedFullName]?.available !== false;
Let's verify the impact of this case sensitivity issue:
Would you like me to generate comprehensive test cases for this function, especially covering the new Vertex AI integration scenarios?
app/api/common.ts (1)
5-5: LGTM: Import statement is correctly placed and necessary.
The GoogleToken import is required for the new Vertex AI authentication functionality.
app/config/server.ts (2)
155-155: 🛠️ Refactor suggestion
Add validation for Vertex AI configuration.
The configuration looks well-organized, but consider adding validation:
- URL validation:
if (process.env.VERTEX_AI_URL && !process.env.VERTEX_AI_URL.includes('aiplatform.googleapis.com')) {
console.warn('[Server Config] Invalid Vertex AI URL format');
}
- JSON key validation:
if (process.env.GOOGLE_CLOUD_JSON_KEY) {
try {
const parsed = JSON.parse(process.env.GOOGLE_CLOUD_JSON_KEY);
if (!parsed.type || !parsed.project_id || !parsed.private_key) {
console.warn('[Server Config] Invalid Google Cloud service account key format');
}
} catch {
console.warn('[Server Config] Failed to parse Google Cloud service account key');
}
}
Also applies to: 199-202
47-50: Consider enhancing security and documentation for the Vertex AI configuration.
Security considerations:
- Consider supporting file-based configuration for GOOGLE_CLOUD_JSON_KEY to avoid storing sensitive credentials in environment variables.
- Add a note about securing these credentials in production environments.
Documentation improvements:
- Break down the URL template into readable parts:
  Format: https://{location}-aiplatform.googleapis.com/v1/projects/{project}/locations/{location}/models/{model}/versions/{version}:predict
  Example: https://us-central1-aiplatform.googleapis.com/v1/projects/my-project/locations/us-central1/models/chat-bison/versions/001:predict
- Add a note about wrapping GOOGLE_CLOUD_JSON_KEY in single quotes, as mentioned in the PR description.
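Putting the two variables together, an environment file entry might look like the following sketch. The project, location, and model values are placeholders, and the URL follows the format documented above:

```shell
# Placeholder values; substitute your own project, location, and model.
VERTEX_AI_URL=https://us-central1-aiplatform.googleapis.com/v1/projects/my-project/locations/us-central1/models/chat-bison/versions/001:predict
# Wrap the JSON key in single quotes so the shell preserves it verbatim.
GOOGLE_CLOUD_JSON_KEY='{"type":"service_account","project_id":"my-project","client_email":"sa@my-project.iam.gserviceaccount.com","private_key":"-----BEGIN PRIVATE KEY-----\n..."}'
```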
app/client/api.ts (1)
320-327: Ensure access code compatibility with Vertex AI authentication
The fix for sending access codes with non-standard authorization headers is logically sound. However, we should verify that this doesn't interfere with Vertex AI authentication flow.
Let's verify the authentication flow:
✅ Verification successful
The access code handling is compatible with Vertex AI authentication
The verification shows that:
- Vertex AI uses a dedicated authentication flow with Google Cloud tokens, explicitly setting authHeaderName = "Authorization"
- The access code condition in the original snippet correctly excludes cases where authHeader === "Authorization", preventing interference with Vertex AI's auth flow
- The changes maintain proper separation between access code authentication and provider-specific authentication
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any potential auth header conflicts in the codebase
# Look for other places where Authorization headers are set for Vertex AI
rg -l "Authorization.*header" --type ts
# Check for any Vertex AI specific authentication handling
rg "VERTEX_AI" --type ts
Length of output: 477
Script:
#!/bin/bash
# Let's examine the authentication logic more closely
# Check how Vertex AI authentication is implemented
rg -A 5 -B 5 "vertexAIUrl|VERTEX_AI" --type ts
# Look for any provider-specific header logic
ast-grep --pattern 'if ($provider === $_) {
$$$
}'
# Check for access code related logic
rg -A 3 "accessCode" --type ts
Length of output: 7944
app/utils/chat.ts (2)
6-6: LGTM! Type-safe import addition.
The addition of the ChatOptions type import improves type safety and maintains proper separation of concerns.
170-170: Verify the impact of the ChatOptions type change.
The change from any to ChatOptions improves type safety but could be a breaking change. Let's verify all callers are updated.
✅ Verification successful
Let me verify the actual type change by looking at the function definition and its callers.
Let me try a different approach to find the type definition and usage.
Let me search for the ChatOptions type definition with a different approach.
Based on the gathered information, I can now provide the final response:
The ChatOptions type change is safe to proceed.
The change from any to ChatOptions is a type-safe improvement that doesn't introduce breaking changes because:
- The ChatOptions interface is properly defined in app/client/api.ts with all necessary properties
- All callers of the stream function are platform implementations (anthropic, moonshot, glm, google, xai, openai) that already import and use the ChatOptions type
- The type change enforces a consistent structure across all platform implementations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all calls to the stream function
rg -l 'stream\(' | xargs rg -A 5 'stream\('
Length of output: 2592
Script:
#!/bin/bash
# Get the stream function definition and its type
ast-grep --pattern 'export function stream($$$)'
# Get the ChatOptions type definition
rg -A 10 'type ChatOptions'
Length of output: 80
Script:
#!/bin/bash
# Get the content of the chat.ts file to see the full context
cat app/utils/chat.ts
# Search for ChatOptions interface/type in all TypeScript files
rg -t typescript 'interface ChatOptions|type ChatOptions'
Length of output: 10415
Script:
#!/bin/bash
# Search for ChatOptions in the api.ts file since it's imported from there
cat app/client/api.ts
# Also search for files importing ChatOptions
rg 'ChatOptions.*from'
Length of output: 9967
app/api/google.ts (2)
35-35: Correctly updated API key validation for Vertex AI usage
The conditional check now allows requests without an API key when serverConfig.isVertexAI is true. This adjustment correctly accommodates the use of GCloud tokens with Vertex AI, ensuring that API key validation does not block legitimate requests in this context.
79-81: Appropriate adjustment of the request path for Vertex AI
When serverConfig.vertexAIUrl is provided, setting path to an empty string ensures that the full URL is used directly without additional path concatenation. This is appropriate for handling Vertex AI endpoints that require the complete URL.
describe("isModelAvailableInServer", () => {
  test("works when model null", () => {
    const jsonBody = JSON.parse("{}") as { model?: string };
    const result = isModelAvailableInServer(
      "gpt-3.5-turbo@OpenAI",
      jsonBody.model as string,
      "OpenAI",
    );
    expect(result).toBe(false);
  });
});
💡 Codebase verification
🛠️ Refactor suggestion
Based on the implementation findings, I can now provide a final response:
Test suite needs correction due to incorrect function behavior
The current test suite is testing the wrong behavior. The implementation shows that isModelAvailableInServer has a critical issue:
- The function is meant to check if a model is NOT available (returns true when the model is blocked)
- There's a known case sensitivity issue in the implementation (commented in the code)
- The function always returns false due to the case sensitivity mismatch
The test suite should be rewritten to match the actual implementation:
describe("isModelAvailableInServer", () => {
test("should return false when server model is undefined", () => {
const jsonBody = {} as { model?: string };
expect(isModelAvailableInServer("gpt-3.5-turbo@OpenAI", jsonBody.model, "OpenAI")).toBe(false);
});
test("should return false due to case sensitivity mismatch", () => {
// This test documents the current broken behavior where the function always returns false
// due to case sensitivity mismatch between providerName and modelTable keys
expect(isModelAvailableInServer("gpt-3.5-turbo@OpenAI", "gpt-3.5-turbo", "OpenAI")).toBe(false);
});
});
The function itself needs to be fixed before adding more test cases, as it currently has a critical case sensitivity bug that makes it non-functional.
🔗 Analysis chain
Enhance test coverage and improve test clarity
The current test suite for isModelAvailableInServer has several areas for improvement:
- The test name "works when model null" is misleading, as the test uses undefined (from jsonBody.model), not null
- The test coverage is insufficient for a function that's critical for Vertex AI support
- The known case sensitivity issue isn't being tested
Consider expanding the test suite with these cases:
describe("isModelAvailableInServer", () => {
test("should return false when server model is undefined", () => {
const jsonBody = {} as { model?: string };
expect(isModelAvailableInServer("gpt-3.5-turbo@OpenAI", jsonBody.model, "OpenAI")).toBe(false);
});
test("should handle case sensitivity in provider names", () => {
expect(isModelAvailableInServer("gemini-pro@VertexAI", "gemini-pro", "vertexai")).toBe(true);
expect(isModelAvailableInServer("claude-3@VertexAI", "claude-3", "VERTEXAI")).toBe(true);
});
test("should validate Vertex AI model formats", () => {
expect(isModelAvailableInServer("gemini-pro@VertexAI", "gemini-pro", "VertexAI")).toBe(true);
expect(isModelAvailableInServer("claude-3@VertexAI", "claude-3", "VertexAI")).toBe(true);
expect(isModelAvailableInServer("invalid-model@VertexAI", "gemini-pro", "VertexAI")).toBe(false);
});
test("should handle model-provider matching", () => {
expect(isModelAvailableInServer("gemini-pro@VertexAI", "gemini-pro", "OpenAI")).toBe(false);
expect(isModelAvailableInServer("gemini-pro@OpenAI", "gemini-pro", "VertexAI")).toBe(false);
});
});
Let's verify the actual implementation to ensure our test cases align with the function's behavior:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find the implementation of isModelAvailableInServer
ast-grep --pattern 'function isModelAvailableInServer($_) {
$$$
}'
Length of output: 70
Script:
#!/bin/bash
# Let's try a different approach to find the implementation
# First, let's find the file containing the function
rg "isModelAvailableInServer" --type ts -l
# Then let's see the actual implementation with context
rg "isModelAvailableInServer" --type ts -A 10
Length of output: 12018
Script:
#!/bin/bash
# Let's look at the ServiceProvider enum and collectModelTable function
rg "enum ServiceProvider" --type ts -A 20
# And let's see the collectModelTable implementation
rg "function collectModelTable" --type ts -A 10
Length of output: 2085
Script:
#!/bin/bash
# Let's see the actual implementation of isModelAvailableInServer and its dependencies
rg "export function isModelAvailableInServer" -A 10 app/utils/model.ts
# And let's look at the DEFAULT_MODELS constant
rg "DEFAULT_MODELS.*=" --type ts -A 5
Length of output: 1111
export async function getGCloudToken() {
  if (!gTokenClient) {
    if (!serverConfig.googleCloudJsonKey)
      throw new Error("GOOGLE_CLOUD_JSON_KEY is not set");
    const keys = JSON.parse(serverConfig.googleCloudJsonKey);
    gTokenClient = new GoogleToken({
      email: keys.client_email,
      key: keys.private_key,
      scope: ["https://www.googleapis.com/auth/cloud-platform"],
    });
  }
  const credentials = await gTokenClient?.getToken();
  return credentials?.access_token;
}
Add robust error handling and type safety.
The current implementation has several potential issues:
- The JSON.parse call could throw on invalid JSON
- No validation of the parsed JSON structure
- Potential undefined access in gTokenClient?.getToken()
- No explicit handling of token expiration
Consider applying these improvements:
export async function getGCloudToken() {
if (!gTokenClient) {
if (!serverConfig.googleCloudJsonKey)
throw new Error("GOOGLE_CLOUD_JSON_KEY is not set");
- const keys = JSON.parse(serverConfig.googleCloudJsonKey);
+ let keys;
+ try {
+ keys = JSON.parse(serverConfig.googleCloudJsonKey);
+ if (!keys.client_email || !keys.private_key) {
+ throw new Error("Invalid key structure");
+ }
+ } catch (error) {
+ throw new Error("Invalid GOOGLE_CLOUD_JSON_KEY format");
+ }
gTokenClient = new GoogleToken({
email: keys.client_email,
key: keys.private_key,
scope: ["https://www.googleapis.com/auth/cloud-platform"],
});
}
- const credentials = await gTokenClient?.getToken();
- return credentials?.access_token;
+ if (!gTokenClient) {
+ throw new Error("Failed to initialize Google Token client");
+ }
+ const credentials = await gTokenClient.getToken();
+ if (!credentials?.access_token) {
+ throw new Error("Failed to obtain access token");
+ }
+ return credentials.access_token;
}
Also, consider adding JSDoc type definitions for the expected JSON structure:
interface GoogleCloudKey {
client_email: string;
private_key: string;
// add other required fields
}
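On the token-expiration point (and the caching idea raised earlier for app/api/anthropic.ts), one minimal sketch is shown below. It assumes the token response exposes an expires_in field in seconds; the real gtoken client may differ, and fetchToken stands in for the actual gTokenClient.getToken() call:

```typescript
// Sketch: cache the access token and refresh it shortly before expiry.
interface TokenResult {
  access_token: string;
  expires_in: number; // lifetime in seconds (assumed response shape)
}

let cachedToken: string | undefined;
let cachedUntilMs = 0;

export async function getCachedGCloudToken(
  fetchToken: () => Promise<TokenResult>,
  now: () => number = Date.now,
): Promise<string> {
  const current = cachedToken;
  // Reuse the cached token while it is still at least 60s from expiry,
  // so a token never expires mid-request.
  if (current && now() < cachedUntilMs - 60_000) {
    return current;
  }
  const result = await fetchToken();
  cachedToken = result.access_token;
  cachedUntilMs = now() + result.expires_in * 1000;
  return result.access_token;
}
```

Repeated calls within the token lifetime then hit the cache instead of re-signing a JWT and re-fetching a token on every request.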
gTokenClient = new GoogleToken({
  email: keys.client_email,
  key: keys.private_key,
  scope: ["https://www.googleapis.com/auth/cloud-platform"],
Review the requested OAuth scope.
The current scope https://www.googleapis.com/auth/cloud-platform provides broad access to Google Cloud Platform resources. Consider using more specific scopes that align with the minimum permissions required for Vertex AI operations.
For Vertex AI, you might only need:
- https://www.googleapis.com/auth/cloud-platform.read-only (if only calling models)
- https://www.googleapis.com/auth/aiplatform (if using other Vertex AI features)
let authHeaderName = "x-goog-api-key";
let authValue =
  req.headers.get(authHeaderName) ||
  (req.headers.get("Authorization") ?? "").replace("Bearer ", "");
Robust extraction of the token from the 'Authorization' header
The current token extraction might incorrectly handle 'Authorization' headers that do not start with 'Bearer '. To ensure accurate token retrieval, consider refining the extraction logic.
Apply this diff to improve token extraction:
let authValue =
req.headers.get(authHeaderName) ||
- (req.headers.get("Authorization") ?? "").replace("Bearer ", "");
+ (req.headers.get("Authorization") ?? "").replace(/^Bearer\s+/i, "");
This change uses a regular expression to remove the 'Bearer ' prefix only if it appears at the start of the string, regardless of case and handling any extra whitespace. This prevents unintended replacements elsewhere in the header value.
const fetchUrl = serverConfig.vertexAIUrl
  ? serverConfig.vertexAIUrl
  : `${baseUrl}${path}${
      req?.nextUrl?.searchParams?.get("alt") === "sse" ? "?alt=sse" : ""
    }`;
🛠️ Refactor suggestion
Append query parameters to 'vertexAIUrl' when necessary
When using serverConfig.vertexAIUrl, the fetchUrl does not include the query parameter alt=sse, which may be required for certain functionalities like streaming responses. Adjusting the code to append this parameter when present in the request ensures consistent behavior.
Apply this diff to include the query parameter:
const fetchUrl = serverConfig.vertexAIUrl
? serverConfig.vertexAIUrl +
+ (req?.nextUrl?.searchParams?.get("alt") === "sse" ? "?alt=sse" : "")
: `${baseUrl}${path}${
req?.nextUrl?.searchParams?.get("alt") === "sse" ? "?alt=sse" : ""
}`;
This modification appends `?alt=sse` to `vertexAIUrl` when the request's search parameters include `alt=sse`, matching the behavior of the non-Vertex AI URL handling.
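Lifted into a standalone helper (the `serverConfig` and `req` shapes below are simplified stand-ins, not the project's real types), the URL construction after the fix can be exercised in isolation:

```typescript
// Build the fetch URL, appending ?alt=sse uniformly whether or not a
// Vertex AI URL is configured.
function buildFetchUrl(
  vertexAIUrl: string | undefined,
  baseUrl: string,
  path: string,
  alt: string | null,
): string {
  const sse = alt === "sse" ? "?alt=sse" : "";
  return vertexAIUrl ? vertexAIUrl + sse : `${baseUrl}${path}${sse}`;
}

console.log(buildFetchUrl("https://gw.example/vertex", "", "", "sse"));
// "https://gw.example/vertex?alt=sse"
console.log(buildFetchUrl(undefined, "https://api.example", "/v1/models", null));
// "https://api.example/v1/models"
```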
iss?: string;
sub?: string;
scope?: string | string[];
additionalClaims?: Record<string | number | symbol, never>;
Incorrect type for 'additionalClaims' property
The `additionalClaims` property is currently typed as `Record<string | number | symbol, never>`, which implies that the object cannot have any properties, since `never` is an uninhabited type. This contradicts the intended use of `additionalClaims` to include additional JWT claims.
Consider changing the type to allow properties with values of type `unknown`, or a more specific type that matches the expected claim values.
Apply this diff to fix the type definitions:
- additionalClaims?: Record<string | number | symbol, never>;
+ additionalClaims?: Record<string | number | symbol, unknown>;
Also applies to: 73-73
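A minimal sketch of the distinction (the claim names below are illustrative, not from the PR):

```typescript
type ClaimsNever = Record<string, never>;
type ClaimsUnknown = Record<string, unknown>;

// With `never` as the value type, no property can ever be assigned;
// the empty object is the only value that type-checks:
const empty: ClaimsNever = {};

// With `unknown`, arbitrary claim values are allowed:
const claims: ClaimsUnknown = { aud: "https://example.com/api", admin: true };
console.log(Object.keys(claims).length); // 2
```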
const url = GOOGLE_REVOKE_TOKEN_URL + this.accessToken;
// await axios.get(url, { timeout: 10000 });
// uncomment below if prefer using fetch, but fetch will not follow HTTPS_PROXY
await fetch(url, { method: "GET" });
Add error handling for the fetch request in '#revokeTokenAsync'
In the `#revokeTokenAsync` method, the fetch request doesn't check the response for errors, which can cause silent failures if the revoke request fails.
Modify the code to handle the response:
- await fetch(url, { method: "GET" });
+ const res = await fetch(url, { method: "GET" });
+ if (!res.ok) {
+ throw new Error(`Failed to revoke token with status ${res.status} - ${res.statusText}`);
+ }
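The same pattern, lifted into a standalone helper so the failure path can be exercised without real network calls (the types below are simplified stand-ins for the real `fetch`, not the project's code):

```typescript
type MinimalResponse = { ok: boolean; status: number; statusText: string };
type FetchLike = (
  url: string,
  init?: { method: string },
) => Promise<MinimalResponse>;

// Surface a failed revoke instead of silently ignoring it.
async function revokeToken(fetchImpl: FetchLike, url: string): Promise<void> {
  const res = await fetchImpl(url, { method: "GET" });
  if (!res.ok) {
    throw new Error(
      `Failed to revoke token with status ${res.status} - ${res.statusText}`,
    );
  }
}
```

A fake `fetchImpl` returning `{ ok: false, status: 400, statusText: "Bad Request" }` now causes a rejection, where the original code resolved silently.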
}

try {
  return await (this.#inFlightRequest = this.#getTokenAsyncInner(opts));
🛠️ Refactor suggestion
Avoid assignment within an expression
Assigning `this.#inFlightRequest` inside the `await` expression can be confusing and may lead to unintended side effects. It's clearer to separate the assignment from the `await` for better readability and maintainability.
Apply this diff to separate the assignment:
- return await (this.#inFlightRequest = this.#getTokenAsyncInner(opts));
+ this.#inFlightRequest = this.#getTokenAsyncInner(opts);
+ return await this.#inFlightRequest;
🧰 Tools
🪛 Biome
[error] 153-153: The assignment should not be in an expression.
The use of assignments in expressions is confusing.
Expressions are often considered as side-effect free.
(lint/suspicious/noAssignInExpressions)
getToken(
  callback?: GetTokenCallback | GetTokenOptions,
  opts = {} as GetTokenOptions,
): void | Promise<TokenData | undefined> {
🛠️ Refactor suggestion
Avoid using 'void' in union types
Using `void` in a union type can be confusing because `void` represents the absence of a value. In the return type of the `getToken` method, replace `void` with `undefined` to make the type clearer and to comply with TypeScript best practices.
Apply this diff to adjust the return type:
- ): void | Promise<TokenData | undefined> {
+ ): Promise<TokenData | undefined> | undefined {
Alternatively, consider refactoring the method overloads to separate return types explicitly.
🧰 Tools
🪛 Biome
[error] 126-126: void is confusing inside a union type.
Unsafe fix: Use undefined instead.
(lint/suspicious/noConfusingVoidType)
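One way to sidestep the `void`-in-union shape entirely is to keep the callback and promise styles on separate overloads. A minimal sketch (the types here are simplified placeholders, not the PR's real definitions):

```typescript
type TokenData = { token: string };
type GetTokenCallback = (err: Error | null, data?: TokenData) => void;

// Each public overload has a single, unambiguous return type; only the
// private implementation signature carries the union, using `undefined`
// rather than `void`.
function getToken(callback: GetTokenCallback): void;
function getToken(): Promise<TokenData | undefined>;
function getToken(
  callback?: GetTokenCallback,
): Promise<TokenData | undefined> | undefined {
  const data: TokenData = { token: "abc" };
  if (callback) {
    callback(null, data); // callback style: no meaningful return value
    return undefined;
  }
  return Promise.resolve(data); // promise style
}
```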
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
app/layout.tsx (1)
69-69: Consider making the Analytics component conditional.
Unlike the other analytics components, which are conditionally rendered based on server configuration, the Analytics component is rendered unconditionally. This could lead to unnecessary script loading if Vercel analytics isn't needed.
Consider wrapping it in a condition similar to the other analytics components:
- <Analytics />
+ {serverConfig?.isVercel && (
+   <Analytics />
+ )}
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
`yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (2)
- `app/layout.tsx` (2 hunks)
- `package.json` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- `package.json`
🔇 Additional comments (2)
app/layout.tsx (2)
10-10: LGTM!
The import statement follows the existing pattern and is properly placed with other third-party imports.
Line range hint 1-73: Verify PR scope alignment.
The changes in this file appear to be focused on analytics integration, which seems unrelated to the PR's stated objective of implementing Vertex AI support for Anthropic and Google models. Consider splitting these changes into a separate PR focused on analytics improvements.
Let's verify if there are any connections to Vertex AI implementation:
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
.env.template (1)
71-74: Enhance documentation for VERTEX_AI_URL.
While the basic description is provided, it would be helpful to include:
- Example URL formats for both Google and Anthropic models
- The requirement to append `?alt=sse` for Gemini models (as mentioned in PR objectives)
Apply this diff to improve the documentation:
 # (optional)
 # Default: Empty
-# Google Cloud Vertex AI full url, set if you want to use Vertex AI.
+# Google Cloud Vertex AI full url for accessing Google or Anthropic models.
+# Examples:
+# - For Google models: https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/locations/us-central1/publishers/google/models/gemini-pro:streamGenerateContent?alt=sse
+# - For Anthropic models: https://us-central1-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/locations/us-central1/publishers/anthropic/models/claude-3-sonnet:predict
 VERTEX_AI_URL=
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- `.env.template` (1 hunks)
🔇 Additional comments (2)
.env.template (2)
69-70: LGTM: Proper formatting with newline.
The added newline maintains consistent spacing between configuration sections.
71-79: Verify the integration with existing Google API configurations.
The file already contains `GOOGLE_API_KEY` and `GOOGLE_URL` configurations. We should ensure users understand when to use which configuration.
Let's verify if there are any other Google-related configurations in the codebase:
Consider adding a comment block explaining the relationship between different Google-related configurations:
+# Google Cloud Integration Options:
+# 1. For Gemini Pro API direct access: Use GOOGLE_API_KEY and GOOGLE_URL
+# 2. For Vertex AI access (supports both Google and Anthropic models):
+# Use VERTEX_AI_URL and GOOGLE_CLOUD_JSON_KEY
+
# (optional)
# Default: Empty
# Google Cloud Vertex AI full url...
✅ Verification successful
Let me gather more information about the actual usage of these configurations to better understand their relationships.
Based on the gathered information, I can now provide a final response:
Add clarifying comments to distinguish between Google API configurations
The codebase shows two distinct Google integration paths that should be clearly documented:
- Direct Gemini Pro API access using
GOOGLE_API_KEY
andGOOGLE_URL
- Vertex AI access using
VERTEX_AI_URL
andGOOGLE_CLOUD_JSON_KEY
(supports both Google and Anthropic models)
+# Google Cloud Integration Options:
+# 1. For Gemini Pro API direct access:
+# - Use GOOGLE_API_KEY and GOOGLE_URL
+# - Provides access to Gemini models (gemini-pro, gemini-pro-vision, etc.)
+# - Simpler setup with API key authentication
+#
+# 2. For Vertex AI access:
+# - Use VERTEX_AI_URL and GOOGLE_CLOUD_JSON_KEY
+# - Supports both Google and Anthropic models through Google Cloud
+# - Requires Google Cloud service account and Vertex AI setup
+
# (optional)
# Default: Empty
# Google Cloud Vertex AI full url...
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for Google-related environment variables and their usage
rg -i "google.*_(api|url|key)" --type ts
Length of output: 1402
Script:
#!/bin/bash
# Search for Vertex AI related code
rg -i "vertex.*ai|vertexai" --type ts -A 5
# Search for Gemini related code
rg -i "gemini" --type ts -A 5
Length of output: 19375
# Default: Empty
# Text content of Google Cloud service account JSON key, set if you want to use Vertex AI.
GOOGLE_CLOUD_JSON_KEY=''
Add security warning and format guidance for GOOGLE_CLOUD_JSON_KEY.
The service account key requires specific formatting and contains sensitive information. This should be clearly documented.
Apply this diff to add security guidance:
# (optional)
# Default: Empty
-# Text content of Google Cloud service account JSON key, set if you want to use Vertex AI.
+# Text content of Google Cloud service account JSON key for Vertex AI authentication.
+# SECURITY WARNING: This is sensitive credential data. Ensure it's properly secured and never commit it to version control.
+# NOTE: The content must be wrapped in single quotes due to multi-line strings and double quotes.
+# Example: GOOGLE_CLOUD_JSON_KEY='{"type": "service_account", ... }'
GOOGLE_CLOUD_JSON_KEY=''
I am interested in this functionality. Any idea when the PR will be completed?
💻 变更类型 | Change Type
🔀 变更说明 | Description of Change
Add support for Google and Anthropic models via the Google Cloud Vertex AI platform.
📝 补充信息 | Additional Information
Setup guide:
- Set the `VERTEX_AI_URL` environment variable to the Vertex AI endpoint address (it can also be proxied through Cloudflare AI Gateway); for Gemini models, append `?alt=sse` to the endpoint.
- Set the `GOOGLE_CLOUD_JSON_KEY` environment variable to the content of the JSON key issued for the service account in Google Cloud, wrapped in single quotes (`'`).
- When choosing a model, if the `VERTEX_AI_URL` endpoint targets a Google model, select any Google model; if it targets a Claude model, select any Anthropic model.
- Use the `CUSTOM_MODELS` and `DEFAULT_MODEL` environment variables together to specify the default model.
Sample `.env.local` environment variable definitions
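The sample `.env.local` itself did not survive extraction; a hypothetical set of definitions matching the steps above (project ID, region, and model names are placeholders, not values from the PR) might look like:

```shell
# Vertex AI endpoint; for Gemini models the ?alt=sse suffix enables streaming.
VERTEX_AI_URL=https://us-central1-aiplatform.googleapis.com/v1/projects/my-project/locations/us-central1/publishers/google/models/gemini-1.5-pro:streamGenerateContent?alt=sse

# Service-account JSON key, wrapped in single quotes because the content
# spans multiple lines and contains double quotes.
GOOGLE_CLOUD_JSON_KEY='{"type": "service_account", "project_id": "my-project", ...}'

# Pin a matching model as the default in the UI.
CUSTOM_MODELS=+gemini-1.5-pro
DEFAULT_MODEL=gemini-1.5-pro
```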
Summary by CodeRabbit
Release Notes
- New Features
- Bug Fixes
- Documentation
  - Updated `.env.template` to include new optional configuration variables for Google Cloud services.
- Tests
  - Updated tests for the `isModelAvailableInServer` function to ensure accurate behavior.
- Chores
  - Updated `package.json` to include a new dependency for improved functionality.