Update openai.ts #5757
Conversation
@YINGXINGHUA is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
**Walkthrough:** The changes in this pull request focus on `app/client/platforms/openai.ts`.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
app/client/platforms/openai.ts (1)
Line 232: **Consider extracting magic numbers and improving documentation.**

While the max_tokens logic is functional, consider these improvements:
- Extract the magic numbers (1024) to named constants
- Replace the informal comment with proper technical documentation explaining the rationale
```diff
+ // Minimum token limits for different model types
+ const MIN_TOKENS = {
+   DEFAULT: 1024,
+   VISION: 4000
+ } as const;
+ // ...
- max_tokens: Math.max(modelConfig.max_tokens, 1024),
+ max_tokens: Math.max(modelConfig.max_tokens, MIN_TOKENS.DEFAULT),
- // Please do not ask me why not send max_tokens, no reason, this param is just shit, I dont want to explain anymore.
+ // Note: Enforcing minimum token limit to ensure sufficient response length
```
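The suggested refactor can be sketched as a small standalone snippet. Note that `MIN_TOKENS`, its `VISION` value, and the `clampMaxTokens` helper are illustrative names assumed for this sketch, not the actual NextChat code:

```typescript
// Minimum token floors for different model types (values are the ones
// suggested in the review; VISION is an assumed example).
const MIN_TOKENS = {
  DEFAULT: 1024,
  VISION: 4000,
} as const;

// Clamp a configured max_tokens to a model-type-aware floor, replacing
// the inline magic number 1024 with a named constant.
function clampMaxTokens(configured: number, isVisionModel: boolean): number {
  const floor = isVisionModel ? MIN_TOKENS.VISION : MIN_TOKENS.DEFAULT;
  return Math.max(configured, floor);
}
```

For example, `clampMaxTokens(512, false)` returns `1024`, while a configured value above the floor, such as `clampMaxTokens(8000, true)`, passes through unchanged.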
📜 Review details

- Configuration used: CodeRabbit UI
- Review profile: CHILL
- Files selected for processing (1): `app/client/platforms/openai.ts` (1 hunks)
🔇 Additional comments (1)
app/client/platforms/openai.ts (1)
Line range hint 479-481: **LGTM: Improved error handling with proper status checks.**
The error handling improvements are well implemented:
- Specific handling for unauthorized access (401)
- Proper validation of both API responses
- Localized error messages
Also applies to: 483-485
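The pattern the review praises, specific handling for a 401 status plus validation of the response before use, can be sketched as follows. The `describeError` helper and its message strings are hypothetical, standing in for NextChat's localized error messages:

```typescript
// Minimal shape of the fields this sketch reads from a fetch-style response.
interface ApiResult {
  ok: boolean;
  status: number;
}

// Return null on success, a user-facing message otherwise, with a
// dedicated branch for unauthorized access (401).
function describeError(res: ApiResult): string | null {
  if (res.ok) return null;
  if (res.status === 401) {
    // A real implementation would look up a localized message here.
    return "Unauthorized: please check your API key.";
  }
  return `Request failed with status ${res.status}`;
}
```

Checking `res.ok` before touching the body, and singling out 401, keeps authentication problems distinguishable from generic server errors.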
This was deliberately handled this way in a previous version. We don't plan to change it for now.
💻 变更类型 | Change Type
🔀 变更说明 | Description of Change
📝 补充信息 | Additional Information
Summary by CodeRabbit
New Features
Bug Fixes