Introducing the ability to remove specified Providers. #5992
base: main
Conversation
@QAbot-zh is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

The pull request introduces comprehensive updates to the NextChat application's README documentation and model management utility. The changes focus on expanding environment variable configurations, providing more detailed instructions for model deployment, and enhancing the flexibility of model selection. The README now includes extensive documentation for various API keys, model customization options, and deployment settings across multiple providers such as OpenAI, Azure, Google, Anthropic, and others.
Since this function is being changed anyway, could you add a unit test for it if convenient?
Actionable comments posted: 0
🧹 Nitpick comments (2)
test/model.test.ts (2)
79-79: Correct the minor typo in the test description.

The word "shoule" should be changed to "should" for accuracy.

```diff
- test('all models shoule be available', () => {
+ test('all models should be available', () => {
```
56-56: Remove the empty array slot to avoid sparse arrays.

Line 56 has a trailing comma that introduces an empty slot in `mockModels`. Although JavaScript allows sparse arrays, they can lead to unintended behavior. Consider removing the trailing comma, or intentionally using a placeholder value (`undefined` or similar).

```diff
- ,
+ // Remove or replace with a placeholder if needed
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/model.test.ts
(1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
test/model.test.ts
[error] 5-77: This array contains an empty slot.
Unsafe fix: Replace hole with undefined
(lint/suspicious/noSparseArray)
A unit test for the function has been added.

Some tests failed.

Your build has completed!

The model name was copied incorrectly and has been corrected.
Actionable comments posted: 1
🧹 Nitpick comments (2)
test/model.test.ts (2)
78-78: Fix typo in test description.

Change "shoule" to "should" in the test description.

```diff
- test('all models shoule be available', () => {
+ test('should have all models available', () => {
```
129-141: Update test description and add validation for invalid patterns.

- The test description doesn't match the implementation, as it also includes an Azure model (`gpt-4o@azure`).
- Consider adding tests for invalid filter patterns.

```diff
- test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
+ test('should exclude all models except OpenAI and specific Azure models when custom is "-all, +*openai, gpt-4o@azure"', () => {
```

Add a new test for invalid patterns:

```typescript
test('should handle invalid filter patterns gracefully', () => {
  const invalidPatterns = [
    '*invalid*pattern',
    'missing-operator*azure',
    '-incomplete+',
  ];
  invalidPatterns.forEach(pattern => {
    const result = collectModelTable(mockModels, pattern);
    // All models should remain available with invalid patterns
    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    // ... add assertions for other models
  });
});
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/model.test.ts
(1 hunks)
🔇 Additional comments (1)
test/model.test.ts (1)
4-142: Well-structured test suite with good coverage.
The test suite effectively validates the provider filtering functionality with comprehensive coverage of different scenarios. It includes:
- Default behavior
- Complete exclusion
- Provider-specific exclusion
- Multiple provider exclusion
- Complex filter combinations
This addresses the PR objectives and provides good test coverage for the core functionality.
test/model.test.ts (outdated)

```typescript
    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@claude'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });
```
Fix inconsistencies between mock data and test assertions
The model identifiers in the test assertions don't match the mock data:
- Mock data has `gpt-3.5-turbo`, but the assertion uses `gpt-3.5@azure`.
- Mock data has `claude-3-haiku-20240307@anthropic`, but the assertion uses `claude-3-haiku-20240307@claude`.

Apply this fix to ensure consistency:

```diff
- expect(result['gpt-3.5@azure'].available).toBe(true);
+ expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
- expect(result['claude-3-haiku-20240307@claude'].available).toBe(true);
+ expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });
```
Actionable comments posted: 2
🧹 Nitpick comments (3)
test/model.test.ts (3)
1-76: Consider improving mock data organization and type safety.

The mock data setup is comprehensive, but consider these improvements:

- Move the `mockModels` array to a separate test fixture file for reusability across test files.
- Replace `as LLMModelProvider` type assertions with properly typed object creation:

```diff
- } as LLMModelProvider,
+ }: LLMModelProvider = {
+   id: 'openai',
+   providerName: 'OpenAI',
+   providerType: 'openai',
+ },
```
78-89: Fix typo in test name.

There's a spelling error in the test name: "shoule" should be "should".

```diff
- test('all models shoule be available', () => {
+ test('all models should be available', () => {
```
103-127: Consider using test.each for provider filter tests.

These provider-specific filter tests follow a similar pattern. Consider using `test.each` to reduce code duplication and make it easier to add more provider filter test cases:

```typescript
test.each([
  {
    name: 'should exclude all Azure models',
    filter: '-*azure',
    expected: {
      'gpt-3.5-turbo@openai': true,
      'gpt-3.5-turbo@azure': false,
      // ... other expectations
    }
  },
  {
    name: 'should exclude Google and XAI models',
    filter: '-*google,-*xai',
    expected: {
      'gpt-3.5-turbo@openai': true,
      'gemini-pro@google': false,
      // ... other expectations
    }
  }
])('$name when custom is "$filter"', ({ filter, expected }) => {
  const result = collectModelTable(mockModels, filter);
  Object.entries(expected).forEach(([model, available]) => {
    expect(result[model].available).toBe(available);
  });
});
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/model.test.ts
(1 hunks)
🔇 Additional comments (1)
test/model.test.ts (1)
90-101: LGTM! Comprehensive test for the "-all" filter.

The test thoroughly verifies that all models are disabled when using the "-all" filter.
test/model.test.ts (outdated)

```typescript
      } as LLMModelProvider,
      sorted: 2,
    },
    {
      name: 'gemini-pro',
      available: true,
      provider: {
        id: 'google',
        providerName: 'Google',
        providerType: 'google',
      } as LLMModelProvider,
      sorted: 3,
    },
    {
      name: 'claude-3-haiku-20240307',
      available: true,
      provider: {
        id: 'anthropic',
        providerName: 'Anthropic',
        providerType: 'anthropic',
      } as LLMModelProvider,
      sorted: 4,
    },
    {
      name: 'grok-beta',
      available: true,
      provider: {
        id: 'xai',
        providerName: 'XAI',
        providerType: 'xai',
      } as LLMModelProvider,
      sorted: 11,
    },
  ];

  test('all models shoule be available', () => {
    const customModels = '';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });

  test('should exclude all models when custom is "-all"', () => {
    const customModels = '-all';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(false);
    expect(result['gpt-4@openai'].available).toBe(false);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
    expect(result['grok-beta@xai'].available).toBe(false);
  });

  test('should exclude all Azure models when custom is "-*azure"', () => {
    const customModels = '-*azure';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });

  test('should exclude Google and XAI models when custom is "-*google,-*xai"', () => {
    const customModels = '-*google,-*xai';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(false);
  });

  test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
    const customModels = '-all, +*openai,gpt-4o@azure';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
    expect(result['grok-beta@xai'].available).toBe(false);
    expect(result['gpt-4o@azure'].available).toBe(true);
  });
});
```
🛠️ Refactor suggestion
Add test cases for error scenarios.
The test suite thoroughly covers the happy path but lacks tests for error cases. Consider adding tests for:
- Invalid filter syntax (e.g., "invalid-filter", "++model", "--provider")
- Empty model list
- Undefined/null inputs
Example test cases to add:

```typescript
test('should handle invalid filter syntax gracefully', () => {
  const invalidFilters = ['++model', '--provider', 'invalid-filter'];
  invalidFilters.forEach(filter => {
    expect(() => collectModelTable(mockModels, filter)).not.toThrow();
  });
});

test('should handle empty model list', () => {
  const result = collectModelTable([], '-all');
  expect(Object.keys(result).length).toBe(0);
});

test('should handle undefined/null inputs', () => {
  expect(() => collectModelTable(undefined, '-all')).not.toThrow();
  expect(() => collectModelTable(mockModels, undefined)).not.toThrow();
});
```
test/model.test.ts (outdated)

```typescript
  test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
    const customModels = '-all, +*openai,gpt-4o@azure';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
    expect(result['grok-beta@xai'].available).toBe(false);
    expect(result['gpt-4o@azure'].available).toBe(true);
  });
```
Fix inconsistency with non-existent model in test.
The test asserts the availability of 'gpt-4o@azure' (line 140) which isn't defined in the mockModels array. This could lead to false positives as the test is checking behavior for a non-existent model.
Either:

- Add the model to the `mockModels` array, or
- Remove the non-existent model from both the filter string and the assertions:

```diff
- const customModels = '-all, +*openai,gpt-4o@azure';
+ const customModels = '-all, +*openai';
  // ... other assertions ...
- expect(result['gpt-4o@azure'].available).toBe(true);
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
  test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
    const customModels = '-all, +*openai';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
    expect(result['grok-beta@xai'].available).toBe(false);
  });
```
This tests a model added via custom_models, so it is reasonable for it not to be in mockModels.
@QAbot-zh, understood. Your test case specifically verifies that a model added via `customModels` (such as `gpt-4o@azure`) is included in the result even though it does not exist in `mockModels`, which is indeed a reasonable test scenario. If the intended business logic is that custom models do not need to be declared in the `mockModels` list beforehand, the current test implementation is fine.

If you want to make the intent clearer going forward, you could note in the test name or a comment that it verifies the visibility of new models introduced via custom input, so that future maintainers can quickly understand the test's purpose.
✏️ Learnings added
Learnt from: QAbot-zh
PR: ChatGPTNextWeb/ChatGPT-Next-Web#5992
File: test/model.test.ts:129-141
Timestamp: 2024-12-28T15:48:43.192Z
Learning: The test logic allows adding custom models not present in the mockModels array by using the customModels filter. This is intentional behavior to verify that customModels can introduce new models beyond the predefined list.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
💻 Change Type

🔀 Description of Change

Resolves a long-standing pain point: supports removing unwanted providers via the `-` directive, in particular the confusion caused by the OpenAI and Azure model lists always appearing together.
#5988
#5050
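For readers skimming the thread, the filtering semantics exercised by the tests above (`-all`, `-*provider`, `+*provider`, and bare `model@provider` entries) can be sketched as follows. This is a simplified, hypothetical re-implementation for illustration only — `applyCustomModels` is an invented name, and the PR's real logic lives in the project's `collectModelTable` utility, which may differ in details such as default providers and pattern validation:

```typescript
type ModelEntry = { name: string; provider: string; available: boolean };

// Hypothetical sketch: "-all" disables everything, "-*azure" disables every
// model under the "azure" provider, "+*openai" re-enables the OpenAI ones,
// and a bare "gpt-4o@azure" entry enables (or introduces) that exact model.
function applyCustomModels(
  models: ModelEntry[],
  custom: string,
): Record<string, ModelEntry> {
  const table: Record<string, ModelEntry> = {};
  for (const m of models) table[`${m.name}@${m.provider}`] = { ...m };

  const rules = custom.split(",").map((s) => s.trim()).filter(Boolean);
  for (const rule of rules) {
    const enable = !rule.startsWith("-");
    const body = rule.replace(/^[+-]/, "");
    if (body === "all") {
      for (const key of Object.keys(table)) table[key].available = enable;
    } else if (body.startsWith("*")) {
      const provider = body.slice(1);
      for (const key of Object.keys(table)) {
        if (table[key].provider === provider) table[key].available = enable;
      }
    } else {
      // "name@provider" — may introduce a model absent from the input list,
      // which is exactly the behavior the gpt-4o@azure test case relies on.
      const [name, provider = "openai"] = body.split("@");
      table[`${name}@${provider}`] = { name, provider, available: enable };
    }
  }
  return table;
}
```

With this sketch, `applyCustomModels(models, '-all, +*openai, gpt-4o@azure')` disables everything, re-enables the OpenAI models, and adds `gpt-4o@azure` even if it was absent from `models` — mirroring the behavior the test at lines 129-141 asserts.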
📝 Additional Information
Summary by CodeRabbit

- Documentation
- New Features
- Tests