Introducing the ability to remove specified Providers. #5992

Open

QAbot-zh wants to merge 6 commits into main

Conversation

@QAbot-zh commented Dec 28, 2024

💻 Change Type

  • feat
  • fix
  • refactor
  • perf
  • style
  • test
  • docs
  • ci
  • chore
  • build

🔀 Description of Change

Solves a long-standing pain point: unneeded providers can now be removed with the "-" directive, especially addressing the trouble caused by the OpenAI and Azure model lists coexisting.

#5988
#5050
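
To make the new directive concrete, here is a hedged usage sketch based on the collectModelTable signature exercised by this PR's tests. The import path and the simplified model shapes are assumptions; in a deployment the same filter strings would go into the CUSTOM_MODELS environment variable documented in the README.

import { collectModelTable } from '../app/utils/model';

// Minimal model entries mirroring the shapes used in test/model.test.ts
// (types simplified; the real code uses the LLMModelProvider type).
const models = [
  {
    name: 'gpt-4',
    available: true,
    provider: { id: 'openai', providerName: 'OpenAI', providerType: 'openai' },
    sorted: 1,
  },
  {
    name: 'gpt-4',
    available: true,
    provider: { id: 'azure', providerName: 'Azure', providerType: 'azure' },
    sorted: 2,
  },
];

// '-*azure' removes every Azure entry while leaving other providers intact;
// the deployment-time equivalent would be CUSTOM_MODELS="-*azure".
const table = collectModelTable(models, '-*azure');
// table['gpt-4@openai'].available === true
// table['gpt-4@azure'].available === false

With '-all, +*openai' the table would instead disable everything and then re-enable only the OpenAI entries, which is the pattern the tests in this PR exercise.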

📝 Additional Information

Summary by CodeRabbit

  • Documentation

    • Updated README with comprehensive environment variable configurations.
    • Enhanced documentation for model management and deployment options.
    • Clarified instructions for configuring access and model settings.
    • Updated "What's New" section to reflect the latest features and security measures.
  • New Features

    • Expanded support for configuring AI models from various providers.
    • Added flexibility in model selection and customization.
    • Introduced options to disable specific models and customize deployment settings.
  • Tests

    • Introduced a new test suite for verifying model availability scenarios.
    • Validated function behavior under various model availability scenarios influenced by custom model filters.


vercel bot commented Dec 28, 2024

@QAbot-zh is attempting to deploy a commit to the NextChat Team on Vercel.

A member of the Team first needs to authorize it.

Contributor

coderabbitai bot commented Dec 28, 2024

Warning

Rate limit exceeded

@QAbot-zh has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 14 minutes and 28 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 2db4caa and 3b5b496.

📒 Files selected for processing (1)
  • test/model.test.ts (1 hunks)

Walkthrough

The pull request introduces comprehensive updates to the NextChat application's README documentation and model management utility. The changes focus on expanding environment variable configurations, providing more detailed instructions for model deployment, and enhancing the flexibility of model selection. The README now includes extensive documentation for various API keys, model customization options, and deployment settings across multiple providers like OpenAI, Azure, Google, Anthropic, and others.

Changes

File                  Change Summary
README.md             Expanded environment variable documentation; added detailed configuration options for multiple AI providers; updated model management settings; refined access password and deployment instructions
app/utils/model.ts    Enhanced the collectModelTable function; added conditional logic for enabling/disabling models by provider (sketched below)
test/model.test.ts    Introduced a new test suite for collectModelTable; added tests for various model availability scenarios based on custom filters
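
To make the app/utils/model.ts row concrete, the following is a minimal sketch, not the actual implementation, of how a provider-wide toggle can be parsed; the helper name, types, and parsing order are assumptions inferred from the filter strings exercised in test/model.test.ts.

type LLMModelProvider = {
  id: string;
  providerName: string;
  providerType: string;
};

type LLMModel = {
  name: string;
  available: boolean;
  provider: LLMModelProvider;
  sorted: number;
};

// Sketch only: parses entries such as '-all', '-*azure', '+*openai',
// and 'gpt-4o@azure'. The real collectModelTable also supports
// display-name aliases and other cases not shown here.
function collectModelTableSketch(
  models: readonly LLMModel[],
  customModels: string,
): Record<string, { available: boolean }> {
  const table: Record<string, { available: boolean }> = {};
  for (const m of models) {
    table[`${m.name}@${m.provider.id}`] = { available: m.available };
  }

  for (const raw of customModels.split(',')) {
    const item = raw.trim();
    if (!item) continue;
    const enable = !item.startsWith('-');
    const body = item.replace(/^[+-]/, '');

    if (body === 'all') {
      // '-all' / '+all': flip every model at once.
      for (const key of Object.keys(table)) table[key].available = enable;
    } else if (body.startsWith('*')) {
      // '-*azure' / '+*openai': the new provider-wide toggle.
      const providerId = body.slice(1);
      for (const key of Object.keys(table)) {
        if (key.endsWith(`@${providerId}`)) table[key].available = enable;
      }
    } else if (body.includes('@')) {
      // 'gpt-4o@azure': target one model, creating the entry if unknown,
      // which is how the tests introduce models absent from mockModels.
      table[body] = { available: enable };
    }
  }
  return table;
}

Reading entries left to right means a later '+*openai' can re-enable models that an earlier '-all' disabled, matching the expectations in the test suite quoted later in this thread.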

Possibly related PRs

Suggested labels

enhancement, planned, help wanted

Suggested reviewers

  • Dogtiti

Poem

🐰 Hop, hop, config's here to stay,
Models dance in a configurable array,
APIs lined up, secrets tucked tight,
NextChat's power shines ever so bright!
Configuration magic, a rabbit's delight! 🌟



@Dogtiti
Member

Dogtiti commented Dec 28, 2024

Since this area is being changed anyway, could you add a unit test for this function if convenient?
Since this core logic was never covered by unit tests in the first place, later changes to it are better backed by unit tests, which also reduces the manual testing workload.


Contributor

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
test/model.test.ts (2)

79-79: Correct the minor typo in the test description.

The word "shoule" should be changed to "should" for accuracy.

-  test('all models shoule be available', () => {
+  test('all models should be available', () => {

56-56: Remove the empty array slot to avoid sparse arrays.

Line 56 has a trailing comma that introduces an empty slot in mockModels. Although JavaScript allows sparse arrays, it can lead to unintended behavior. Consider removing the trailing comma or using a placeholder value (undefined or similar) intentionally.

-    ,
+    // Remove or replace with a placeholder if needed
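
For background on why the hole matters, a small self-contained illustration (plain TypeScript, unrelated to the PR's files): iteration methods skip holes, so a sparse slot behaves differently from an explicit undefined.

const sparse = [1, , 3]; // a lone comma creates a hole at index 1
const dense = [1, undefined, 3];

sparse.forEach((v) => console.log(v)); // logs 1 then 3; the hole is skipped
dense.forEach((v) => console.log(v)); // logs 1, undefined, 3

console.log(1 in sparse); // false: no property exists at index 1
console.log(1 in dense); // true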
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b09f458 and 4d6c82d.

📒 Files selected for processing (1)
  • test/model.test.ts (1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
test/model.test.ts

[error] 5-77: This array contains an empty slot.

Unsafe fix: Replace hole with undefined

(lint/suspicious/noSparseArray)

@QAbot-zh
Author

> Since this area is being changed anyway, could you add a unit test for this function if convenient? Since this core logic was never covered by unit tests in the first place, later changes to it are better backed by unit tests, which also reduces the manual testing workload.

A unit test for this function has been added.


@Dogtiti Dogtiti self-assigned this Dec 28, 2024
@Dogtiti Dogtiti self-requested a review December 28, 2024 12:30
@Dogtiti
Member

Dogtiti commented Dec 28, 2024

Some tests failed
https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/actions/runs/12526607615/job/34939350276?pr=5992

Contributor

Your build has completed!

Preview deployment

@QAbot-zh
Author

The model name was copied incorrectly; it has been corrected.


Contributor

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
test/model.test.ts (2)

78-78: Fix typo in test description

Change "shoule" to "should" in the test description.

-  test('all models shoule be available', () => {
+  test('should have all models available', () => {

129-141: Update test description and add validation for invalid patterns

  1. The test description doesn't match the implementation as it includes an Azure model (gpt-4o@azure).
  2. Consider adding tests for invalid filter patterns.
-  test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
+  test('should exclude all models except OpenAI and specific Azure models when custom is "-all, +*openai, gpt-4o@azure"', () => {

Add a new test for invalid patterns:

test('should handle invalid filter patterns gracefully', () => {
  const invalidPatterns = [
    '*invalid*pattern',
    'missing-operator*azure',
    '-incomplete+'
  ];
  
  invalidPatterns.forEach(pattern => {
    const result = collectModelTable(mockModels, pattern);
    // All models should remain available with invalid patterns
    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    // ... add assertions for other models
  });
});
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c41c2b5 and fd8ad63.

📒 Files selected for processing (1)
  • test/model.test.ts (1 hunks)
🔇 Additional comments (1)
test/model.test.ts (1)

4-142: Well-structured test suite with good coverage

The test suite effectively validates the provider filtering functionality with comprehensive coverage of different scenarios. It includes:

  • Default behavior
  • Complete exclusion
  • Provider-specific exclusion
  • Multiple provider exclusion
  • Complex filter combinations

This addresses the PR objectives and provides good test coverage for the core functionality.

Comment on lines 82 to 89
    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@claude'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });
Contributor

⚠️ Potential issue

Fix inconsistencies between mock data and test assertions

The model identifiers in the test assertions don't match the mock data:

  • Mock data has gpt-3.5-turbo but assertion uses gpt-3.5@azure
  • Mock data has claude-3-haiku-20240307@anthropic but assertion uses claude-3-haiku-20240307@claude

Apply this fix to ensure consistency:

-    expect(result['gpt-3.5@azure'].available).toBe(true);
+    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
-    expect(result['claude-3-haiku-20240307@claude'].available).toBe(true);
+    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
     expect(result['gpt-4@openai'].available).toBe(true);
-    expect(result['gpt-3.5@azure'].available).toBe(true);
+    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
     expect(result['gpt-4@azure'].available).toBe(true);
     expect(result['gemini-pro@google'].available).toBe(true);
-    expect(result['claude-3-haiku-20240307@claude'].available).toBe(true);
+    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
     expect(result['grok-beta@xai'].available).toBe(true);
   });

Contributor

@coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
test/model.test.ts (3)

1-76: Consider improving mock data organization and type safety.

The mock data setup is comprehensive, but consider these improvements:

  1. Move the mockModels array to a separate test fixture file for reusability across test files
  2. Replace as LLMModelProvider type assertions with properly typed objects, for example a shared typed constant:
- provider: {
-   id: 'openai',
-   providerName: 'OpenAI',
-   providerType: 'openai',
- } as LLMModelProvider,
+ provider: openaiProvider,

where openaiProvider is declared once as const openaiProvider: LLMModelProvider = { id: 'openai', providerName: 'OpenAI', providerType: 'openai' };

78-89: Fix typo in test name.

There's a spelling error in the test name: "shoule" should be "should"

-  test('all models shoule be available', () => {
+  test('all models should be available', () => {

103-127: Consider using test.each for provider filter tests.

These provider-specific filter tests follow a similar pattern. Consider using test.each to reduce code duplication and make it easier to add more provider filter test cases.

test.each([
  {
    name: 'should exclude all Azure models',
    filter: '-*azure',
    expected: {
      'gpt-3.5-turbo@openai': true,
      'gpt-3.5-turbo@azure': false,
      // ... other expectations
    }
  },
  {
    name: 'should exclude Google and XAI models',
    filter: '-*google,-*xai',
    expected: {
      'gpt-3.5-turbo@openai': true,
      'gemini-pro@google': false,
      // ... other expectations
    }
  }
])('$name when custom is "$filter"', ({ filter, expected }) => {
  const result = collectModelTable(mockModels, filter);
  Object.entries(expected).forEach(([model, available]) => {
    expect(result[model].available).toBe(available);
  });
});
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fd8ad63 and 2db4caa.

📒 Files selected for processing (1)
  • test/model.test.ts (1 hunks)
🔇 Additional comments (1)
test/model.test.ts (1)

90-101: LGTM! Comprehensive test for the "-all" filter.

The test thoroughly verifies that all models are disabled when using the "-all" filter.

Comment on lines 4 to 142
      } as LLMModelProvider,
      sorted: 2,
    },
    {
      name: 'gemini-pro',
      available: true,
      provider: {
        id: 'google',
        providerName: 'Google',
        providerType: 'google',
      } as LLMModelProvider,
      sorted: 3,
    },
    {
      name: 'claude-3-haiku-20240307',
      available: true,
      provider: {
        id: 'anthropic',
        providerName: 'Anthropic',
        providerType: 'anthropic',
      } as LLMModelProvider,
      sorted: 4,
    },
    {
      name: 'grok-beta',
      available: true,
      provider: {
        id: 'xai',
        providerName: 'XAI',
        providerType: 'xai',
      } as LLMModelProvider,
      sorted: 11,
    },
  ];

  test('all models shoule be available', () => {
    const customModels = '';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });

  test('should exclude all models when custom is "-all"', () => {
    const customModels = '-all';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(false);
    expect(result['gpt-4@openai'].available).toBe(false);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
    expect(result['grok-beta@xai'].available).toBe(false);
  });

  test('should exclude all Azure models when custom is "-*azure"', () => {
    const customModels = '-*azure';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(true);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(true);
  });

  test('should exclude Google and XAI models when custom is "-*google,-*xai"', () => {
    const customModels = '-*google,-*xai';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(true);
    expect(result['gpt-4@azure'].available).toBe(true);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(true);
    expect(result['grok-beta@xai'].available).toBe(false);
  });

  test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
    const customModels = '-all, +*openai,gpt-4o@azure';
    const result = collectModelTable(mockModels, customModels);

    expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
    expect(result['gpt-4@openai'].available).toBe(true);
    expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
    expect(result['gpt-4@azure'].available).toBe(false);
    expect(result['gemini-pro@google'].available).toBe(false);
    expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
    expect(result['grok-beta@xai'].available).toBe(false);
    expect(result['gpt-4o@azure'].available).toBe(true);
  });
});
Contributor

🛠️ Refactor suggestion

Add test cases for error scenarios.

The test suite thoroughly covers the happy path but lacks tests for error cases. Consider adding tests for:

  1. Invalid filter syntax (e.g., "invalid-filter", "++model", "--provider")
  2. Empty model list
  3. Undefined/null inputs

Example test cases to add:

test('should handle invalid filter syntax gracefully', () => {
  const invalidFilters = ['++model', '--provider', 'invalid-filter'];
  invalidFilters.forEach(filter => {
    expect(() => collectModelTable(mockModels, filter)).not.toThrow();
  });
});

test('should handle empty model list', () => {
  const result = collectModelTable([], '-all');
  expect(Object.keys(result).length).toBe(0);
});

test('should handle undefined/null inputs', () => {
  expect(() => collectModelTable(undefined, '-all')).not.toThrow();
  expect(() => collectModelTable(mockModels, undefined)).not.toThrow();
});

Comment on lines 129 to 141
test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
const customModels = '-all, +*openai,gpt-4o@azure';
const result = collectModelTable(mockModels, customModels);

expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
expect(result['gpt-4@openai'].available).toBe(true);
expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
expect(result['gpt-4@azure'].available).toBe(false);
expect(result['gemini-pro@google'].available).toBe(false);
expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
expect(result['grok-beta@xai'].available).toBe(false);
expect(result['gpt-4o@azure'].available).toBe(true);
});
Contributor

@coderabbitai bot commented Dec 28, 2024

⚠️ Potential issue

Fix inconsistency with non-existent model in test.

The test asserts the availability of 'gpt-4o@azure' (line 140) which isn't defined in the mockModels array. This could lead to false positives as the test is checking behavior for a non-existent model.

Either:

  1. Add the model to mockModels array, or
  2. Remove the non-existent model from both the filter string and assertions:
-    const customModels = '-all, +*openai,gpt-4o@azure';
+    const customModels = '-all, +*openai';

     // ... other assertions ...
-    expect(result['gpt-4o@azure'].available).toBe(true);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
   test('should exclude all models except OpenAI when custom is "-all, +*openai"', () => {
-    const customModels = '-all, +*openai,gpt-4o@azure';
+    const customModels = '-all, +*openai';
     const result = collectModelTable(mockModels, customModels);

     expect(result['gpt-3.5-turbo@openai'].available).toBe(true);
     expect(result['gpt-4@openai'].available).toBe(true);
     expect(result['gpt-3.5-turbo@azure'].available).toBe(false);
     expect(result['gpt-4@azure'].available).toBe(false);
     expect(result['gemini-pro@google'].available).toBe(false);
     expect(result['claude-3-haiku-20240307@anthropic'].available).toBe(false);
     expect(result['grok-beta@xai'].available).toBe(false);
-    expect(result['gpt-4o@azure'].available).toBe(true);
   });

Author

This tests a model added through custom_models, so it is reasonable for it not to be in mockModels.

Contributor

@QAbot-zh, understood. Your test case specifically verifies that a model added through customModels (such as gpt-4o@azure) is included in the result even though it does not exist in mockModels, which is indeed a reasonable scenario to test. If the intended business logic is that custom models do not need to be declared in the mockModels list beforehand, then the current test implementation is fine.

To make the intent even clearer later, you could note in the test name or a comment that it verifies the visibility of newly added custom models in the final result, so future maintainers can quickly grasp the test's purpose.


✏️ Learnings added
Learnt from: QAbot-zh
PR: ChatGPTNextWeb/ChatGPT-Next-Web#5992
File: test/model.test.ts:129-141
Timestamp: 2024-12-28T15:48:43.192Z
Learning: The test logic allows adding custom models not present in the mockModels array by using the customModels filter. This is intentional behavior to verify that customModels can introduce new models beyond the predefined list.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
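
A minimal hedged illustration of the recorded behavior, using a one-model mock and the same collectModelTable import as the test file (the mock names here are made up for the example):

import { collectModelTable } from '../app/utils/model';

// One predefined model; gpt-4o@azure is deliberately absent from the list.
const mock = [
  {
    name: 'gpt-4',
    available: true,
    provider: { id: 'azure', providerName: 'Azure', providerType: 'azure' },
    sorted: 1,
  },
];

const result = collectModelTable(mock, '-all,gpt-4o@azure');
console.log(result['gpt-4o@azure']?.available); // true: created by the filter itself
console.log(result['gpt-4@azure'].available); // false: disabled by '-all'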
