
feat: model selection error handling #73

Closed
wants to merge 3 commits into from

Conversation

@nmandal nmandal commented May 2, 2023

Motivation

Requested in #26: since neither GPT-4 nor GPT-4-32k is available to everyone yet, it would be a better experience to see something pop up instead of being left unsure what happened.

Solution

On model change in settings:

  1. Check whether the user has access to the model.
  2. If not, show an error toast and provide a link to the waitlist (note: I don't know where the gpt-4-32k waitlist is).
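The steps above could be sketched roughly as follows. This is a hypothetical version of checkModelAccess, not the PR's actual code: the endpoint choice and parameter names are assumptions. It relies on OpenAI's retrieve-model endpoint (`GET /v1/models/{model}`), which returns a non-2xx status for models the API key cannot use; the fetch function is injectable so the helper can be tested without a real key.

```typescript
// Hypothetical sketch (names and endpoint assumed, not taken from the PR diff).
// A 2xx from the retrieve-model endpoint means the key can use the model.
async function checkModelAccess(
  model: string,
  apiKey: string,
  // injectable for testing; defaults to the global fetch
  fetchFn: (url: string, init?: { headers?: Record<string, string> }) => Promise<{ ok: boolean }> =
    globalThis.fetch as any
): Promise<boolean> {
  const res = await fetchFn(`https://api.openai.com/v1/models/${model}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return res.ok; // true on 200; false on 404 (no access) or other errors
}
```

With dependency injection, the access decision reduces to checking `res.ok`, which keeps the helper trivial to unit-test.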

Checklist

  • Tested in Chrome
  • Tested in Safari

PR-Codex overview

This PR adds a new function checkModelAccess to validate access to OpenAI models and updates the SettingsModal component to use it when changing the selected model.

Detailed summary

  • Added checkModelAccess function to validate access to OpenAI models
  • Updated SettingsModal component to use checkModelAccess when changing the selected model
  • Added useToast hook to display error messages when changing to an unsupported model
  • Removed unused imports and variables



vercel bot commented May 2, 2023

The latest updates on your projects:

| Name | Status | Preview | Updated (UTC) |
| ---- | ------ | ------- | ------------- |
| flux | ✅ Ready | Visit Preview | May 2, 2023 0:20am |

@@ -68,6 +72,29 @@ export const SettingsModal = memo(function SettingsModal({
    }
  };

  const handleChangeModel = async (v: string) => {
    if (await checkModelAccess(v)) {
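The hunk is truncated in this view. Going by the PR-Codex summary, the handler presumably persists the selection on success and raises an error toast otherwise; a minimal sketch under those assumptions (makeHandleChangeModel, setSelectedModel, and the toast copy are invented names, dependency-injected so the sketch stays self-contained):

```typescript
// Sketch only; checkModelAccess, setSelectedModel, and toast stand in for the
// PR's actual helpers (names assumed from the PR-Codex summary, not the diff).
type Deps = {
  checkModelAccess: (model: string) => Promise<boolean>;
  setSelectedModel: (model: string) => void;
  toast: (opts: { title: string; status: "error" }) => void;
};

const makeHandleChangeModel = (deps: Deps) => async (v: string) => {
  if (await deps.checkModelAccess(v)) {
    deps.setSelectedModel(v); // access confirmed: persist the selection
  } else {
    deps.toast({
      // hypothetical copy; the PR shows a toast linking to the waitlist here
      title: `Your API key does not have access to ${v} yet.`,
      status: "error",
    });
  }
};
```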
@nmandal (Author) May 2, 2023

should we add a loading state with the <Spinner /> when calling handleChangeModel? The user experience feels less snappy when selecting a new model with the small delay.
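The suggested loading state could be factored as a small wrapper around the async check; a sketch, with setLoading standing in for a React state setter that would conditionally render <Spinner /> (the wrapper itself is plain TypeScript and not from the PR):

```typescript
// Sketch of the suggested loading state: set a flag for the duration of an
// async call. setLoading would be a React useState setter in the component.
async function withLoading<T>(
  setLoading: (loading: boolean) => void,
  fn: () => Promise<T>
): Promise<T> {
  setLoading(true); // show <Spinner /> while the access check is in flight
  try {
    return await fn();
  } finally {
    setLoading(false); // hide it even if the check rejects
  }
}
```

The finally block guarantees the spinner is cleared on both success and failure, which avoids a stuck loading state if checkModelAccess throws.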

Member
hmmm if we could do it with a small diff that might be nice but it's probably fine as is imo

@transmissions11 changed the title from "Model selection error handling" to "feat: model selection error handling" on Jun 10, 2023
@transmissions11
Member

hey sorry for the delay here — this code looks great! however... the whitelist checking just doesn't seem to work. i have both gpt-4 and 32k access and when i use the vercel preview for this PR i can't use either of them?

@adietrichs
Contributor

I just opened #83, which has some overlap with this (but fetches model list on app launch / API key change, and offers all available gpt chat models to the user).

@transmissions11
Member

@adietrichs correct me if i'm wrong but #83 basically replaces this PR, i don't see a reason for them to coexist — and #83 is probably the ideal solution

@transmissions11
Member

feel free to correct me again if i'm missing something but going to close this PR for now as i feel #83 supersedes it

3 participants