Releases: jupyterlab/jupyter-ai
v3.0.0a0
3.0.0a0
Hope you all have had a wonderful holiday season! Santa and I present to you the first pre-release of v3, the next major version of Jupyter AI.
Rapid summary of what's new: In v3, all responsibility for managing the chat is now delegated to Jupyter Chat, a new project built with Jupyter AI components and a custom chat backend. By using Jupyter Chat, Jupyter AI now supports multiple chats, and automatically saves them as files on disk. This migration has already allowed us to greatly simplify our codebase, and will provide a fantastic foundation to build new features for users in v3. ❤️
- Thank you @brichet for leading development on Jupyter Chat!
- For more details, please see the full PR history of the `v3-dev` branch.
This pre-release is being published quickly to get feedback from contributors & stakeholders. v3 is still a work-in-progress, and we will absolutely build more features & fix more issues on top of this before the v3.0.0 official release. This is just the first pre-release of many more to come. 💪
Known issues
There are already a few issues I've noticed, which I will call out below to help save you all some time:
- Opening the Jupyter AI settings is not obvious.
- From JupyterLab's top bar menu, click "Settings" => "AI Settings" (near the bottom) to open the Jupyter AI settings.
- You must select a chat model and provide API keys for that chat model; otherwise, the chat fails silently.
- You may have to wait a minute or two after starting the server before the chat responds to new inputs.
- This bug happens rarely, seemingly at random. I am monitoring this issue until I can reproduce it consistently.
- Pressing `Ctrl + C` in the terminal sometimes does not stop the server. This is a known issue with `jupyter_collaboration`.
  - Workaround from the terminal: press `Ctrl + Z` to suspend the server, then run `kill -9 %1` from the same terminal.
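The workaround can be sketched as a short shell session. Here `sleep 300` is a hypothetical stand-in for the stuck server process, since in a real session you would press `Ctrl + Z` to suspend `jupyter lab` rather than launching a background job:

```shell
# Simulate the stuck server with a background job. Interactively, you
# would press Ctrl + Z to suspend the real server instead.
set -m                       # enable job control, as in an interactive shell
sleep 300 &                  # hypothetical stand-in for the stuck server
kill -9 %1                   # force-kill job 1, per the workaround
wait %1 2>/dev/null || true  # reap the job; a nonzero status is expected
echo "job 1 killed"
```

Note that `%1` refers to job number 1; if you have other suspended or background jobs, run `jobs` first to find the right job number.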
Enhancements made
Contributors to this release
v2.28.4
2.28.4
Merry Christmas and happy holidays to all! We have worked with Santa to bring you some enhancements & fixes for Jupyter AI. Notably, some embedding models now support a configurable base URL, and the reliability of `/generate` has been significantly improved. Thank you @srdas for contributing these changes!
Note to contributors: This is planned to be the last v2 release from the `main` branch. After the first v3 pre-release, `main` will track Jupyter AI v3, while Jupyter AI v2 will continue to be maintained from the `2.x` branch.
Enhancements made
Bugs fixed
- Update `/generate` to not split classes & functions across cells #1158 (@srdas)
- Fix code output format in IPython #1155 (@divyansshhh)
Maintenance and upkeep improvements
Documentation improvements
- Improve user messaging and documentation for Cross-Region Inference on Amazon Bedrock #1134 (@srdas)
Contributors to this release
(GitHub contributors page for this release)
@divyansshhh | @dlqqq | @krassowski | @mlucool | @srdas | @Zsailer
v2.28.3
2.28.3
This release notably fixes a major bug with updated model fields not being used until after a server restart, and fixes a bug with Ollama in the chat. Thank you for your patience as we continue to improve Jupyter AI!
Enhancements made
Bugs fixed
- Fix install step in CI #1139 (@dlqqq)
- Update completion model fields immediately on save #1137 (@dlqqq)
- Fix JSON serialization error in Ollama models #1129 (@JanusChoi)
- Update model fields immediately on save #1125 (@dlqqq)
- Downgrade spurious 'error' logs #1119 (@ctcjab)
Maintenance and upkeep improvements
Contributors to this release
(GitHub contributors page for this release)
@ctcjab | @dlqqq | @JanusChoi | @krassowski | @pre-commit-ci | @srdas
v2.28.2
v2.28.1
v2.28.0
2.28.0
Release summary
This release notably includes the following changes:
- Models from the `Anthropic` and `ChatAnthropic` providers are now merged in the config UI, so all Anthropic models are shown in the same place in the "Language model" dropdown.
- Anthropic Claude v1 LLMs have been removed, as the models are retired and no longer available from the API.
- The chat system prompt has been updated to encourage the LLM to express dollar quantities in LaTeX, i.e. the LLM should prefer returning `\(\$100\)` instead of `$100`. For the latest LLMs, this generally fixes a rendering issue when multiple dollar quantities are given literally in the same sentence.
  - Note that the issue may still persist in older LLMs, which do not respect the system prompt as frequently.
- `/export` has been fixed to include streamed replies, which were previously omitted.
- Calling non-chat providers with history has been fixed to behave properly in magics.
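To illustrate the rendering issue behind the system prompt change, consider a hypothetical reply containing two literal dollar amounts in one sentence (this example is not from the release itself):

```markdown
<!-- Problematic: a Markdown + math renderer may treat the span between
     the two "$" signs as inline math, garbling the sentence. -->
The price rose from $100 to $200 last year.

<!-- Preferred: escape the dollar signs inside explicit math delimiters. -->
The price rose from \(\$100\) to \(\$200\) last year.
```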
Enhancements made
- Remove retired models and add new `Haiku-3.5` model in Anthropic #1092 (@srdas)
- Reduced padding in cell around code icons in code toolbar #1072 (@srdas)
- Merge Anthropic language model providers #1069 (@srdas)
- Add examples of using Fields and EnvAuthStrategy to developer documentation #1056 (@alanmeeson)
Bugs fixed
- Continue to allow `$` symbols to delimit inline math in human messages #1094 (@dlqqq)
- Fix `/export` by including streamed agent messages #1077 (@mcavdar)
- Fix magic commands when using non-chat providers w/ history #1075 (@alanmeeson)
- Allow `$` to literally denote quantities of USD in chat #1068 (@dlqqq)
Documentation improvements
- Improve installation documentation and clarify provider dependencies #1087 (@srdas)
- Added Ollama to the providers table in user docs #1064 (@srdas)
Contributors to this release
(GitHub contributors page for this release)
@alanmeeson | @dlqqq | @krassowski | @mcavdar | @srdas
v2.27.0
2.27.0
Enhancements made
Documentation improvements
Contributors to this release
v2.26.0
2.26.0
This release notably includes the addition of a "Stop streaming" button, which takes over the "Send" button when a reply is streaming and the chat input is empty. While Jupyternaut is streaming a reply to a user, the user has the option to click the "Stop streaming" button to interrupt Jupyternaut and stop it from streaming further. Thank you @krassowski for contributing this feature!
Enhancements made
- Support Quarto Markdown in `/learn` #1047 (@dlqqq)
- Update requirements contributors doc #1045 (@JasonWeill)
- Remove clear_message_ids from RootChatHandler #1042 (@michaelchia)
- Migrate streaming logic to `BaseChatHandler` #1039 (@dlqqq)
- Unify message clearing & broadcast logic #1038 (@dlqqq)
- Learn from JSON files #1024 (@jlsajfj)
- Allow users to stop message streaming #1022 (@krassowski)
Bugs fixed
- Always use `username` from `IdentityProvider` #1034 (@krassowski)
Maintenance and upkeep improvements
- Support `jupyter-collaboration` v3 #1035 (@krassowski)
- Test Python 3.9 and 3.12 on CI, test minimum dependencies #1029 (@krassowski)
Documentation improvements
- Update requirements contributors doc #1045 (@JasonWeill)
Contributors to this release
(GitHub contributors page for this release)
@dlqqq | @JasonWeill | @jlsajfj | @krassowski | @michaelchia | @pre-commit-ci
v2.25.0
2.25.0
Enhancements made
- Export context hooks from NPM package entry point #1020 (@dlqqq)
- Add support for optional telemetry plugin #1018 (@dlqqq)
- Add back history and reset subcommand in magics #997 (@akaihola)
Maintenance and upkeep improvements
Contributors to this release
(GitHub contributors page for this release)
@akaihola | @dlqqq | @jtpio | @pre-commit-ci
v2.24.1
2.24.1
Enhancements made
- Make path argument required on /learn #1012 (@andrewfulton9)