Replies: 1 comment
-
PandasAI has a caching mechanism that is likely causing the behaviour you're seeing. The cache stores the results of previous queries and serves them for matching queries, so an answer produced with one model can be returned unchanged when a different model runs the same query. Caching is enabled by default and is intended to speed up repeated queries and reduce API call costs. To avoid this, you can disable the cache by setting the `enable_cache` option to `False` in the configuration.
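A minimal sketch of that configuration, assuming pandasai 2.x with `SmartDataframe` and the bundled `OpenAI` LLM wrapper (the Excel file name, API key, and query are placeholders):

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI  # construct your local LLM wrapper the same way for the second run

df = pd.read_excel("sales.xlsx")    # placeholder: the Excel sheet under test
llm = OpenAI(api_token="sk-...")    # placeholder API key

# enable_cache=False forces every query to be regenerated by the active model,
# so a cached answer produced by one LLM is never replayed for another.
sdf = SmartDataframe(df, config={"llm": llm, "enable_cache": False})
print(sdf.chat("What is the total revenue per region?"))
```

Creating a separate `SmartDataframe` with `enable_cache: False` for each model you test keeps the comparison independent, since every model has to generate and execute its own code for the same question.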
-
System Info
Python 3.12
pandasai 2.4.0
🐛 Describe the bug
Hello, I am using PandasAI in my project and I want to compare the retrieval accuracy of different models. I have run into a problem: when I use the same query on the same Excel sheet with different models, such as ChatGPT and a local LLM, the result from the first model seems to influence the result from the second. For example, if the first model answers incorrectly, the second model returns the same wrong answer, and it comes back very quickly, as if the wrong answer had been memorized rather than produced by a fresh search. Is this normal, and how can I avoid it?