costs for LLM #393
-
Hi, is there a way to see the token consumption and cost when running PandasAI queries? Best regards
-
We don't inject the entire dataframe, only the header. Therefore, given two dataframes with the same columns and the same question, there is no difference in token consumption, as PandasAI will likely produce the same code. The execution result will change, but that's a different story. By the way, the total consumption can be displayed using pandasai callbacks as follows:
```python
"""Example of using PandasAI with a Pandas DataFrame"""
import pandas as pd

from data.sample_dataframe import dataframe
from pandasai import PandasAI
from pandasai.llm.openai import OpenAI
from pandasai.helpers.openai_info import get_openai_callback

df = pd.DataFrame(dataframe)
llm = OpenAI()

# conversational=False is supposed to produce lower usage and cost
pandas_ai = PandasAI(llm, enable_cache=False, conversational=True)

with get_openai_callback() as cb:
    response = pandas_ai(df, "Calculate the sum of the gdp of north american countries")
    print(response)
    print(cb)

# The sum of the GDP of North American countries is 19,294,482,071,552.
# Tokens Used: 375
#   Prompt Tokens: 210
#   Completion Tokens: 165
# Total Cost (USD): $ 0.000750
```

The consumption data is retrieved from the OpenAI API, taking multiple calls into account as well.
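To make the first point concrete: because only the header (the column names) is injected into the prompt, the prompt size does not depend on the number of rows. A minimal pure-Python sketch of the idea; `prompt_for` here is a hypothetical stand-in, not the actual PandasAI prompt builder:

```python
# Illustrative stand-in (NOT the real PandasAI prompt builder): the
# point is that only the column names are injected into the prompt,
# so the number of rows has no effect on prompt size.
def prompt_for(columns: list[str], question: str) -> str:
    header = ", ".join(columns)
    return f"You have a dataframe with columns: {header}.\nQuestion: {question}"

q = "Calculate the sum of the gdp of north american countries"

# Two dataframes with the same columns produce the identical prompt,
# and therefore the identical prompt-token count.
prompt_small = prompt_for(["country", "gdp"], q)  # e.g. a 3-row dataframe
prompt_large = prompt_for(["country", "gdp"], q)  # e.g. a 3-million-row dataframe
assert prompt_small == prompt_large
```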
-
Right now the code looks like this:

```python
import pandas as pd

from pandasai import SmartDataframe
from pandasai.llm import OpenAI
from pandasai.helpers.openai_info import get_openai_callback

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "sales": [5000, 3200, 2900, 4100, 2300, 2100, 2500, 2600, 4500, 7000],
})

llm = OpenAI(api_token="MY_OPENAI_API_KEY")
smart_df = SmartDataframe(df, config={"llm": llm})

with get_openai_callback() as cb:
    response = smart_df.chat("Which are the top 5 countries by sales?")
    print(response)
    print()
    print(f"Prompt tokens: {cb.prompt_tokens}")
    print(f"Completion tokens: {cb.completion_tokens}")
```
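Assuming the callback object follows the usual LangChain-style `OpenAICallbackHandler` shape shown in the first reply (an assumption, not confirmed here), it also exposes aggregate fields such as the total token count. A standalone sketch with a stand-in object, so the arithmetic is visible without API calls:

```python
from dataclasses import dataclass

# Stand-in for the callback object (illustrative only; the real one
# is produced by get_openai_callback and filled in by the API calls).
@dataclass
class UsageInfo:
    prompt_tokens: int
    completion_tokens: int

    @property
    def total_tokens(self) -> int:
        # The total is simply prompt tokens plus completion tokens.
        return self.prompt_tokens + self.completion_tokens

cb = UsageInfo(prompt_tokens=210, completion_tokens=165)
print(f"Prompt tokens: {cb.prompt_tokens}")
print(f"Completion tokens: {cb.completion_tokens}")
print(f"Total tokens: {cb.total_tokens}")  # 375, matching the first reply
```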
-
With Azure OpenAI as the LLM config, it shows Tokens Used: 0; all the counters are 0.
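If the callback reports all zeros (one possible cause is that the handler does not recognize the Azure deployment name, but that is an assumption, not confirmed in this thread), you can still estimate the cost yourself from the raw token counts. A minimal sketch; `estimate_cost`, `PRICES_PER_1K`, and the rates are illustrative assumptions, not official Azure prices:

```python
# Illustrative per-1K-token prices (assumptions; check your actual
# Azure OpenAI pricing for the deployment you use).
PRICES_PER_1K = {
    "gpt-35-turbo": {"prompt": 0.0015, "completion": 0.002},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one call from its token counts."""
    p = PRICES_PER_1K[model]
    return (prompt_tokens / 1000) * p["prompt"] + (completion_tokens / 1000) * p["completion"]

# Using the token counts from the first reply as sample input:
cost = estimate_cost("gpt-35-turbo", prompt_tokens=210, completion_tokens=165)
print(f"Estimated cost: ${cost:.6f}")
```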