
fix: groq added #1047

Merged: 9 commits, Dec 20, 2024
Changes from 3 commits
3 changes: 2 additions & 1 deletion js/examples/newsletter_twitter_threads/demo.mjs
@@ -1,4 +1,5 @@
import { openai } from "@ai-sdk/openai";
import {groq} from '@ai-sdk/groq'
Collaborator:
Consider maintaining consistent quote style in imports. The file uses double quotes for other imports but single quotes here. For consistency, suggest using: import { groq } from "@ai-sdk/groq";

import { VercelAIToolSet } from "composio-core";
import dotenv from "dotenv";
import { generateText } from "ai";
@@ -36,7 +37,7 @@ async function executeAgent(entityName) {

// Generate text using the model and tools
const output = await generateText({
model: openai("gpt-4o"),
model: groq("llama-3.3-70b-versatile"),
Collaborator:

Consider adding a comment explaining why the model was switched from GPT-4 to LLaMA and any expected behavioral differences. This will help future maintainers understand the reasoning behind this change.

streamText: false,
tools: tools,
prompt: `
Comment on lines 37 to 43:

🤖 Bug Fix:

Critical Evaluation of AI Model Replacement
The change from openai("gpt-4o") to groq("llama-3.3-70b-versatile") in the generateText function is significant and requires careful consideration due to its high impact on the application's functionality. Here are the steps to ensure a smooth transition:

  • Compatibility Check: Verify that groq("llama-3.3-70b-versatile") is fully compatible with the existing codebase. This includes checking for any API changes or differences in input/output handling.
  • Thorough Testing: Conduct comprehensive testing to ensure that the text generation output meets the application's quality standards. This should include unit tests, integration tests, and user acceptance tests to catch any regressions or unexpected behavior.
  • Performance Analysis: Evaluate the performance of the new model. Ensure it does not introduce latency or excessive resource consumption that could degrade user experience.
  • Configuration and Dependencies: Ensure all necessary configurations and dependencies for the new model are correctly set up. This might involve updating environment variables, configuration files, or dependency management systems.

By following these steps, you can mitigate the risks associated with this high-impact change. 🚀
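The reversibility point in the checklist above can be made concrete with a small sketch. The helper name `pickModelId` and the `MODEL_PROVIDER` environment variable are illustrative, not part of this PR; the idea is simply to keep the model choice behind configuration so the gpt-4o to llama swap is easy to revert:

```javascript
// Hypothetical selector: centralizes the model choice so reverting the
// swap is a config change rather than a code edit.
function pickModelId(provider) {
  if (provider === "groq") return "llama-3.3-70b-versatile"; // new model
  return "gpt-4o"; // previous OpenAI model
}

// In demo.mjs this would feed the provider factories, e.g.
//   const model = provider === "groq"
//     ? groq(pickModelId("groq"))
//     : openai(pickModelId("openai"));
console.log(pickModelId(process.env.MODEL_PROVIDER || "openai"));
```

With this shape, falling back to the previous model is a matter of unsetting one environment variable rather than editing and redeploying the example.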


Comment on lines 37 to 43:

⚠️ Potential Issue:

Model Change Impact on Text Generation
The change from openai("gpt-4o") to groq("llama-3.3-70b-versatile") is significant and could impact the functionality and reliability of the text generation process. It is crucial to review the compatibility of the new model with the existing system requirements. Ensure that the output generated by groq("llama-3.3-70b-versatile") aligns with the expected results and does not introduce logical errors. Conduct thorough testing to validate the model's performance and output quality.


@@ -0,0 +1,2 @@
OPENAI_API_KEY=KEY
COMPOSIO_API_KEY=KEY
39 changes: 39 additions & 0 deletions python/examples/advanced_agents/recruiter_agent/main.py
@@ -0,0 +1,39 @@
from composio_llamaindex import ComposioToolSet, App, Action
from llama_index.core.agent import FunctionCallingAgentWorker
from llama_index.core.llms import ChatMessage
from llama_index.llms.openai import OpenAI
from dotenv import load_dotenv
import os  # needed for os.getenv below

load_dotenv()
toolset = ComposioToolSet(api_key=os.getenv("COMPOSIO_API_KEY"))
tools = toolset.get_tools(apps=[App.PEOPLEDATALABS, App.GOOGLESHEETS])

llm = OpenAI(model="gpt-4o")

spreadsheetid = '14T4e0j1XsWjriQYeFMgkM2ihyvLAplPqB9q8hytytcw'
# Set up prefix messages for the agent
prefix_messages = [
ChatMessage(
role="system",
content=(
f"""
You are a recruiter agent. Based on user input, identify 10 highly qualified candidates using People Data Labs.
After identifying the candidates, create a Google Sheet with their details for the provided candidate description, and spreadsheet ID: {spreadsheetid}.
Print the list of candidates and their details along with the link to the Google Sheet.
"""
),
)
]

agent = FunctionCallingAgentWorker(
tools=tools,
llm=llm,
prefix_messages=prefix_messages,
max_function_calls=10,
allow_parallel_tool_calls=False,
verbose=True,
).as_agent()

candidate_description = 'Senior Backend developers in San Francisco with prior experience in Python and Django'
user_input = f"Create a candidate list based on the description: {candidate_description}"
response = agent.chat(user_input)
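A note on the prompt interpolation in the script above, as a minimal standalone sketch (the prompt text here is abbreviated, not the full system message): Python f-strings interpolate with `{var}`, while `${var}` is JavaScript template-literal syntax and would leave a literal `$` in the rendered prompt.

```python
# Minimal sketch: in an f-string, {spreadsheetid} is replaced with the value;
# writing ${spreadsheetid} would render as "$" followed by the value.
spreadsheetid = "14T4e0j1XsWjriQYeFMgkM2ihyvLAplPqB9q8hytytcw"
prompt = f"Create a Google Sheet with their details, spreadsheet ID: {spreadsheetid}."
print(prompt)
```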
28 changes: 28 additions & 0 deletions python/examples/advanced_agents/recruiter_agent/readme.md
@@ -0,0 +1,28 @@
# Recruiter Agent

This guide offers comprehensive instructions for creating a Recruiter Agent that utilizes Composio with agentic frameworks like LlamaIndex and OpenAI's GPT-4o. This agent is designed to effectively identify candidates for your business and compile all candidate data into a spreadsheet.

## Steps to Run

**Navigate to the Project Directory:**
Change to the directory where the `setup.sh`, `main.py`, `requirements.txt`, and `README.md` files are located. For example:
```sh
cd path/to/project/directory
```

### 1. Run the Setup File
Make the `setup.sh` script executable (if necessary). On Linux or macOS:
```shell
chmod +x setup.sh
```
Execute the setup.sh script to set up the environment and install dependencies:
```shell
./setup.sh
```
Now, fill in the `.env` file with your secrets.

### 2. Run the Python Script
```shell
python main.py
```
@@ -0,0 +1,3 @@
composio-llamaindex
python-dotenv
gradio
34 changes: 34 additions & 0 deletions python/examples/advanced_agents/recruiter_agent/setup.sh
@@ -0,0 +1,34 @@
#!/bin/bash

# Create a virtual environment named lead_generator
echo "Creating virtual environment..."
python3 -m venv lead_generator

# Activate the virtual environment
echo "Activating virtual environment..."
source lead_generator/bin/activate

# Install libraries from requirements.txt
echo "Installing libraries from requirements.txt..."
pip install -r requirements.txt

# Copy env backup to .env file
if [ -f ".env.example" ]; then
echo "Copying .env.example to .env..."
cp .env.example .env
else
echo "No .env.example file found. Creating a new .env file..."
touch .env
fi

# Prompt the user to enter the OPENAI_API_KEY
read -p "Enter your OPENAI_API_KEY: " OPENAI_API_KEY

# Update the .env file with the entered OPENAI_API_KEY
sed -i "s/^OPENAI_API_KEY=.*$/OPENAI_API_KEY=$OPENAI_API_KEY/" .env
Contributor:

The sed command used here is not compatible with macOS by default. Consider using sed -i '' "s/^OPENAI_API_KEY=.*$/OPENAI_API_KEY=$OPENAI_API_KEY/" .env for macOS compatibility.
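The suggestion above can be sketched as an OS check around the `sed` call. This is a sketch, not the script as merged: the `sk-demo` value stands in for the key the real script collects via `read -p`, and the `printf` line stands in for the `.env.example` copy.

```shell
# Hypothetical portable variant: BSD sed (macOS) requires an explicit,
# possibly empty, backup suffix after -i; GNU sed (Linux) does not.
printf 'OPENAI_API_KEY=KEY\n' > .env   # stand-in for the .env.example copy
OPENAI_API_KEY="sk-demo"               # stand-in for the read -p prompt
if [ "$(uname)" = "Darwin" ]; then
  sed -i '' "s/^OPENAI_API_KEY=.*$/OPENAI_API_KEY=$OPENAI_API_KEY/" .env
else
  sed -i "s/^OPENAI_API_KEY=.*$/OPENAI_API_KEY=$OPENAI_API_KEY/" .env
fi
```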


echo "OPENAI_API_KEY has been set in the .env file"

echo "Please fill in the .env file with any other necessary environment variables."

echo "Setup completed successfully!"