Relevance Augmentations #215
Comments
Saving the standardized metadata for the images can be tricky given that our metadata parser in the SDK seems unstable. @whilefoo can you add input on the state of this? I think @sshivaditya should add input on what's best for the vector embeddings to understand image contents. The problem I'm trying to solve here is that I don't want to ask LLMs to describe an image in both the conversation rewards and vector embeddings plugins. It's redundant work. So we can solve this with my HTML comment metadata proposal above. |
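For illustration, a minimal sketch of what such HTML comment metadata could look like: the vision model runs once, the description is stored in the comment, and both plugins parse it back out. The `ubiquity-image-metadata` marker and field names are assumptions; the actual proposal's format may differ.

```typescript
// Hypothetical sketch: persist an LLM-generated image description in an HTML
// comment so other plugins (e.g. vector embeddings) can reuse it instead of
// asking a model to describe the same image again.
interface ImageMetadata {
  url: string;
  description: string; // produced once by a vision-capable model
}

// Append the metadata to the comment body as an HTML comment (assumed marker name).
function embedImageMetadata(commentBody: string, meta: ImageMetadata): string {
  const payload = JSON.stringify({ imageDescriptions: [meta] });
  return `${commentBody}\n<!-- ubiquity-image-metadata ${payload} -->`;
}

// Recover the metadata later without calling the model again.
function extractImageMetadata(commentBody: string): ImageMetadata[] {
  const match = commentBody.match(/<!-- ubiquity-image-metadata (.*?) -->/);
  return match ? JSON.parse(match[1]).imageDescriptions : [];
}
```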
/help |
Available Commands
|
+ Successfully set wallet |
/wallet 0xB13260bfEe08DcA208F2ECc735171B21763EaaF6 |
! Failed to register wallet because it is already associated with another user. |
/wallet 0x331D1C984A43087427BBC224Cb4aD9f019336e75 |
+ Successfully set wallet |
/start |
Tip
|
@0x4007 Can you provide more details on how the relevance score should be calculated? Are there specific weights or factors to consider for text, images, and links? |
That's already built and out of scope for this task. Refer to the existing prompts for inspiration on new prompts specific for these. |
Passed the disqualification threshold and no activity is detected, removing assignees: @sura1-0-1. |
/start |
! You do not have the adequate role to start this task (your role is: member). Allowed roles are: collaborator, admin. |
I finished my work, should I open a PR then? |
There seems to be an issue: if I include both a description and a title, the word count increases and hits the threshold for the task. |
@gentlementlegen you should figure out how to best solve this problem. We want to ensure that images with high relevance are scored higher. I'm assuming adding the description will help a lot, but I'm not sure that's the best way for the conversation rewards to interpret it and do the right thing. @sshivaditya let us know which model and settings are best for this use case. |
The usual models (such as 4o and Sonnet 3.5) would work, but a better choice would be Qwen VL, which excels at image understanding and handles OCR efficiently. Since we are already using OpenRouter, the only change needed is to switch to a model that supports vision, and the input structure should be updated as follows:

```json
"messages": [
  {
    "role": "user",
    "content": [
      {
        "type": "text",
        "text": "Understand this image?"
      },
      {
        "type": "image_url",
        "image_url": {
          "url": "<IMAGE_URL>"
        }
      }
    ]
  }
]
```
|
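For context, a minimal sketch of sending that payload to OpenRouter's OpenAI-compatible chat completions endpoint from TypeScript. The model slug, prompt text, and environment variable name are assumptions for illustration, not the plugin's actual configuration.

```typescript
// Hedged sketch: ask a vision-capable model on OpenRouter to describe an image.
async function describeImage(imageUrl: string): Promise<string> {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`, // assumed env var
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "qwen/qwen-2-vl-72b-instruct", // illustrative vision-capable model slug
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: "Describe the contents of this image." },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```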
Got it, thanks. Just testing the changes and pushing the PR today. |
Important
|
Thanks @gentlementlegen
I have updated the code as per the required logic using OpenRouter but am currently unable to test it. Could you kindly confirm if an OpenRouter API key is necessary for testing? Additionally, do I need an OpenAI Plus or Pro membership to proceed with the testing? |
Make an OpenRouter account and get an API key to test.
There are a few high-impact elements that determine the relevance of a comment, which is why the default formatting scoring assigns a high credit ($5) per instance.
image
Relevance scoring currently doesn't understand image contents.
We can insert the image description inside its "alt" attribute.
link
It could also be really interesting to pull the text contents from links and score their relevance as well.
We can insert a summary inside its "title" attribute (a sketch of both augmentations follows this list).
code
It would be interesting to discuss this in a separate proposal because this seems like it could be really complex to implement well.
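A minimal sketch of what the augmented markup could look like, assuming a hypothetical `summarizeLink` helper alongside the `describeImage` helper sketched above; this is illustrative, not the plugin's actual implementation.

```typescript
// Hypothetical helpers: both functions are assumptions used only to show where
// the generated text would live in the final markup.
declare function describeImage(url: string): Promise<string>;
declare function summarizeLink(url: string): Promise<string>;

// Insert the image description into the img tag's alt attribute.
async function augmentImage(url: string): Promise<string> {
  const description = await describeImage(url);
  return `<img src="${url}" alt="${description}">`;
}

// Insert a summary of the linked page into the anchor's title attribute.
async function augmentLink(url: string, text: string): Promise<string> {
  const summary = await summarizeLink(url);
  return `<a href="${url}" title="${summary}">${text}</a>`;
}
```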
Original context ubiquity-os-marketplace/command-wallet#28 (comment)