Gemini_Connect

This is the official repository for my submission to the Kaggle Gemini Long Context Challenge.
Submission Notebook: https://www.kaggle.com/code/ariondas/iiit-ranchi-gemini-long-context-challenge

I demonstrate a use case for the long context that Gemini's new models (as of 2nd December, 2024) provide: given a particular topic, I use the long context window to surface related papers. Hopefully, it helps fellow researchers.
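The core idea, packing many candidate papers into a single long-context prompt, can be sketched as below. This is my own illustrative sketch, not code from the submission notebook: the function names and the rough 4-characters-per-token heuristic are assumptions, and the actual notebook may assemble its prompts differently.

```python
# Hypothetical sketch: pack candidate paper texts into one long-context prompt
# for a related-papers query. Names and the token heuristic are assumptions.

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return len(text) // 4


def build_related_papers_prompt(topic, papers, token_budget=1_000_000):
    """Concatenate (title, text) pairs until the rough token budget is
    reached, then ask the model which papers relate to the topic."""
    parts = [f"Topic of interest: {topic}", "", "Candidate papers:"]
    used = rough_token_count("\n".join(parts))
    included = []
    for title, text in papers:
        chunk = f"\n--- {title} ---\n{text}"
        cost = rough_token_count(chunk)
        if used + cost > token_budget:
            break  # stop before overflowing the model's context window
        parts.append(chunk)
        used += cost
        included.append(title)
    parts.append("\nList the papers above most related to the topic, with reasons.")
    return "\n".join(parts), included


# The assembled prompt would then be sent to a long-context model, e.g.:
#   genai.GenerativeModel("gemini-1.5-pro").generate_content(prompt)
# (requires the google-generativeai package and an API key).
```

With a 1M-token budget this comfortably fits hundreds of full paper texts, which is exactly the regime the plots below probe.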


Here are a few plots to highlight the models' performance:

Related Papers Plot:


Maximum token limit across models:


Time taken to generate responses:


REPORT

The Gemini-1.5 model variants claim context lengths upwards of 1 million tokens (Gemini-1.5-pro claims 2 million). But how do they fare in practice? I have prepared a report on my observations; all the details of my experiments are included in it:

Report


I have also summarized the entire work in this video:

Is.Gemini.s.2M.context.length.a.myth_.Presenting.Gemini_Connect.for.Researchers.mp4
