Full-page screenshot when extracting page URL #24
Hey @michael-supreme, this should be the default behaviour already. If it is not scrolling automatically for you, you can post the link you're trying to extract and I can take a closer look.
@emcf It seems to work on some pages but not others. For example, on this contact-us page I get the full page captured in multiple screenshots, one for every 720px of page height. But on this homepage it stops after the second chunk (I wonder if it fails due to scripts or animations on the page?). Also, the homepage in the original post has the same issue, where it stops after the second screenshot.
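For context, capturing a page in 720px chunks amounts to computing a list of scroll offsets from the page's reported height. This is a hypothetical sketch (the function name and logic are illustrative, not thepipe's actual code); it also shows why a page that initially reports a too-small scrollHeight, e.g. before scripts or lazy-loaded sections render, would yield too few chunks and match the "stops after the second screenshot" symptom:

```python
def scroll_offsets(page_height: int, viewport_height: int = 720) -> list[int]:
    """Compute the scroll positions needed to cover a page of the given
    height in viewport-sized chunks (hypothetical helper, not thepipe's
    actual implementation)."""
    offsets = []
    top = 0
    while top < page_height:
        offsets.append(top)
        top += viewport_height
    return offsets

# A 2000px-tall page needs three 720px chunks.
print(scroll_offsets(2000))  # → [0, 720, 1440]

# If the page initially reports only 1200px (animations/lazy content
# not yet rendered), only two chunks are produced and capture stops
# after the second screenshot.
print(scroll_offsets(1200))  # → [0, 720]
```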
@michael-supreme Thanks for providing these links to reproduce the issue; I'm still investigating.
@emcf Just wanted to let you know that the issue also happens when setting the extraction to text_only=True: it appears to extract only the text content for the first 720px of the page.
I'm running thepipe locally to extract some page URLs for processing with GPT4o, and it seems that the image generated for each page only captures the content above the fold (See example below). Is there a method to have it capture the entire page to be processed? (perhaps an argument such as fullPage=True/False)
My token limit for GPT4o as part of my plan is 10M, so I'm not overly concerned with hitting limits.
Example image: https://imgur.com/a/a06g3lh
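The behaviour being requested could be sketched as a scroll-and-capture loop that re-checks the page height on every pass, so content revealed by scripts or lazy loading is still captured. The MockPage class below is a stand-in for a real browser page, and none of this reflects thepipe's actual implementation:

```python
class MockPage:
    """Stand-in for a browser page whose height grows as lazy
    content loads (purely illustrative)."""
    def __init__(self, heights):
        self._heights = iter(heights)
        self.height = next(self._heights)

    def scroll_to(self, y):
        # Scrolling can trigger lazy loading that extends the page.
        self.height = next(self._heights, self.height)

def capture_full_page(page, viewport=720):
    """Return the scroll offsets at which a screenshot would be taken."""
    shots, top = [], 0
    # Re-check page.height on each pass instead of trusting the
    # initial value, so late-loading content is not skipped.
    while top < page.height:
        shots.append(top)              # stand-in for taking a screenshot
        page.scroll_to(top + viewport)
        top += viewport
    return shots

# Page reports 1000px at first, then grows to 2500px as content loads:
page = MockPage([1000, 1000, 2500, 2500, 2500])
print(capture_full_page(page))  # → [0, 720, 1440, 2160]
```

A loop that computed its offsets once from the initial 1000px height would stop after two screenshots, which is consistent with the behaviour reported above.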