
error when running the command #3

Open
yudeng2022 opened this issue Jan 31, 2023 · 1 comment
Comments

@yudeng2022

Hi,
Thank you so much for providing the tool! I ran into the following error message when running the command a second time. I am wondering if you know how to fix this? Much appreciated!


Traceback (most recent call last):
  File "async_pubmed_scraper.py", line 268, in <module>
    loop.run_until_complete(build_article_urls(search_keywords))
  File "C:\Users\DENGYX3\Anaconda3\lib\asyncio\base_events.py", line 616, in run_until_complete
    return future.result()
  File "async_pubmed_scraper.py", line 215, in build_article_urls
    await asyncio.gather(*tasks)
  File "async_pubmed_scraper.py", line 171, in get_pmids
    pmids = soup.find('meta',{'name':'log_displayeduids'})['content']
TypeError: 'NoneType' object is not subscriptable
@yudeng2022 (Author)

I noticed this happens when I set the number of pages too large, for example:
python async_pubmed_scraper --pages 10000 --start 2018 --stop 2020 --output article_data

Is there any way I can get all the abstracts related to the keywords without setting the number of pages?
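For what it's worth, the traceback suggests that a page number past the last results page returns HTML without the log_displayeduids meta tag, so soup.find(...) returns None and subscripting it raises the TypeError. Below is a minimal sketch of a guard against that case; the helper name extract_pmids is my own and not the scraper's actual function, which appears to be get_pmids in async_pubmed_scraper.py:

from bs4 import BeautifulSoup

def extract_pmids(html: str) -> list:
    """Return the PMIDs listed on a PubMed results page, or [] if the page has none."""
    soup = BeautifulSoup(html, 'html.parser')
    meta = soup.find('meta', {'name': 'log_displayeduids'})
    if meta is None:
        # Pages past the last results page (or error pages) lack this meta tag,
        # which is what triggers "'NoneType' object is not subscriptable".
        return []
    return meta['content'].split(',')

With a check like this, an oversized --pages value would just produce empty pages instead of crashing, and the scraper could stop fetching once a page yields no PMIDs rather than requiring the exact page count up front.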
