Hi,
Thank you so much for providing the tool! I ran into the following error when running the command a second time. Do you know how to fix it? Much appreciated!
Traceback (most recent call last):
  File "async_pubmed_scraper.py", line 268, in <module>
    loop.run_until_complete(build_article_urls(search_keywords))
  File "C:\Users\DENGYX3\Anaconda3\lib\asyncio\base_events.py", line 616, in run_until_complete
    return future.result()
  File "async_pubmed_scraper.py", line 215, in build_article_urls
    await asyncio.gather(*tasks)
  File "async_pubmed_scraper.py", line 171, in get_pmids
    pmids = soup.find('meta',{'name':'log_displayeduids'})['content']
TypeError: 'NoneType' object is not subscriptable
I noticed this happens when I set the number of pages too large, for example: python async_pubmed_scraper.py --pages 10000 --start 2018 --stop 2020 --output article_data.
Is there any way I can get all the abstracts related to the keywords without setting the number of pages?
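For what it's worth, the traceback suggests that when a requested results page doesn't exist (e.g. --pages exceeds the number of result pages PubMed actually has), the page contains no <meta name="log_displayeduids"> tag, so soup.find() returns None and indexing it with ['content'] raises the TypeError. A minimal sketch of a defensive version of that lookup, assuming BeautifulSoup is used as in the traceback (the function name extract_pmids is hypothetical, not from the scraper):

```python
from bs4 import BeautifulSoup

def extract_pmids(html):
    """Return the list of PMIDs from a PubMed results page, or [] if absent."""
    soup = BeautifulSoup(html, 'html.parser')
    # The scraper reads PMIDs from a <meta name="log_displayeduids"> tag.
    # Pages past the last results page lack this tag, so find() returns None.
    tag = soup.find('meta', {'name': 'log_displayeduids'})
    if tag is None:
        return []  # skip empty/nonexistent pages instead of crashing
    return tag['content'].split(',')
```

With a guard like this, oversized --pages values would just yield empty pages rather than aborting the whole asyncio.gather() run.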