This is a fork of the original scrapy-athlinks project. I took it over because I want to add features that were not available upstream.
The `athlinks_races` package provides the `RaceSpider` class.
This spider crawls through all results pages from a race hosted on athlinks.com, building and following links to each athlete's individual results page, where it collects their split data. It also collects some metadata about the race itself.
By default, the spider returns one race metadata object (`RaceItem`) and one `AthleteItem` per participant. Each `AthleteItem` consists of some basic athlete info and a list of `RaceSplitItem` objects containing data from each split the athlete recorded.
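As a rough sketch of what that data model implies, one exported athlete record could look like the following; every field name here is hypothetical, since the actual keys come from the item definitions:

```python
# Purely illustrative AthleteItem shape; the real field names may differ.
athlete_record = {
    "name": "Jane Doe",          # basic athlete info
    "splits": [                  # one entry per RaceSplitItem
        {"name": "Half", "time": "2:05:10"},
        {"name": "Finish", "time": "4:12:33"},
    ],
}
print(athlete_record["splits"][-1]["time"])  # final split
```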
Scrapy can be operated entirely from Python scripts; see the Scrapy documentation for more info.
The package is available on PyPI and can be installed with `pip`:
```shell
python -m venv $HOME/virtualenv/athlinks_races
. $HOME/virtualenv/athlinks_races/bin/activate
pip install athlinks_races
```
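To confirm the install worked, a quick import check using the same classes the demo below relies on:

```python
# Sanity check: these names are importable after `pip install athlinks_races`.
from athlinks_races import RaceSpider, AthleteItem, RaceItem

print(RaceSpider.__name__, AthleteItem.__name__, RaceItem.__name__)
```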
A demo script is included in this repo.
"""
Demonstrate the available classes.
You can run as python athlinks_races/demo.py
"""
from scrapy.crawler import CrawlerProcess
from athlinks_races import RaceSpider, AthleteItem, RaceItem
def main():
# Make settings for two separate output files: one for athlete data,
# one for race metadata.
settings = {
'FEEDS': {
# Athlete data. Inside this file will be a list of dicts containing
# data about each athlete's race and splits.
'athletes.json': {
'format': 'json',
'overwrite': True,
'item_classes': [AthleteItem],
},
# Race metadata. Inside this file will be a list with a single dict
# containing info about the race itself.
'metadata.json': {
'format': 'json',
'overwrite': True,
'item_classes': [RaceItem],
},
}
}
process = CrawlerProcess(settings=settings)
# Crawl results for the 2022 Leadville Trail 100 Run
process.crawl(RaceSpider, 'https://www.athlinks.com/event/33913/results/Event/1018673/')
process.start()
if __name__ == "__main__":
main()
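After the crawl finishes, the two feed files configured in `FEEDS` sit in the working directory and can be read back with the standard library:

```python
import json

# athletes.json is a JSON array with one record per participant.
with open("athletes.json") as f:
    athletes = json.load(f)
print(f"Scraped {len(athletes)} athletes")
```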
If you install the project in editable mode with its extras, `pip install --editable '.[lint,dev]'`, then you can run the CLI as `athlinks_cli`.
Then you can build the wheel (it lands in `dist/`) to install locally if needed:

```shell
python -m build .
```
Alternatively, you may clone this repo and use it like a typical Scrapy project that you might create on your own:
```shell
python -m venv $HOME/virtualenv/athlinks_races
. $HOME/virtualenv/athlinks_races/bin/activate
git clone https://github.com/josevnz/athlinks-races
cd athlinks-races
pip install --editable '.[lint,dev]'
```
Run a `RaceSpider` against a few races from different years:
```shell
cd athlinks_races
scrapy crawl race -a url=https://www.athlinks.com/event/33913/results/Event/1018673 -O $HOME/1018673.json
scrapy crawl race -a url=https://www.athlinks.com/event/382111/results/Event/1093108 -O $HOME/1093108.json
scrapy crawl race -a url=https://www.athlinks.com/event/382111/results/Event/1062909 -O $HOME/1062909.json
```
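Each command writes one JSON array per race (the `-O` flag overwrites any previous file), so a small sketch can compare the exports; file names match the `-O` arguments above:

```python
import json
from pathlib import Path

# File names follow the -O arguments used in the commands above.
for name in ["1018673.json", "1093108.json", "1062909.json"]:
    with open(Path.home() / name) as f:
        items = json.load(f)
    print(f"{name}: {len(items)} items")
```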
All that is required at runtime is Scrapy (and its dependencies). To run the test suite:

```shell
. $HOME/virtualenv/athlinks_races/bin/activate
pytest tests/*.py
```
This project is licensed under the MIT License. See LICENSE file for details.
You can get in touch here:
- GitHub: https://github.com/josevnz
If you want to take a look at the original project, note that its author is not in charge of this forked version:
- GitHub: https://github.com/aaron-schroeder