Scremsong is intended to be run within Docker (`docker-compose up`). To get started:

- Run `docker-compose up db` and run `CREATE SCHEMA "scremsong";`
- Load CSVs from the `initial-data` folder:
  1. Load `columns.csv` into `app_socialcolumns`
  2. Load `emails.csv` into `app_allowedusers`
  3. Load `tweet_replies.csv` into `app_tweetreplies`
- Run `yarn install` in `frontend/`
- Add `127.0.0.1 scremsong.test.democracysausage.org` to `/etc/hosts`
- Create your self-signed SSL cert (see below)
- Run `db/scripts/replace-dev-with-prod.sh` to initialise your database with the latest state in PROD
- Run `docker-compose up db` and empty the `app_socialplatforms` table to clear our stored credentials
- Create a new set of Twitter credentials per the steps in `twitter_auth_step1()` in `views.py`

You're good to go! Navigate to https://scremsong.test.democracysausage.org
To create your self-signed SSL cert:

```bash
brew install mkcert
mkcert -install
mkdir keys && cd $_
mkcert wildcard.democracysausage.org
```
Add a Python Social Auth backend of your choice (e.g. from the list of social backends).

Assuming you're configuring Google as a backend for auth, refer to PySocialAuth Google and Google - Using OAuth 2.0 to Access Google APIs.
- Create a Web application OAuth 2 Client in the Google APIs Console
- Add `https://localhost:8001` as an Authorised JavaScript origin
- Add `https://localhost:8001/complete/google-oauth2/` as an Authorised redirect URI
- Enable the Google+ API
- Copy `django/web-variables.env.tmpl` to `django/web-variables.env`
- Add the resulting Client Id and Secret to `django/web-variables.env`
- Nuke and restart your Docker containers
- Navigate to https://localhost, choose Google as your signon option, and you should be sent through the Google OAuth flow and end up back at https://localhost with your username displayed on the app.
Now you're up and running!
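With python-social-auth, the backend wiring typically lives in Django settings along the lines below. This is a sketch, not Scremsong's actual settings module: the backend path and environment variable names follow python-social-auth's documented defaults, and the credentials are assumed to arrive via `django/web-variables.env`.

```python
# Sketch: typical python-social-auth wiring for the Google backend.
# The env var names follow python-social-auth's defaults; Scremsong's
# actual settings may differ.
import os

AUTHENTICATION_BACKENDS = (
    "social_core.backends.google.GoogleOAuth2",
    "django.contrib.auth.backends.ModelBackend",
)

# Populated from django/web-variables.env by docker-compose
SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = os.environ.get("SOCIAL_AUTH_GOOGLE_OAUTH2_KEY", "")
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = os.environ.get("SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET", "")
```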
Making yourself an admin:

Hop into your running `django` Docker container:

```bash
docker exec -i -t scremsong_django_1 sh
```

And enter the Django Admin shell:

```bash
django-admin shell
```

```python
from django.contrib.auth.models import User
User.objects.all()
user = _[0]
user.is_staff = True
user.is_superuser = True
user.save()
user.profile.is_approved = True
user.profile.save()
```
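If you'd rather not rely on the shell's `_` history, the same promotion can be wrapped in a small helper. This is a sketch: `promote_to_admin` is a name invented here, and it assumes the `profile.is_approved` field used above.

```python
# Hypothetical helper (not part of Scremsong) mirroring the shell
# session above; run it inside `django-admin shell`.

def promote_to_admin(user):
    """Grant Django staff/superuser rights and approve the Scremsong profile."""
    user.is_staff = True
    user.is_superuser = True
    user.save()
    user.profile.is_approved = True  # assumes the profile model shown above
    user.profile.save()

# Usage inside the shell (picking the user by email rather than by position):
# from django.contrib.auth.models import User
# promote_to_admin(User.objects.get(email="you@example.com"))
```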
Choose the next `VERSION` number.

```bash
./prodbuild.sh all
./prodbuild-dockerpush.sh VERSION all
```
- AWS S3 hosts the Public and Admin sites.
- CloudFlare sits in front, handles caching, and gives us HTTPS.
- Travis CI handles automatic deploys from GitHub for us.
- Duck CLI to ftp sync the legacy PHP API.
- S3 bucket setup for static website hosting, bucket policy set to public, and error document set to `index.html` to let React Router work.
  1. A second `www.democracysausage.org` bucket is setup to redirect requests to https://democracysausage.org
- CloudFlare just has default settings except for these Page Rules:
  1. `api.democracysausage.org/*` Cache Level: Bypass
  2. `democracysausage.org/static/*` Cache Level: Standard, Edge Cache TTL: A Month (because S3 sends No Cache headers by default)
  3. `democracysausage.org/icons/*` Cache Level: Standard, Edge Cache TTL: A Month (because S3 sends No Cache headers by default)
- Travis CI setup with default settings to pull from `.travis.yml` with environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION, CF_ZONE_ID, CF_EMAIL, CF_API_KEY, FTP_USERNAME, FTP_PASSWORD, FTP_PATH, REACT_APP_MAPBOX_API_KEY_PROD
```json
{
    "Version": "2012-10-17",
    "Id": "PublicBucketPolicy",
    "Statement": [
        {
            "Sid": "Stmt1482880670019",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME_HERE/*"
        }
    ]
}
```
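If you're scripting the bucket setup, the policy above can be templated per bucket. A sketch (the bucket name here is illustrative); the output can be written to a file and applied with `aws s3api put-bucket-policy --bucket BUCKET --policy file://policy.json`.

```python
# Sketch: template the public-read bucket policy above for a given bucket name.
import json

def public_read_policy(bucket_name):
    """Return the S3 static-site policy with the bucket ARN filled in."""
    return json.dumps({
        "Version": "2012-10-17",
        "Id": "PublicBucketPolicy",
        "Statement": [{
            "Sid": "Stmt1482880670019",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }, indent=2)

print(public_read_policy("democracysausage.org"))
```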
We've rolled our own CD system using Travis-CI and Digital Ocean's API.
- Docker Whale in Digital Ocean [or] Automated Continuous Delivery Flow For Simple Projects (Ultimately we didn't take this approach because of (a) the Docker Cloud dependency [the fewer third party deps the better] and (b) it wasn't clear if we could do a 100% rock solid CD with Docker Cloud [i.e. Wait until whole stack is up, exfiltrate log files])
- How To Use Floating IPs on DigitalOcean (Includes discussion of using Floating IPs for high availability - which we're not doing [yet!])
Per Travis-CI's documentation on encrypting multiple files containing secrets.
```bash
tar cvf secrets.tar secrets/travis.env secrets/scremsong-frontend.prod.env
travis encrypt-file --force secrets.tar
```
As part of deploying a new droplet we also shut down and clean up the old droplet - but a key requirement there was archiving the nginx log files in S3. To do that we 'just' need our Python-based deploy script in Travis to be able to SSH into our Droplet and get to the files created by the Dockerised nginx container.

Sounds easy? Yes and no. To cut a long story short: we're using `scp` and Docker volume mapping.
The tricky bit was dealing with the SSH keys. Ultimately we found this comprehensive and well written guide: How to Encrypt/Decrypt SSH Keys for Deployment on Travis-CI.
In short, we have:

- Created a new passwordless SSH key only for use by Travis and our Droplets (aka `deploy_key`)
- Added the public key to our Digital Ocean account
- The public keys in our Digital Ocean account are added to the `authorized_keys` file on all droplets we create via `deploy.py`
- Used the Travis-CI CLI's `encrypt-file` command to encrypt our private key and embed it in our `.travis.yml` using our Travis-CI public key
- Strung a few more commands together (per the aforementioned guide) in `.travis.yml`'s `before_install` section that decrypt the key, start the SSH agent, and add the key as our preferred SSH key
- In `docker-compose-prod.yml` we've simply mapped `./logs:/var/log` so we can more easily get at the nginx logs (and others in the future?)
- In `deploy.py` when we're shutting down a Droplet we use `subprocess` to call `scp` to SSH into the Droplet and copy the relevant logs out to our Travis-CI server
- Lastly, we rename and datestamp the logs and pop them up into S3
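The log-archiving steps can be sketched as below. This is not Scremsong's actual `deploy.py`: the SSH user, remote path, and helper names are assumptions, and the real droplet IP comes from the Digital Ocean API.

```python
# Sketch only: compose scp via subprocess, plus a datestamped rename
# ahead of the S3 upload. Paths and user are illustrative assumptions.
import subprocess
from datetime import date

def scp_logs(droplet_ip, remote_path="/scremsong/logs", local_path="./logs"):
    """Copy the nginx logs off the droplet. Assumes the deploy key is
    already loaded into ssh-agent (per the before_install steps)."""
    subprocess.check_call(
        ["scp", "-r", f"root@{droplet_ip}:{remote_path}", local_path]
    )

def datestamped(filename, when=None):
    """e.g. 'access.log' -> 'access-2019-05-18.log'"""
    when = when or date.today()
    stem, _, ext = filename.rpartition(".")
    return f"{stem}-{when.isoformat()}.{ext}"
```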
Phew!
Lessons learned?
Paying for Travis-CI would have let us just add SSH keys directly via the Web UI (at the cost of USD$69/month!)
In working out how to do this we ran across quite a few hiccups and "WTF" moments. Here's a collection of useful resources for our own record, and for any brave future explorers who happen upon this readme in search of the same solution.
- How to Encrypt/Decrypt SSH Keys for Deployment (The guide we followed - its GitHub issue is here.)
- Related, a Gist demonstrating how to encrypt keys and secrets in `.travis.yml` files
- Another Gist showing how to squeeze a private SSH key into a `.travis.yml` file
- Using `subprocess` to run commands and await output
- Best practices for hardening new server in 2017
- 7 Security Measures to Protect Your Servers
- What do you do with your first five minutes on a new server?
We're using CloudFlare's neat Origin CA service to secure traffic between our Droplet and CloudFlare. We used this guide to implement it. Steps in brief:
- Configure `cloud-config-yaml` to install the `cfca` package - CloudFlare's CLI for issuing certificates.
- Just before we `docker-compose up` in `cloud-config-yaml` we run `cfca getcert` to grab our cert and key from CloudFlare and deploy it to `/scremsong/app/`.
- Nginx is configured per these instructions.
- Updated our DigitalOcean Firewall to disallow all traffic on port 80.
- Changed our CloudFlare SSL mode from "Flexible" to "Full" to force CF to talk to our droplet over HTTPS.
We build and push our Docker images to Docker Hub to speed up our deploy time. If you make a change to one of the Dockerfiles you'll need to:

- Build, tag, and push a new image to Docker Hub at `VERSION`:

```bash
docker build -t keithmoss/scremsong-app ./app/
docker tag keithmoss/scremsong-app keithmoss/scremsong-app:VERSION
docker push keithmoss/scremsong-app
docker build -t keithmoss/scremsong-django ./django/
docker tag keithmoss/scremsong-django keithmoss/scremsong-django:VERSION
docker push keithmoss/scremsong-django
```
- Update `docker-compose-prod.yml` with the new `VERSION` and commit.
- Allow the normal deploy process to do its thing - it'll grab the new Docker image from Docker Hub.

(Remember to `docker login` to Docker Hub first.)
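The build/tag/push sequence is easy to script across both images. A sketch (the helper and version string are illustrative; the manual steps push the bare image name, whereas this pushes the versioned tag explicitly):

```python
# Sketch: generate the docker release commands above for both images.
IMAGES = {
    "keithmoss/scremsong-app": "./app/",
    "keithmoss/scremsong-django": "./django/",
}

def release_commands(version):
    """Return the docker build/tag/push commands for each image."""
    commands = []
    for image, context in IMAGES.items():
        commands.append(f"docker build -t {image} {context}")
        commands.append(f"docker tag {image} {image}:{version}")
        commands.append(f"docker push {image}:{version}")  # versioned tag, not bare name
    return commands

for command in release_commands("1.2.0"):
    print(command)
```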
- `yarn outdated`
- `yarn upgrade --latest`
- `depcheck`

Note: `babel-loader@8.1.0` is pinned because CRA needs at least 8.1.0 and react-tweet has (mistakenly) probably got the older version declared as a regular dependency.
poetry show --outdated
- Moving a static website to AWS S3 + CloudFront with HTTPS
- Host a Static Site on AWS, using S3 and CloudFront
- S3 Deployment with Travis
- Setting up a continuously deployed static website with SSL
- Deploying a static site to Github Pages using Travis and Cloudflare
- Secure and fast GitHub Pages with CloudFlare
- How to get your SSL for free on a Shared Azure website with CloudFlare