The purpose of this repo is to spin up an environment that simulates a retail POS endpoint management scenario. It is specifically built to highlight the EAS Automate dashboard capabilities, Habitat service configuration coordination, and Habitat application lifecycle features.
THIS REPO IS CURRENTLY UNDER DEVELOPMENT. Contact Greg Sloan if you have any questions or want to collaborate.
15 instances in total will be deployed.
- An Automate instance (populated with an API token you provide)
- 2 QA Instances
- 3 "Stores" - each consisting of 4 instances.
- 2 Registers
- 1 Back Office
- 1 Permanent Peer
The deployed infrastructure is meant to represent a retail chain with multiple stores, multiple applications running in the stores, and multiple hardware profiles running in the stores. The stores share the same basic topology (2 registers and 1 back office system); however, Store1 and Store2 are running on an older "Gen1" hardware set, while Store3 is running on the latest "Gen2" hardware set.
This information is reflected in the EAS dashboard as "Environment", "Site", and "Application" parameters, allowing an admin to sort and filter through the data to focus on the store, instance, or set of instances that they need to work with.
- Terraform (>=0.11.14, <0.12)
- Cloud Account: AWS
- One or two standalone Habitat-deployable applications to run on the "register" and "backoffice" systems that are deployed. Any application will work if you just need to show the EAS Automate dashboards, or use the ones referenced by default (gsloan-chef/retail-register-app and gsloan-chef/retail-backoffice-app), which are designed to work with the whole demo flow.
Clone this repo to a working directory on your development workstation.
OPTIONAL - for "Phase 3" of the demo (a command sketch follows below):
- Fork and clone https://github.com/gsloan-chef/retail-register-app
- Modify habitat/plan.sh with your origin
- Build and upload the Habitat package to your origin
- Promote the package to the channel(s) that match your settings in the "Set terraform.tfvars values" section below
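As a rough sketch of those optional steps, assuming your Builder origin is `myorigin` and `HAB_AUTH_TOKEN` is already exported for Builder access (the origin name and channel below are placeholders; substitute the values from your terraform.tfvars):

```sh
# Build, upload, and promote the register app under your own origin.
export HAB_ORIGIN=myorigin              # placeholder origin name
hab studio enter                        # enter the studio from the app repo root
build                                   # run inside the studio
source results/last_build.env           # exposes $pkg_artifact and $pkg_ident
hab pkg upload results/$pkg_artifact
hab pkg promote $pkg_ident my_channel   # repeat for each channel set in terraform.tfvars
```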
Before deploying with Terraform, you must first log in to AWS using okta_aws (https://github.com/chef/okta_aws).
Ensure that a valid release of the application configured in terraform.tfvars (below) exists in the channels specified in terraform.tfvars (you will set a channel for each store plus a QA channel; these can all be the same for simplicity, or different to demonstrate channel selection).
An A2 instance will be spun up by this repo. You will be required to provide an "automate_token". This can be any validly formatted A2 API token; it will be added to the new A2 system and used as the token for all of the other resources.
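If you do not already have a token handy, one way to generate a random string to use as the automate_token value is sketched below (this is an assumption, not a documented requirement of this repo; any validly formatted A2 API token will do):

```sh
# Generate a 32-character URL-safe random string to use as automate_token.
openssl rand -base64 48 | tr -d '/+=' | cut -c1-32
```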
- Change directories into the repo
- Change directories into terraform/
- Copy tfvars.example to terraform.tfvars
- Fill out the AWS account details
- Set disable_event_tls = "true" to enable Application dashboard feature
- Set your values for the "Automate Config" section.
- If using your own application package, modify "POS Application Config" section
- Run `terraform init`
- Run `terraform apply`
- Note the resulting Automate IP, Automate URL, Automate token, and Automate admin password
- The terraform output will contain a block of IP addresses and hostnames, ready to be copied into /etc/hosts for easy access to the instances (see the command sketch below)
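A minimal sketch of those deploy steps, run from the repo root:

```sh
cd terraform/
cp tfvars.example terraform.tfvars   # then edit the values described above
terraform init
terraform apply
terraform output                     # re-print the outputs; copy the hosts block into /etc/hosts
```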
Once deployed, there should be 14 nodes listed under both the Infrastructure and Compliance tabs in Automate. On the Applications tab there will be 22 application groups displayed; some will have multiple instances grouped under them. One of the application groups will be displaying a health-check warning status.
- Log in to Automate and show the EAS "Applications" dashboard. Explain that this retailer is operating 3 stores, running the same application on 2 different generations of hardware, with registers and back office systems.
- Briefly introduce the Compliance and Infrastructure tabs, explaining that the systems are being managed with Chef Infra and InSpec delivered as Habitat packages (visible in the Applications tab).
- Return to the Applications tab and identify the application instance with the WARNING status. Click on the application group and identify it as "Store2_Register1" running on the older "Gen1" infrastructure.
- Bonus points (resolve the WARNING; see the sketch after this list for other ways to inspect the service from the box):
  - `ssh -i ~/.ssh/my_aws_key centos@Store2_Register1`
  - `journalctl -fu hab-sup`
  - Watch for the health-check message: `Register App Log Too Large (/hab/logs/register.log)`
  - `sudo rm /hab/logs/register.log`
- Return to the Applications dashboard to see the WARNING status clear
- Demonstrate the various filtering and sorting options available in the Applications dashboard
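In addition to tailing the Supervisor journal, the failing health check can be inspected directly from the register instance. A short sketch, assuming the Supervisor's HTTP gateway is listening on its default port 9631:

```sh
# From the Store2_Register1 box:
sudo hab svc status                                                         # loaded services and their state
curl -s http://localhost:9631/services/retail-register-app/default/health   # health-check status from the Supervisor HTTP gateway
```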
This section assumes the default gsloan-chef/retail-register-app (or your own fork) package is being deployed
Each store is configured in its own service ring with its own permanent peer. We can inject store-specific configuration anywhere on that ring and all of the instances on that ring will pick up the config change.
- In a browser, load both Register instances from one of the stores (e.g. `http://Store1_Register1:8080/retail-register-app/Register` and the same path on the store's other register). Note the message of the day in the header (default = "We are doing pretty well!")
- Connect to a permanent peer instance (e.g. `ssh -i ~/.ssh/my_aws_key centos@Store1_PermanentPeer`)
- Create a new file, motd.toml, and add a different value for the motd:
  `motd = "Top Selling Store in the Region"`
- Run `sudo hab config apply retail-register-app.default $(date +%s) motd.toml`
- Refresh the store browsers to show the new message displayed on both.
- Use this as an opportunity to talk about how any new register will pick up its configuration from the ring, with no need to configure each box (e.g. the Store # in the header is also configurable; see the sketch below)
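As an illustration of that point, a hypothetical second config apply could change the store number shown in the header. The key name `store_number` is an assumption; check the package's default.toml for the actual key the app exposes:

```sh
# Applied from any member of the store's service ring, e.g. the permanent peer.
cat > store.toml <<'EOF'
store_number = 1
EOF
sudo hab config apply retail-register-app.default $(date +%s) store.toml
```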
This section requires your own fork of https://github.com/gsloan-chef/retail-register-app
We will make a change to the HTML code that builds the retail app, test that change locally, and promote it through channels.
- Clone your fork of the retail-register-app repo
- Make the code change
  - Either copy the contents of `calc_discounts.html` into `habitat/config/calc.html`
  - OR directly edit `habitat/config/calc.html` and add `<td bgcolor=Pink rowspan=3 align=center>Apply Discount<br><img width=40 src=images/percent.png></td> <!-- Adding "Apply Discount" functionality -->` after line 103
- `export HAB_DOCKER_OPTS='-p 8080:8080'`
- `hab studio enter`
- `build`
- Load the application locally to validate changes
  - `source results/last_build.env`
  - `hab svc load $pkg_ident`
- Open a browser and load `http://localhost:8080/retail-register-app/Register`
- A new "Apply Discount" button should now display
- Open a QA instance in a browser, e.g. `http://QA_Gen1_Register:8080/retail-register-app/Register`
- Open a production instance in a browser, e.g. `http://Store2_Register0:8080/retail-register-app/Register`
- `hab pkg upload results/$pkg_artifact`
- If you use a channel other than `unstable` for `qa_channel` in terraform.tfvars, run `hab pkg promote $pkg_ident {qa_channel}`
- Refresh the QA instance in the browser. In about a minute, the new version should display. Show that the production instance isn't affected.
- Run `hab pkg promote $pkg_ident Gen1_Prod` (or substitute the channel that was set in terraform.tfvars for the store you loaded in your browser)
- Refresh the production browser to show the update (a verification sketch follows below).
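To confirm the rollout landed, one option (a sketch; the hostname and key path mirror the examples above) is to SSH to the production register and watch the Supervisor pick up the promoted release:

```sh
ssh -i ~/.ssh/my_aws_key centos@Store2_Register0
sudo hab svc status      # the package column should show the newly promoted release
journalctl -fu hab-sup   # watch the Supervisor download and restart the updated service
```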