Badger Academy Week 5

Over the last week we worked on deployment and on developing atop the Flux/Om-like architecture that Viktor scaffolded before our eyes. This week we were joined by the digital poet Joe Stanton to help conjure up solutions to our woes.

It seems that there is an endless stream of problems on any project; we truly are being moulded into "True Grit" characters throughout this academy. To keep development moving, we decided to finally tackle the troubling issue of front-end testing that has been lingering for the past few weeks, ever since we came up against that milestone.

Over the past two weeks, Alex joined us to scaffold out the frontend's modular structure with proper SCSS styling! This week, the majority of Joe's work went into temporarily stubbing out the real API calls for testing sessions on the smart TV.

Styling separation

The solution that has proven itself over the last few weeks is compiling the SCSS files in a Gulp build task; a sketch of such a task follows the folder list below.
Sadly, without Ruby installed in the Node Docker container, we were unable to use the Ruby Sass compiler!
#INTERNALSASSVSLESSWARS

Adhering to the Assets, Modules, Vendor and Partials folder structure, we learned a straightforward way to scaffold styling.
Assets - Static files that aren't stylesheets (images and fonts!)
Modules - Separate files for the particular views/components within your application
Vendor - Generic stylesheets from open source frameworks or other non-personal sources
Partials - Default mixins and the like, if you would like to label them this way!
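As a rough illustration of that build step (not our exact gulpfile), compiling this folder structure might look something like the following, assuming the libsass-based gulp-sass plugin, which sidesteps the Ruby requirement; the paths are placeholders of our own:

// gulpfile.js — a minimal sketch, assuming the gulp-sass plugin (libsass-based,
// so no Ruby needed inside the Node Docker container). Paths are illustrative.
var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('styles', function () {
  return gulp.src('styles/main.scss')        // main.scss @imports modules, vendor, partials
    .pipe(sass().on('error', sass.logError)) // compile SCSS, log rather than crash on errors
    .pipe(gulp.dest('dist/css'));
});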

SVGs

SVGs are Scalable Vector Graphics. Since we're using a responsive CSS grid framework (we extracted the grid system out of Foundation and imported it; lovely, if you ask us), we definitely require scalable images, otherwise our application will look ugly!
#PixelPerfect?
Thanks to Sari, Badger-Time has its gorgeous logo exported to the SVG file format! Our proud work renders perfectly on any device, at any resolution our audience pleases.

Continuous integration/deployment

We can frankly say that API development, including our RSpec TDD approach, is going smoothly.

This week we finally scaffolded the last part of the continuous integration and delivery process: Amazon S3 + CloudFront deployment for the frontend and a DigitalOcean droplet for the backend.

Both of these were relatively straightforward compared to other obstacles we have come across! Thank you, open source contributions! (Namely the Gulp S3 plugin and Docker within DigitalOcean droplets.)

Tiago created his own Node.js webhook module: on receiving a POST request (with a required token, of course) sent after the CircleCI tests have passed, it pulls down the required Docker image with all the binary dependencies pre-installed and swaps the freshly git-cloned, production-ready version of the application into it.
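To give a flavour of the idea (this is a rough sketch, not catch's actual code; the token variable, image name and redeploy script are our own inventions), such a hook boils down to very little Node:

// deploy-hook.js — a rough sketch of the webhook idea, not catch's actual code.
var http = require('http');
var exec = require('child_process').exec;

http.createServer(function (req, res) {
  // CircleCI POSTs here after a green build, with a shared token in the URL.
  if (req.method !== 'POST' || req.url !== '/deploy?token=' + process.env.DEPLOY_TOKEN) {
    res.writeHead(403);
    return res.end();
  }
  // Pull the pre-built image (binary deps included) and swap in the new clone.
  exec('docker pull badgertime/api && ./redeploy.sh', function (err) {
    res.writeHead(err ? 500 : 200);
    res.end();
  });
}).listen(8080);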

For the frontend, deployment is done by running a basic Gulp 'deploy' task.
It's also worth noting that environment variables can be set in CircleCI and read from that Gulp deploy task!
12Factorness!
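For illustration, a deploy task along these lines would do it (a sketch assuming the gulp-awspublish plugin; the bucket variable is a placeholder, and the AWS keys live only in CircleCI's environment settings):

// gulpfile.js ('deploy') — a sketch assuming gulp-awspublish; names are placeholders.
var gulp = require('gulp');
var awspublish = require('gulp-awspublish');

gulp.task('deploy', function () {
  var publisher = awspublish.create({
    params: { Bucket: process.env.S3_BUCKET }  // set in CircleCI, not in the repo
  });
  return gulp.src('dist/**/*')
    .pipe(publisher.publish())      // credentials picked up from AWS_* env vars
    .pipe(awspublish.reporter());   // log each uploaded file
});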

Tiago's open source contribution: https://github.com/tabazevedo/catch

As he likes to call it: CircleCI in under 50 lines!

Fighting the frontend tests

Our attempt at scaffolding out Stubby programmatically failed: LXC container network address mapping issues.
Joe's fierce battle meant redirecting the API endpoint to mocked-out API routes, responding with static JSON datasets, only for the duration of the Gulp 'test' task; a sketch of the idea follows below.
This required restarting the Node.js server between each Cucumber.js test; absolutely brilliant!
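The gist of the stub looks something like this (a sketch assuming Express; the route, fixture file and port are illustrative, not our actual names):

// test-api.js — a sketch of the mocked-out routes, assuming Express.
// Real routes answer with static JSON fixtures, for the Gulp 'test' task only.
var express = require('express');
var app = express();

app.get('/api/projects', function (req, res) {
  res.json(require('./fixtures/projects.json'));
});

app.listen(process.env.TEST_API_PORT || 8001);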

At one point during the debugging process, Joe was unable to tell whether the correct 'test API' was actually being requested. He lazily evaluated the real API to force this confirmation. ...I know, right?

In the end Joe got to grips with the situation, but given the restrictions and our obvious refusal to recreate the backend API logic in Node.js just for the purpose of frontend testing, the result was static datasets. For now, the situation remains unresolved.

A potential final option is to reroute the testing API endpoint to a deployed "staging" backend API, then deploy the backend API to production in succession.
This keeps the logic intact while keeping the data pools separate.
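Something as small as an environment-keyed endpoint map would cover it (purely hypothetical; the hostnames here are invented):

// config/api.js — a sketch: choose the API endpoint per environment,
// so tests could hit a staging deployment instead of local stubs.
var endpoints = {
  production: 'https://api.badger-time.example.com',
  staging:    'https://staging.badger-time.example.com',
  test:       'http://localhost:8001'  // the stub server sketched above
};

module.exports = endpoints[process.env.NODE_ENV || 'production'];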

Next week, Badger-Time faces OAuth, serialising and sync-diffing algorithms!
Lesser woes, we'd all agree; honestly.
