Frontend DevOps usually ends up being nothing more than concatenation and minification of the application assets, yet that is only a small portion of what can be achieved.
We want to optimise our workflow from the moment the code is written to the moment it runs in a browser. We want to ensure that quality does not suffer, even under strict and harsh deadlines. We want our code to be sustainable, and we definitely do not want to end up in a situation where changing a half-year-old piece of code is terrifying. We want our code to be scalable, so that we can remove or add things without worrying that something we are not even aware of could break because of those changes. We also want to reduce technical debt. It is important to understand that no matter how great your code or your workflow is, there will always be some technical debt; however, we have the ability to reduce it or slow its growth.
First of all, it is important to understand which steps we want to take, and in which order, to deliver a feature for our application. Everything starts with a discussion: we want to make sure that when we leave the meeting room, everyone understands what we are building. Developers and QAs can then start working out the detailed behaviour of those features, creating different scenarios.
All of that goes back to management, we confirm that we all agree this is exactly how our feature is supposed to work, and we start building it. We write tests — they fail, we write code — the tests pass, the console is green, we are good to go.
To do all that, we will need some sort of compilation step that takes all of our code, assets, whatever, and bundles it up so we can actually run our code locally.
Then we have our testing step. We want to make sure that our code runs properly and produces the expected output.
None of that frees us from doing reviews. Code review should always be part of your workflow. Your colleagues might see things that you have not noticed, or have an idea for an alternative implementation of some part of your code.
Eventually we need the ability to build our application. Everyone on your team will have the project set up, and we want to give them the ability to build the application on their machine without any struggle, not just "works on my machine".
Deploy. We will have development, staging, and live environments, and we need a process that puts our code onto those remote machines.
No matter what we do, there should still be manual testing. Someone should open our application and check it on different devices, etc. There should never be 100% trust in your pipeline.
Finally, we go back to discussing yet another new feature. This cycle will always keep going, and to keep it from stalling we need strong DevOps processes.
So we want to split our quality steps into two levels and run the fastest ones first, so that we fail fast and receive instant feedback: if our code fails due to a parse error, there is no reason to start the testing processes at all.
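As a minimal sketch of that fail-fast ordering, an npm scripts setup can chain the cheap checks before the expensive ones, so a lint or parse error stops the run before any tests start. The script names and paths here are illustrative, not prescribed by any tool:

```json
{
  "scripts": {
    "lint": "eslint src/",
    "test": "karma start karma.conf.js",
    "verify": "npm run lint && npm run test"
  }
}
```

With `&&`, `npm run verify` never reaches the test step if linting fails, which is exactly the instant feedback we are after.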
We can use Gulp, Grunt, npm scripts, or Webpack as our task runner/bundler. It is also great to use Babel or TypeScript, both excellent compilers that let us use language features that are not yet supported by all browsers; at the end of the day your team writes in one language, and it just works everywhere. Next comes ESLint. It is a linter with a lot of validations behind it; we can configure it to our team's needs and no longer have to worry that our code styling differs from function to function.
To wrap up the compilation level, we want to run a complexity check. We do not want to overcomplicate our code; we do not want 100-line functions or classes that import a long list of other classes. We want to keep complexity at a decent level so the code remains maintainable, and developers who end up working with that part of it do not feel overwhelmed. Plato is a great tool that will generate such a report for you.
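A hedged sketch of how style and complexity limits might look in a single ESLint config. The specific limits (a cyclomatic complexity of 10, 20 statements per function) are example values to adjust to your team's taste, not recommendations from any tool:

```json
{
  "extends": "eslint:recommended",
  "env": { "browser": true, "es6": true },
  "rules": {
    "semi": ["error", "always"],
    "quotes": ["error", "single"],
    "complexity": ["warn", 10],
    "max-statements": ["warn", 20],
    "max-depth": ["warn", 4]
  }
}
```

This way the same tool that guards code style also flags functions that are growing too complex, before Plato's report even comes into play.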
We want our application to have decent test coverage: not necessarily 100%, but the crucial parts must be covered. The important thing is to look at our application from a modular perspective, so that when we connect all the pieces together we can be confident of great results at the integration level as well.
We can use Karma, which allows us to run our tests in multiple browsers. It can use PhantomJS, a headless browser that requires no UI and works in the background.
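A minimal Karma configuration along these lines might look as follows; the file globs and the choice of Jasmine are assumptions for illustration, not requirements:

```javascript
// karma.conf.js — a minimal sketch; adjust paths and frameworks to your project
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],          // could equally be mocha
    files: [
      'src/**/*.js',                  // application code
      'test/**/*.spec.js'             // test files
    ],
    browsers: ['PhantomJS'],          // headless, no UI needed
    singleRun: true                   // run once and exit, as in CI
  });
};
```

Swapping `browsers` for a list like `['Chrome', 'Firefox', 'PhantomJS']` is how the same suite gets exercised across real browsers locally.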
There are plenty of frameworks to help you with testing: Mocha, Jasmine, and Tape are all great. Essentially, we want automated frontend testing with some form of testing library that provides assertions.
Another important thing to remember when configuring your pipeline is that tests should not be part of your build process. They should run in parallel with compilation and not interrupt it; we do not want to slow ourselves down. The results of the tests will always be there in the console, but they should not delay the path from the moment the code is written to the moment it is available in the browser.
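One way to sketch that parallelism, assuming the `npm-run-all` package is available (script names are illustrative):

```json
{
  "scripts": {
    "build": "webpack --watch",
    "test:watch": "karma start --no-single-run",
    "dev": "npm-run-all --parallel build test:watch"
  }
}
```

Here `npm run dev` starts the bundler and the test watcher side by side, so a failing test shows up in the console without ever blocking the rebuild that puts fresh code in the browser.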
We want our tests to output some sort of report so we know our actual test coverage as a percentage. Coverage works closely together with a threshold. Coverage shows us how many lines of code, functions, and statements are covered by tests. We should be reactive in our work and first cover the parts of our code that produce the most issues, so we should not hesitate to adjust our settings by priority; there are plenty of tools that let us configure our threshold.
If we are introducing testing to an existing project, a threshold of 20% is more than enough; for a brand new project, stick with 50%. Eventually we will start writing less new code and spend more time maintaining the existing code, and that is the perfect time to increase the threshold.
Of course, all of that may vary depending on the project, and nothing stops us from adjusting it however our team feels comfortable. The threshold lets us make sure that not only do the unit tests pass, but all new or adjusted functionality is properly covered up to the threshold value; otherwise we simply fail the build and send the developer back to work.
We also want to show managers that the quality of our application is at a decent level, and to let them look at potential problem areas, explain them to the client, and adjust development priorities before it is too late. To achieve that, we can use HTML reports that give a clear, pretty, per-file output with all the necessary statistics, without overwhelming anyone with deep technical information.
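As one possible sketch, the nyc (Istanbul) coverage tool can enforce a threshold and emit an HTML report from the same config; the 50%/40% figures below are example values matching the new-project suggestion above, not defaults:

```json
{
  "nyc": {
    "check-coverage": true,
    "lines": 50,
    "statements": 50,
    "functions": 50,
    "branches": 40,
    "reporter": ["html", "text"]
  }
}
```

With `check-coverage` on, a run below the threshold fails the build, while the `html` reporter produces the per-file pages that managers can browse without reading any code.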
The last part we want to implement is end-to-end testing. It takes far more time than unit or integration testing. This is when developers work closely with QA: they sit down together and figure out all of the possible scenarios in the application.
Scenarios can be positive or negative; the simplest example is login, with two scenarios: the positive one is a successful login and the negative one is wrong credentials. To help us with that, we are going to use Selenium, which is essentially web browser automation: it physically opens the browser and does what you tell it to do, emulating user behaviour. We can extend it with the Cucumber plugin, which enables behaviour-driven development.
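In Cucumber, those two login scenarios would live in a feature file written in its Gherkin syntax; each step then maps to automation code driving Selenium. The wording of the steps below is a hypothetical illustration:

```gherkin
# login.feature — illustrative scenarios, not from a real project
Feature: Login

  Scenario: Successful login
    Given I am on the login page
    When I submit valid credentials
    Then I should see my dashboard

  Scenario: Wrong credentials
    Given I am on the login page
    When I submit an invalid password
    Then I should see an error message
```

Because the feature file reads as plain English, QA and developers can review and extend the scenarios together before a line of step-definition code is written.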
So now we already have three quality verification steps before the actual build, and another one is code review. There is not much to say about it: take an hour's break, come back, and review your code in a calm and quiet environment, and have your colleagues do the same. Great tools with a lot of features are Upsource from JetBrains and Crucible from Atlassian.
We hope you get the idea behind all those quality steps and behind building and maintaining a client-side DevOps pipeline.
For the frontend, we care about the compile step: we always run it first, because it is the fastest and brings the most value for the least amount of work.
Complexity checks are our watchdog, so the code does not become too complex and stays maintainable.
Unit tests check that each piece of our code works correctly on its own.
Integration tests verify that when we connect to something else, it also works as expected. And we need something that builds the application, so everyone can set it up.
And then the deploy: as soon as we finish the review phase and see the green light, deployment should happen automatically. We should be thinking about our colleagues reviewing our code, not about the deploy process failing.
At the end of the day, our main goal is to get things done. We want sustainable code, we want to clearly understand why it breaks when it breaks, and we want to lower the amount of technical debt we have; again, not remove it completely (that is impossible), but lower it as much as we can.
It is important that the entire team is involved in this process and wants to improve it, because it directly affects the outcome of the product we are building. Every person on the team, developers, QAs, and managers alike, will benefit from the results of our pipeline. Adopt DevOps, be productive, and keep delivering a high-quality product.