Before I read the book Continuous Integration by Paul Duvall, Stephen M. Matyas III, and Andrew Glover, I thought that CI simply meant creating a deployment pipeline through which we can easily automate the deployment of our software. That, and the fact that developers integrate continuously with each other.
I’m not saying that’s a wrong definition; I’m saying it might be too narrow for what CI really is.
Thank you, Paul, Stephen, and Andrew, for the inspiring book and the motivation to write this post.
Several things became clear to me when studying CI. One of them is that everything is based on the principle of automation. The moment you start thinking “I can’t automate this” is the moment you should ask yourself whether that is really the case.
CI is all about automation. We automate:
- Compilation (different environments, configurations)
- Testing (unit, component, integration, acceptance…)
- Inspection (coverage, complexity, technical debt, maintainability index…)
- Documentation (code architecture, user manual, schemas…)

We automate the build that runs all these steps, we automate the feedback we get from it, …
You can automate almost everything. One thing you can’t automate is Manual Testing, because by definition manual testing means letting a human test your software and letting that human decide what to test. You can in fact automate the environment in which this human tests the software, but not the testing itself (otherwise it wouldn’t be called “manual” testing).
That’s what most intrigued me when studying CI: the automation. It makes you think of all those manual steps you must take to get your work done, all those tiny steps that by themselves don’t mean much but add up to a big waste when you see them all together.
If you must always build your software locally before committing, couldn’t we just place the commit commands at the end of our build script?
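That idea can be sketched as a small build script: run every step in order, and only commit when all of them succeed. The `run_build_then_commit` helper and the step names are hypothetical; in a real setup the callables would shell out to your compiler, test runner, and version control client.

```python
# Sketch: a local build script that only commits when every step passes.
def run_build_then_commit(steps, commit):
    """steps: list of (name, callable) pairs; commit: zero-arg callable."""
    for name, step in steps:
        if not step():
            print(f"Build step '{name}' failed - not committing.")
            return False
    commit()  # in a real script this would shell out to `git commit`
    return True

# Example with stand-in steps (real ones would invoke the compiler and tests):
log = []
committed = run_build_then_commit(
    [("compile", lambda: True), ("unit tests", lambda: True)],
    commit=lambda: log.append("commit"),
)
```

Because the commit callable only runs after every step returns success, a broken local build can never reach the repository.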
It’s kind of funny when people talk about “building” software. When some people say “I can’t build the software anymore”, they don’t always mean “build”; they mean “compile”. In the context of Continuous Integration, the “compile” step is only the first step of the pipeline, but to some people it’s the most important one. Many think of it as:
“If it compiles == it works”
When you check out some code and the Build fails (build, not compilation), that could mean several things: failed Unit Tests, insufficient Code Coverage, exceeded Cyclomatic Complexity limits, … but also a compilation failure.
In the next paragraphs, when I talk about a “build” I’m talking in the context of CI and don’t mean “compile”.
Continuously Building Software
Is your build automated?
Are your builds under 10 minutes?
Do you place the tasks that are most likely to fail at the beginning of your build?
How often do you run your integration builds? Daily? Weekly? At every change (continuously)?
- Every developer should have the ability to run (on demand) a Private Build on his or her machine.
- Every project should have the ability to run (on demand, polled, or event-driven) an Integration Build that includes slower tasks (integration/component tests, performance/load tests…).
- Every project should have the ability to run (on demand or scheduled) a Release Build to create deployable software (typically at the end of an iteration), which must include the acceptance tests.
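Putting the checklist and the three build types together, each build could be sketched as a plain task list, ordered so that the fastest tasks most likely to fail run first. The task names are illustrative, not any real tool’s configuration:

```python
# Sketch of the three build types; each list is ordered to fail fast:
# cheap, failure-prone tasks first, slow ones last.
BUILDS = {
    "private":     ["compile", "unit tests", "quick inspection"],
    "integration": ["compile", "unit tests", "component tests",
                    "integration tests", "performance tests"],
    "release":     ["compile", "unit tests", "component tests",
                    "integration tests", "acceptance tests", "package"],
}

def tasks_for(build_type):
    """Return the ordered task list for a given build type."""
    return BUILDS[build_type]
```

The point of the ordering is feedback speed: if compilation or a unit test is going to break the build, you want to know before the slow integration suite even starts.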
Continuous Testing: Preventing Defects
Are your tests automated?
Are you writing a test for every defect?
How many asserts per test? Limit to one?
Do you categorize your tests?
“Drive to fix the defect and prevent it from reoccurring”
Many other posts discuss the Test-First and Test-Driven mindset and the reasoning behind it, so I won’t cover that here. What I will discuss is the reaction people have to a failing test in the build.
A failed build should trigger a “Stop the presses” event within the team. Everyone should be concerned about the failure and help each other make the build succeed again as quickly as possible. Fixing a failed build should be the responsibility of the whole team, not (only) of the person who broke it.
But what do you do when the build fails? How should you react?
First, expose the defect by writing a test that passes: one that asserts the current, defective behavior. When that new test passes, you have proven the defect and can start fixing it. Note that we don’t write a failing test!
There are three reasons why you should write a passing test for a defect (we’re using Test-Driven Development, right?):
- It’s difficult to write a failing test whose assertion is correct: the right assertion may only appear once the test no longer fails, which means you don’t have a test that genuinely passes, only one that has stopped failing.
- With a failing test you’re guessing what the fix should change in the behavior, which is an assumption.
- If you then fix the code being tested, you’re left with a test that once failed but doesn’t actually verify the behavioral change.
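As a sketch of this approach, assume a hypothetical `total_price` function with an off-by-one defect. The first test passes, thereby documenting the defect; once the loop is fixed, the assertion is updated to the correct expectation and keeps guarding against the defect reoccurring:

```python
# Hypothetical defect: total_price drops the last item (off-by-one bug).
def total_price(prices):
    total = 0
    for i in range(len(prices) - 1):   # bug: should be range(len(prices))
        total += prices[i]
    return total

# Step 1 - a test that PASSES today, proving the defect exists:
def test_defect_last_item_is_dropped():
    # This assertion documents the wrong behavior: the 3 is missing.
    assert total_price([1, 2, 3]) == 3

# Step 2 - after fixing the loop, the assertion becomes the correct one:
#     assert total_price([1, 2, 3]) == 6
```

The passing test removes the guesswork: you have pinned down what the code does now, so the fix is verified against a documented starting point.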
To end the part on testing, let me be clear on something many developers fail to grasp: the different kinds of software tests. I have encountered several definitions, so I merge them here for you. The most important part is that you cover all these aspects, not whether you choose to call them Acceptance Tests or Functional Tests:
- Unit Tests: test the smallest possible “units” of code with no external dependencies (including the file system, database…); written by programmers, for programmers; specify the software at the lowest level…
Michael Feathers has a set of unit testing rules that specify whether a test can be considered a Unit Test.
- Component Tests: encapsulate business rules (could include external dependencies), …
- Integration Tests: don’t encapsulate business rules (could include external dependencies); test how components work together; “plumbing tests”; test the architectural structure, …
- Acceptance Tests (or Functional Tests): written by business people; define the definition of “done”; aim for clarity, communication, and precision; test the software as the client expects it (Given > When > Then structure), …
- System Tests: test the entire system, sometimes overlapping with the acceptance tests; test the system from a developer’s perspective…
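One lightweight way to categorize tests, sketched here without any framework, is to tag test functions with a category and let each build type pick which categories to run. The `category` decorator is an illustration; a real project would more likely use its test framework’s markers (e.g. pytest marks):

```python
# Sketch: tag tests with a category so each build can pick what it runs.
TESTS = {}  # category name -> list of registered test functions

def category(name):
    def register(test_fn):
        TESTS.setdefault(name, []).append(test_fn)
        return test_fn
    return register

@category("unit")
def test_adding():
    assert 1 + 1 == 2

@category("integration")
def test_components_work_together():
    assert True  # placeholder for a slower cross-component check

def run(categories):
    """Run all registered tests in the given categories."""
    for cat in categories:
        for test_fn in TESTS.get(cat, []):
            test_fn()

run(["unit"])  # a Private Build might run only the fast unit tests
```

An Integration Build would then call `run(["unit", "integration"])`, keeping the fast feedback loop of the private build intact.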
Continuous Inspection

Can you show the current amount of code complexity?
Performing automated design reviews?
Monitoring code duplication?
Current code coverage?
Produce inspection reports?
It probably won’t surprise you that Code Inspection is maybe not the most “sexy” part of software development (is Code Testing sexy?). But it’s nonetheless a very important part of the build.
Try asking some projects what their current Code Coverage is. Maintainability Index? Technical Debt? Duplication? Complexity?…
All those elements are so easily automated, yet so few teams adopt this mindset of Continuous Inspection. These elements are a good starting point:
- Minimum amount of Code Coverage (Use OpenCover, DotCover, NCover …)
- Code Duplication (Simian, NDepend, …)
- Technical Debt (NDepend)
- Code Style (FxCop, StyleCop, ...)
- [StackOverflow Discussion]
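To show how automatable such inspections are, here is a rough sketch that estimates cyclomatic complexity by counting branch points in a Python syntax tree. Dedicated tools (the ones listed above, or radon for Python) are far more precise; the `cyclomatic_complexity` helper and the threshold below are illustrative only:

```python
import ast

# Rough sketch of automated inspection: estimate cyclomatic complexity
# as 1 + the number of branch points in the syntax tree.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

SAMPLE = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"
"""
# A build could fail whenever this number exceeds an agreed threshold:
assert cyclomatic_complexity(SAMPLE) <= 10
```

Wire a check like this into the build and the team gets a complexity gate for free on every commit, instead of a number nobody ever looks up.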
Continuous Deployment

Can you rollback a release?
Are you labelling your builds?
Deploy software with a single command?
Deploy with different environments (configuration)?
How do you handle fixes after deployment?
At the end of the pipeline (in a Release Build), you could trigger the deployment of the project. Yes, you should include the Acceptance Tests here, because this is the last step before the actual deployment.
The deployment itself should be done with one “Push on the Button”; as simple as that. In Agile projects, the software is deployed from the very beginning of the project. This means the software is placed at the known deployment target as quickly as possible.
That way the team gets feedback as quickly as possible on how the software behaves in “the real world”.
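A minimal sketch of that “Push on the Button” idea, assuming labelled releases and an in-memory stand-in for the deployment target (a real pipeline would push packages to a server):

```python
# Sketch: one-command deploy with labelled releases and rollback.
class Deployer:
    def __init__(self):
        self.releases = []   # labelled releases, newest last

    def deploy(self, label, artifact):
        self.releases.append((label, artifact))

    def current(self):
        return self.releases[-1] if self.releases else None

    def rollback(self):
        """Drop the newest release and return the previous known-good one."""
        if len(self.releases) < 2:
            raise RuntimeError("no previous release to roll back to")
        self.releases.pop()
        return self.current()

d = Deployer()
d.deploy("v1.0", "app-1.0.zip")
d.deploy("v1.1", "app-1.1.zip")
d.rollback()   # one command back to the last known-good release
```

Because every release carries a label, answering “can you rollback a release?” becomes a single call rather than an archaeology exercise.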
Continuous Feedback

When you deploy, build, test… something, wouldn’t you want to know as quickly as possible what happened? I certainly do.
One of the first things I always do when starting a project is check whether I (and the team) get the right notifications. As a developer, I want to know as quickly as possible when a build succeeds or fails. As an architect, you want to know what the current documentation of the code base is and what the code looks like in schemas. As a project manager, you may want to know whether the acceptance tests succeeded, so the client gets what he or she wants…
Each role has its own responsibilities and its own reasons to want feedback. You should be able to give them this feedback!
I use Catlight for my build feedback, work item tracking, release status… Maybe in the future this tool will support pull request notifications too.
Some development teams have an actual big colorful lamp that indicates the current build status: Red = Failed, Green = Successful, and Yellow = Investigating. Some lamps even turn a darker red when the build stays in a "failed" state for too long.
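The lamp logic is simple enough to sketch directly; the one-hour threshold for turning darker red is an arbitrary assumption:

```python
# Sketch of the build lamp: map build status (and how long it has been
# failing) to a lamp color. The 60-minute threshold is an arbitrary choice.
def lamp_color(status, failed_for_minutes=0):
    if status == "successful":
        return "green"
    if status == "investigating":
        return "yellow"
    if status == "failed":
        return "dark red" if failed_for_minutes > 60 else "red"
    raise ValueError(f"unknown build status: {status}")

lamp_color("failed", failed_for_minutes=90)   # "dark red"
```

Hooked up to the CI server's status feed, a function like this is all the "software" such a lamp needs.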
Don’t call this a full CI summary, because it certainly isn’t. See it as a quick introduction to how CI can be implemented in a software project, with the high-level actions in place, and to what you can improve in your project’s automation process. My motto is that anything can be improved, and therefore automated further.
I would also suggest you read the book I talked about and/or check the ThoughtWorks site for more information on recent developments in the CI community.
Start integrating your software to develop with less risk and higher quality. Make it so automated that you just have to “Push the Button” – The Integrate Button.