Running CI tests in two steps
GitLab, GitHub, Travis, and others are all CI platforms intended to run all sorts of integration tests as very granular actions. The trouble is that you end up writing an absurd amount of YAML logic just to link them together.
Writing CI jobs as directed acyclic graphs in YAML is a pain, but it’s not the worst part. The worst part is that to make each job run fast, you need to do a surprising amount of optimization: exposing artifacts to the next step, running setup scripts for each step, rebuilding artifacts, or starting services. It adds up to a lot of configuration.
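To illustrate the kind of wiring this forces on you, here is a sketch loosely following GitLab CI syntax; the job names, scripts, and paths are hypothetical, but the pattern of repeated setup and artifact hand-offs is the point:

```yaml
# Hypothetical DAG-style pipeline: each job repeats setup and
# passes artifacts along just to keep later jobs fast.
build:
  stage: build
  script:
    - ./setup.sh          # setup repeated in every job
    - make build
  artifacts:
    paths:
      - dist/             # exposed so the next job can reuse them

test:
  stage: test
  needs: [build]          # one edge of the DAG
  script:
    - ./setup.sh          # ...and repeated again
    - make test
```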
I’ve found it much easier to reduce all of these problems and complexities to two tasks: test it, then do whatever you need to make it run faster next time. Basically, the first time you build something it might take a while; the second time it shouldn’t.
The entire build process is just:
- Build targets and run tests.
- Install dependencies and build everything for next time. Then build, tag, and push the image.
The second task only runs if the first succeeds.
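A minimal sketch of that gate in shell, with placeholder function bodies standing in for the real build, test, and push commands; the one real mechanism here is that `&&` runs the second task only if the first exits successfully:

```shell
#!/bin/sh
# Hypothetical two-step CI entrypoint. The echo lines stand in for
# real build/test/push commands.
step_test() {
  echo "building targets and running tests"
}
step_speedup() {
  echo "installing deps, prebuilding, tagging and pushing image"
}

# step_speedup runs only if step_test exits 0
step_test && step_speedup
```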
This means you collapse all of your CI work into a single Docker image. It runs database processes, integration tests, and more; it’s more or less your entire architecture/stack in a single VM. The downside is that you end up with larger Docker images (in my case a couple of gigabytes), but these don’t take too long to pull.
The upside is that you are forced to simplify your setup.
In my case, a 5.6 GB image is pulled in a couple of seconds, then it spends about 5 minutes building targets and maybe another 4 minutes running tests. That isn’t bad for a couple dozen libraries.
The main benefit of this two-step CI process is that all of your tests are easy to reason about. They run with a simple command, in order, on one VM.