I have been working within a monorepo (that is, a gigundo Git repository with lots of separate projects in it) and helping to maintain its test health, including its unit tests.
- After every commit to the repository, all of the unit tests across the whole thing are run by our CI system. The idea is that the entire monorepo should always build successfully.
- Also, for every pull request (proposed change), all of the unit tests across the whole thing are run by our CI system. Same idea as above.
It isn’t feasible for individual contributors to do this themselves, as the monorepo is just too big and varied. I don’t even try to compile the entire thing myself, let alone run all of its unit tests. All I do, and expect from others, is work within the projects that are relevant to the task at hand (and their dependencies), including making sure that existing and new tests pass.
We really need existing tests to be reliable. A flaky test, one that fails intermittently for no good reason, shows up as sporadic failures in CI runs. For a PR test run, it’s annoying because the failure seldom has anything to do with the proposed change. For a post-commit run, it’s just noise. In both cases, whoever is tasked with investigating test failures has to spend time discovering that the flaky test is at fault – a false alarm. On top of that, the PR author has to re-run the tests and hope the flaky test doesn’t flake out again. All of this adds friction to the development process.
Say you have multiple flaky tests. Now, a test run fails if any one of them fails. If there are only a few flaky tests, the effect isn’t much worse than having one. But as the number of flaky tests grows, probability starts to be your enemy.
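To see how quickly this compounds, here is a minimal sketch. It assumes each flaky test fails independently with the same per-run failure rate `f`; the specific rates and counts are hypothetical, chosen only for illustration.

```python
# Sketch: how the chance of a spuriously red CI run grows with the
# number of flaky tests, assuming each flaky test fails independently
# with per-run failure probability f (hypothetical numbers).

def run_failure_probability(num_flaky: int, f: float) -> float:
    """Probability that at least one of num_flaky flaky tests fails,
    turning the whole run red for a reason unrelated to the change."""
    return 1.0 - (1.0 - f) ** num_flaky

if __name__ == "__main__":
    # Even a modest 2% flake rate per test adds up fast.
    for n in (1, 5, 10, 25, 50):
        p = run_failure_probability(n, 0.02)
        print(f"{n:2d} flaky tests -> {p:.1%} chance of a red run")
```

With a 2% per-test flake rate, one flaky test reddens about 2% of runs, but ten such tests redden roughly one run in five, and fifty redden well over half; a test suite at that point is crying wolf more often than not.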