A few weeks ago I posted about my team’s work toward moving our application teams to using Argo CD for Kubernetes deployments. The work entailed a months-long set of changes to our deployment tool to gradually push users from the old way to the new way. At the time of that post, before our final(-ish) transition to requiring Argo CD usage, I’d just sent out a message designed to sound a little scary for those who hadn’t been paying attention, to get them moving. Despite a little negative reception, I was hopeful that the transition would go smoothly.
For those who aren’t familiar with the term, refactoring is the process of rearranging the innards of a body of code without changing its external behavior. This isn’t a discussion about refactoring techniques, though, but instead about how refactoring plays into code reviews.
Coders have differing opinions about seemingly inconsequential aspects of their work, like code style:

- whether to use spaces or tabs for indentation
- where to introduce blank lines
- how to format comments: end-of-line, single line, block
- whether to comment at all
- which bracket style to use: GNU or 1TBS or Java or Stroustrup or …
- how to order import / include statements
- how long lines may be, and where to break them
As you gain experience (read: get older), you realize that what’s important isn’t which choice is “correct”, but using the same choice consistently across your team’s code. (If you are a lone coder, it’s your world, go nuts!) Establishing a house style makes it easier, mentally, to read over all of the code, regardless of who wrote it.
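One common, tool-agnostic way to record a house style is an EditorConfig file at the repository root. This is just a sketch; the values here are illustrative choices, not recommendations (and max_line_length is an extension that only some editors honor):

```ini
# .editorconfig — hypothetical house style
root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true
max_line_length = 100
```

The point isn’t these particular values; it’s that the choices are written down once, and editors apply them for everyone.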
Another option could be to send an event about the error to a collection system. That’s like logging, but the event could store structured details about the context of the call beyond the error string.
What if you want the event to include more about what the function was doing at the time? For example, suppose the function was looping through a list, and you’d like to include the current item in the event. By the time the function has returned, that information is gone.
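To make that concrete, here’s a small bash sketch (shell being my hammer these days); `process`, `emit_event`, and the JSON event shape are all made up for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical task: succeeds for every item except "beta".
process() { [[ $1 != beta ]]; }

# Emit a structured event as a JSON line. The "item" field carries the
# loop context that a plain error string would lose.
emit_event() {
  printf '{"level":"error","message":"%s","item":"%s"}\n' "$1" "$2" >> events.log
}

for item in alpha beta gamma; do
  if ! process "$item"; then
    emit_event "process failed" "$item"
  fi
done
```

The event is recorded at the point of failure, inside the loop, so the current item is still in hand when it’s captured.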
In this little series of posts, I’ll describe techniques that you, humble coder, can use to make your code reviews smaller. Why do that? Justifications abound, so I’ll just link you to a couple of good articles.
In my own writing, I’ll get into the details of how to accomplish the goal of smaller code reviews. I will use git terminology, but the ideas should apply to any version control system where human review is involved.
I’ll start off with one of my favorite tricks for forming a small code review, which is what I’ll call here a “no-effect” change.
Suppose that you have a set of changes spread across a ton of files. All of the changes are related to one idea (great!), but there are simply a lot of them. Find a small set of files that could be split into their own review and do so, even if merging them doesn’t change program behavior.
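As a concrete sketch of the peel-off, here’s the move with plain git commands, run in a throwaway repo (branch and file names are made up):

```shell
set -e
# Build a throwaway repo with a "big" branch touching two files.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email you@example.com
git config user.name "You"

echo helpers-v1 > util.txt
echo app-v1 > app.txt
git add . && git commit -qm "initial"

git checkout -qb big-feature
echo helpers-v2 > util.txt
echo app-v2 > app.txt
git add . && git commit -qm "big change"

# Peel off just util.txt into its own small, no-effect review branch.
git checkout -q main
git checkout -qb no-effect-split
git checkout big-feature -- util.txt   # stages util.txt as it exists on big-feature
git commit -qm "Tidy helpers (no behavior change)"
```

The new branch carries only the low-risk files, so its review is quick, and the remaining big-feature diff shrinks accordingly.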
A big part of all-hands meetings (not the only part, to be fair) is leaders trying to convince the rank and file that they should do such-and-such things because reasons. Aligning the entire org and all that, you know.
I tend to focus on the words that speakers use. In the arguments for doing new things, I often hear sentences starting with “we need to”. For example!
We need to reach profitability by the end of 20xx …
We need to ensure our product has the highest quality …
We need to cut our expenditures by y percent …
We need to function as a single, unified team …
Usually there are sensible reasons why “we need to” do these things, so the goals are not silly. But I’ve realized that in all of these cases it’s not literally true that “we need to” do any of them.
In my new job (started middle of 2021) I’m gaining a reputation as quite the bash hacker. It’s possible that I am turning into one of those sage old-timers that I’ve heard legends about, although it doesn’t feel like it personally. Still, it’s nice that the “old” tools like bash, sed, and awk seem to be inspiring awe and respect when they are used … well, appropriately really. I guess there is a reason why the elegant one-liners for those tools are called “incantations” even when they are just ordinary usage. To folks unfamiliar with the tools, they can look magical.
As a bash nerd, then, you know I’ve spent a non-trivial amount of time finessing my .bashrc file. Near the beginning of my new job, in the middle of transitioning my old development environment to my new laptop, I realized just how big my standard file had gotten. So, I devised a better organizational scheme for it, using the quasi-standard of stowing scripts in a “.bashrc.d” directory and sourcing them. After fiddling a little more with it today, I’ve posted it for fellow bash fans to use if they like.
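The core of the scheme is just a loop in .bashrc that sources everything in the .bashrc.d directory. Here’s a minimal sketch; the demo points at a temp directory so it’s self-contained, but in real life the directory would be "$HOME/.bashrc.d":

```shell
# The pattern: source every readable *.sh file in a .bashrc.d directory.
# Demo uses a temp dir in place of "$HOME/.bashrc.d".
bashrc_d=$(mktemp -d)
echo 'greeting="hello from bashrc.d"' > "$bashrc_d/00-greeting.sh"

if [ -d "$bashrc_d" ]; then
  for rc in "$bashrc_d"/*.sh; do
    [ -r "$rc" ] && . "$rc"
  done
  unset rc
fi
```

The numeric prefixes (00-, 10-, …) are a handy convention for controlling the order in which the glob sources the files.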
By the way, I’m fully aware of the idea that you should abandon shell scripts when they get too complicated for a better language, like Python. I have considered this idea and decided that it’s good advice for a lot of people and situations, but I’m personally fine with continuing to use bash in unholy ways. Sometimes you’ve got to embrace your dark side a little.
I have been working within a monorepo (that is, a gigundo Git repository with lots of separate projects in it) and participating in maintaining its test health. This includes unit tests.
After every commit to the repository, all of the unit tests across the whole thing are run by our CI system. The idea is that the entire monorepo should always build successfully.
Also, for every pull request (proposed change), all of the unit tests across the whole thing are run by our CI system. Same idea as above.
It isn’t feasible for individual contributors to do this themselves, as the monorepo is just too big and varied. I don’t even try to compile the entire thing myself, let alone run all of its unit tests. All I do, and expect from others, is work within the projects that are relevant to the task at hand (and their dependencies), including making sure that existing and new tests pass.
We really need existing tests to be reliable. A flaky test is one that fails once in a while, in random test runs in the CI system. For a PR test, that’s annoying because the failure seldom has anything to do with the proposed change. For a post-commit test, it’s just noise. In either case, someone tasked with investigating test failures has to spend time discovering that the flaky test is at fault – a false alarm. Additionally, someone proposing a PR has to re-run tests and hope that the flaky test doesn’t flake out again. This all produces frictionion in the development process.
Say you have multiple flaky tests. Now, a test run may fail if one or more of them fails. If there are only a few flaky tests, the effect isn’t much worse than having one. But, as the number of flaky tests grows, probability starts to be your enemy.
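To put rough numbers on it, assume each flaky test passes 98% of the time and flakes independently of the others (both numbers are illustrative). A run with n such tests then passes with probability 0.98^n, which awk can tabulate:

```shell
# Probability a full run passes, given n independent flaky tests
# that each pass with probability 0.98 (illustrative numbers).
for n in 1 5 20 50; do
  awk -v n="$n" 'BEGIN { printf "n=%2d  pass=%.1f%%\n", n, 100 * 0.98^n }'
done
# n= 1  pass=98.0%
# n= 5  pass=90.4%
# n=20  pass=66.8%
# n=50  pass=36.4%
```

Even at 98% reliability apiece, twenty flaky tests mean roughly a third of runs fail for no real reason.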
You’ve heard of the iron triangle, right? Of course you have, I think they teach it in coder kindergarten now. Since I couldn’t find any freely-licensed images of it in less than a minute, I drew my own.
As a refresher, you can only pick two of the three points of this triangle. The third one is therefore chosen for you. Well, that’s how it’s supposed to work.