The software engineering Stack Exchange has a mostly dead (not judging) community blog associated with it. I found this article on controversial programming opinions there, via Hacker News. Because opinions are like, um, other things that everyone has one of, here are my takes on the list.
Each opinion is presented in italics, and I’ll respond assuming that you’ve read the original argument for the opinion.
Programmers who don’t code in their spare time for fun will never become as good as those who do. Right out of the gate, I have to disagree, for two reasons. One is that we already spend a lot of time programming in our jobs, enough to get as good as you can get. The other is that non-programming activities will expand your abilities in ways that programming alone can’t. A weak corollary of that is that you need to actually have spare time to recuperate from anything in order to stay effective.
Unit testing won’t help you write good code. Here, I say that unit tests are necessary for good code, but not sufficient. Besides merely “mak[ing] sure that code that already works doesn’t break,” which is absolutely needed for “good code”, unit tests document how the code should work. And because people don’t comment, that’s critical.
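To illustrate the documentation point, here’s a minimal sketch. The `slugify` helper and its behavior are hypothetical, invented for the example; the point is that each test name and assertion records how the code is supposed to behave, whether or not anyone wrote a comment.

```python
import unittest

def slugify(title):
    """Turn a post title into a URL slug (hypothetical helper)."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Each test doubles as documentation of the intended behavior.
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Controversial Opinions"),
                         "controversial-opinions")

    def test_collapses_runs_of_whitespace(self):
        self.assertEqual(slugify("SQL  is   code"), "sql-is-code")
```

Run it with `python -m unittest` and you get a regression check and a spec in one file.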
The only “best practice” you should be using all the time is “Use Your Brain”. This is mostly true. Mindless application of patterns, frameworks, etc. is useless, but strategic use of them is very valuable. Just because something is new and cool doesn’t mean it’s trash. I mean, most of the time it is, but once in a while it’s not!
Most comments in code are in fact a pernicious form of code duplication. You know what? Pardon my French, but just fucking comment your fucking code. My controversial opinion is that people who argue against commenting are themselves too lazy to do it and are looking for excuses not to. We all know by now how to do it properly, i.e., without just restating the code, so be disciplined and do it. In the real world, there’s never time to reach the ideal of self-documenting code.
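Since “properly” is doing a lot of work in that paragraph, here’s a sketch of the distinction, using a hypothetical `fetch_with_retry` helper I made up for the example: the “what” comment restates the code and deserves the critics’ scorn; the “why” comment records a decision the code alone can’t express.

```python
import random
import time

# The pernicious kind -- a "what" comment that merely restates the code:
#     retries += 1  # increment retries by one

def fetch_with_retry(fetch, max_retries=5):
    """Call fetch() until it succeeds or we give up (hypothetical example)."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except IOError:
            # Why, not what: randomized ("full jitter") backoff keeps many
            # clients from hammering a recovering server in lockstep.
            time.sleep(random.uniform(0, 2 ** attempt))
    raise IOError("gave up after %d attempts" % max_retries)
```

The jitter comment is exactly the kind of thing a future maintainer can’t recover from the code itself.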
“Googling it” is okay! YES! It is! In the real world, we all use references, and search engines are the tool of our time, standing in for a vast bookshelf. When I give programming interviews, I’ll even do the googling for the candidate so they can concentrate on the problem at hand.
Not all programmers are created equal. Well, true, but this is beside the point. People bring more than just their “programming ability score” to the table in the workplace. Most importantly, someone with a “lower” ability may adapt to and conquer a problem better than someone with a “higher” ability, because of other factors. This is part of why diversity matters.
I fail to understand why people think that Java is absolutely the best “first” programming language to be taught in universities. The commenter here is coming from the “Real Programming” camp of C/C++ programmers, where “experience in debugging memory leaks” is important. That perspective falls to its own logic: by the same argument, you really should learn assembly before C. Anyway, there is no best first programming language, but some are better than others. Scratch is superb. Gosh, what does a hardcore C programmer think of that?
If you only know one language, no matter how well you know it, you’re not a great programmer. This is mostly true, although it depends on your definition of “great”. It’s better said (and the author does say this) that you will never get as good as you can be without learning other languages. Even just learning their perspectives gives you more power in your language of choice.
It’s OK to write garbage code once in a while. This is true, and you need to swallow your pride when you do it. You have a job to do. To paraphrase a former boss of mine, you don’t have to innovate in every space you work in. (And then, when you bang out that trash, fucking comment it.)
Print statements are a valid way to debug code. Yup! I myself try to start with logging (at ERROR level, so it definitely shows up!), but if it’s annoying to do that, I print and go. It’s more important to keep your train of thought than to take the time to launch and step through a debugger.
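As a sketch of that ERROR-level habit, assuming Python’s standard `logging` module and a made-up `reconcile` function under investigation: even when the app is configured to show only warnings and above, the ERROR-level breadcrumb still cuts through.

```python
import logging

logging.basicConfig(level=logging.WARNING)  # a typical production-ish default
log = logging.getLogger(__name__)

def reconcile(orders, payments):
    """Hypothetical function being debugged."""
    unmatched = [o for o in orders if o not in payments]
    # ERROR outranks the configured WARNING level, so this line
    # definitely shows up while I'm investigating.
    log.error("reconcile: %d orders, %d payments, %d unmatched",
              len(orders), len(payments), len(unmatched))
    return unmatched
```

Just remember to delete or downgrade the line before committing, same as a stray print.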
Your job is to put yourself out of work. The author’s meaning here is to write code that someone else could pick up easily if you are gone for any reason – which is totally correct. I see a deeper meaning. Your job is to make anyone’s effort on your work no longer necessary, so that you all can proceed to more important things and advance your state of being.
Getters and Setters are highly overused. This smacks of more lazy programmer ideology. We can all type really fast now, or else use IDEs that fill the methods in. Just do it and move on. You get encapsulation and thread safety (when you need it) really easily later.
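The payoff the author describes translates to Python too. In this sketch (the `Account` class is invented for illustration), `@property` lets validation and a lock be bolted onto what started as a plain attribute, with no change to any call site — exactly the “encapsulation and thread safety later” argument:

```python
import threading

class Account:
    """Hypothetical example: a plain attribute later upgraded to a guarded one."""
    def __init__(self, balance=0):
        self._lock = threading.Lock()
        self._balance = balance

    @property
    def balance(self):
        with self._lock:  # thread safety added later; callers unchanged
            return self._balance

    @balance.setter
    def balance(self, value):
        if value < 0:  # validation added later; same call sites
            raise ValueError("balance cannot be negative")
        with self._lock:
            self._balance = value
```

Callers still just write `acct.balance = 25`, same as when it was a bare field.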
SQL is code. Treat it as such. Absolutely true. The author here is focused on maintaining good style in the SQL, but more fundamentally, you need to check in and version control your SQL. I’ve used Flyway in my work; it’s great.
UML diagrams are highly overrated. Except for sequence diagrams, which I love, this is pretty much true. UML is for heavy-process, top-down projects that generate useless documentation like UML diagrams that no one ever reads again.
Readability is the most important aspect of your code. Even more so than correctness, the author says, but there I must disagree. If broken readable code must be made “unreadable” to be correct, then do it. And then, fucking comment it so someone later can make it readable again, and still correct.
XML is highly overrated. True. The blog post is from 2012 referencing a question from 2009, so this opinion is pretty much moot now. JSON, YAML, protocol buffers, and DSLs rule most of the areas formerly ruled by XML.
Software development is just a job. True, and I have trouble keeping this in perspective myself.
If you’re a developer, you should be able to write code. Again, true. This refers to interview candidates who couldn’t code themselves out of a paper bag. I haven’t seen this in my own experience, to that extent, but I have been disappointed with the skill level of some candidates. Being able to do simple development shows that you have a certain mindset which is critical, more so than fluency in any particular language.
Design patterns are hurting good design more than they’re helping it. Maybe. I think nowadays most coders don’t think about them. In my opinion, they are valuable for two reasons: as thought-out, reusable solutions to common problems; and as a common vocabulary for how code works. That said, mindless application of them usually does more damage, like using the same brick for every part of a building.
Less code is better than more! Correct. Smaller code volume is easier to understand, debug, improve, and it runs faster. All wins.
I suppose I should conclude with a “controversial” opinion of my own, so here’s one. I actually stated it above, but I’ll boil it down.
Most programmers who are philosophically against commenting are just too lazy to comment their own code. They object to all the known “wrong” ways that others comment as an excuse not to comment in useful ways, like to explain why the code works as it does, to record the thought process behind it. If commenting is such a plague upon our existence, then (and tell me if I’m wrong!) why do all common, professional programming languages include comment support in their syntax?