Becoming a Better Programmer by Tightening Feedback Loops
May 2022
I’m interested in strategies to improve deliberately and continuously as a programmer. I wrote up this post as a rough working note to get thoughts on it from others. I’ve thought about this on and off for the last two years, and have talked to ~25 experienced programmers about it. Mostly, it feels like “programmer training” is not a topic that is taken very seriously, probably because the skill is hard to quantify. Since there are no established strategies, the potential returns from thinking about this topic are all the greater.
The best general strategy I’ve come up with so far is trying hard to get tight feedback loops. I got the initial idea from a conversation with Bill Zito. In my mind this makes sense if you consider that programming is an engineering discipline that requires lots of structured experimentation, and running experiments is more effective when you get feedback on them faster. Examples: Debugging is faster and much more enjoyable if your test fails after 1s, compared to an integration test that fails after 20m. If you want to experience what it’s like to drive a car, but with a long feedback loop, try steering a boat.
Concrete strategies
Coding
- Implementing a minimal viable version first: When I’m assigned a big task, I try to write up a small, messy, minimum-viable version of it. This reduces the time it takes me to figure out whether the approach has a chance of working. I’ll also open a draft PR, so the maintainers can tell me whether this is roughly what they were expecting.
- Splitting large PRs for merging: People prefer reviewing smaller PRs, and I can implement the proposed changes and ask for a re-review on individual PRs faster, compared to a single thousand-line change.
- Test-driven development: Once the test is written, you have a quick way to gauge whether your current code is working or not (see the sketch after this list).
- Code reviews on sole-maintainer open-source projects: Even if you are the only maintainer on a project, you can ask customers or friends for a code review. I’ve done this once so far. Happy to pay it forward here! Feel free to send me an email with your (ideally Python) project; if the project is not that big, it won’t take more than a few hours. Plus, since I had built the project from the ground up myself, the review gave many chances for fundamental criticism.
- Using the right tools: CLion detects C++ compiler errors before running the build. Compiler Explorer is the fastest way to write, compile, and tweak snippets when debugging performance issues or build errors. I bet there are many more examples here.
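To make the test-driven-development point concrete, here is a minimal sketch of the test-first loop. The `slugify` helper and its tests are made up for illustration; the only assumption is that pytest is installed.

```python
# tdd_sketch.py -- a single-file sketch of the test-first loop
# (in a real project the tests and the implementation would live in separate files)
import re


def slugify(text: str) -> str:
    """Hypothetical helper: turn a title into a URL slug."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)


# Tests written first; they define what "working" means before the code exists.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Tight Feedback Loops") == "tight-feedback-loops"


def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"
```

Running `pytest tdd_sketch.py` after every small change gives a pass/fail signal in about a second, which is exactly the kind of tight loop this post is about.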
Learning
- Implementing small projects instead of reading books: With books, it takes too long between reading about a topic and getting the first signal on how much of it I internalized. Small projects are more favorable in this regard: implementing a basic version of some tool and testing whether it works correctly ideally takes a few days or less. Examples: For deep learning, I’ve done this by implementing backpropagation from scratch using plain NumPy (see the sketch after this list).
- Flashcards for introducing feedback loops where there are none: When learning something, I write flashcards to get a very tight question-answer-feedback loop. I’ve written about this before, on Twitter. I use this for learning algorithms as well as remembering research papers and tracking common bugs. It’s a good way to measure how much of the topic I was reading about I properly understood. Nice side effect: I can still talk about the fundamental contributions of most papers I read two years ago.
- Insisting on regular performance reviews at work: I think this is commonplace at most large companies, but not at every startup. I found it works best if people continuously write down good and bad behavior they notice when interacting with their colleagues, for example in a private text file. This removes most of the recency bias when the review itself only happens every couple of months.
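To illustrate the backpropagation example above, here is a minimal sketch in plain NumPy. The network shape, the toy data, and the hyperparameters are all made up for illustration; this is not the exact project I built.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sum(x) on random inputs (made up for illustration).
X = rng.normal(size=(64, 3))
y = X.sum(axis=1, keepdims=True)

# Tiny two-layer network: 3 -> 8 -> 1 with a tanh hidden layer.
W1 = rng.normal(scale=0.5, size=(3, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))
lr = 0.1

for step in range(500):
    # Forward pass.
    h_pre = X @ W1 + b1
    h = np.tanh(h_pre)
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule applied layer by layer.
    d_y_hat = 2 * (y_hat - y) / len(X)       # dL/dy_hat
    dW2 = h.T @ d_y_hat                      # dL/dW2
    db2 = d_y_hat.sum(axis=0, keepdims=True)
    d_h = d_y_hat @ W2.T                     # dL/dh
    d_h_pre = d_h * (1 - h ** 2)             # tanh'(x) = 1 - tanh(x)^2
    dW1 = X.T @ d_h_pre
    db1 = d_h_pre.sum(axis=0, keepdims=True)

    # Plain gradient-descent update.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

    if step % 100 == 0:
        print(f"step {step:3d}  loss {loss:.4f}")  # loss should drop steadily
```

The test for whether I internalized the topic is simply whether the printed loss goes down; if it doesn’t, I get that signal within seconds rather than after finishing a book chapter.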
General productivity
In this area, there’s often no signal regarding whether some “productivity-enhancing” technique works or not, which is why I mostly avoid them. Exceptions:
- RescueTime: It automatically tracks which window is currently focused, in the background. Good for detecting how much time is wasted on distractions.
- Recording your screen while coding: I’ve heard others have had success with this, but I’ve never seriously tried it myself. Like using flashcards for learning, screen recording introduces a feedback loop where there previously wasn’t any signal, which may be the most impactful kind of intervention.
Interviewing
Existing sites like HackerRank or Leetcode already pull the feedback loop very tight for you, by giving you a way to submit code and run tests immediately. Further ideas:
- Doing mock interviews as early as possible: For example through Pramp, which lets you stage mock interviews with other programmers who are currently preparing for applications.
- Interviewing with your second-tier choices earlier: This will give you a good signal quickly, since the interview processes are often similar between companies. Ideally, you should record yourself during the interview so you can review it later.
Situations where this strategy fails
Bad results from a misleading signal
This is the most important counterpoint to keep in mind. Tight loops will allow you to progress faster towards a state of “high reward”. However, this end state depends strongly on the person or process that is giving you the signal. It’s vital to consider whether you actually want the result to look good in the eyes of the person you’re asking for feedback; otherwise you’ll be pushed in the wrong direction. Examples:
- Startups: You shouldn’t ask your friends or VCs for feedback on your startup idea, but your customers. I’ve been told feedback loops play a central role in the famous ‘Lean Startup’ book, but it’s been many years since I last read it.
- One-sided PR reviews: Asking multiple people for PR reviews (not necessarily on the same PR) can protect your coding style from veering too strongly in one biased direction, for example in how you weigh maintainability against performance.
Faster feedback can be worse feedback
There are cases where getting feedback faster will make the feedback worse, requiring a trade-off. Examples:
- Software architecture: To get a good signal on your software-architecture skills, it may make sense to spend some days or weeks building the architecture out, instead of asking for feedback as early as possible. The upsides of some architectures might not be visible to others early on.
- Performance reviews: If you schedule them too often, people won’t focus on the big, important issues, but will bring up small and often unimportant actions from the recent past.
Fast feedback may prevent you from learning
I’m uncertain whether this effect exists, but it sounds plausible: when the feedback is fast, you never actually learn how to avoid the error, but instead always rely on the experimentation loop. Examples:
- Spell checking: If you use a spell-checking program, you may never learn the correct spelling yourself, costing you time in the long run. Additional consideration: a spell checker may distract you from improving the core content of what you’re writing. This is related to my ‘misleading signal’ counterpoint above.
Conclusion
Feedback loops feel like a good lens for approaching the problem of improving as a programmer. I’m still actively thinking about this, so if you’re interested in this topic, send me an email and let’s have a chat!
Further links
- Cedric Chin has a great post called The Problems with Deliberate Practice, where he describes why the “deliberate practice” framework is hard to apply in areas like programming that have no established training methods.
- Simon Eskildsen’s Napkin Math: By doing back-of-the-envelope calculations about system design, you can explore a vast space of design options quickly, making it easier to get order-of-magnitude improvements (a toy example follows below).
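As a rough illustration of the napkin-math style (all numbers below are made-up round figures, not taken from Simon’s posts):

```python
# Napkin math: can a single node serve 10,000 requests/s straight from an SSD?
# All numbers are made-up round figures for illustration only.
requests_per_second = 10_000
reads_per_request = 2              # assume two random reads per request
ssd_random_read_s = 100e-6         # assume ~100 microseconds per random SSD read

io_seconds_per_second = requests_per_second * reads_per_request * ssd_random_read_s
print(f"sequential IO time needed per wall-clock second: {io_seconds_per_second:.1f}s")
# ~2s of IO per 1s of wall time: a single sequential IO path can't keep up,
# so you'd reach for parallel IO, caching, or more machines before micro-optimizing.
```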
Thanks to Uwe Korn, Karl Lorey, and Bill Zito for helpful discussions around this topic.