Programmers’ beliefs

Many of our day-to-day decisions when working with software are based on subjective opinions and beliefs rather than objective truth. Are you even aware of the software-related views you hold? What if you could change them on a whim? Are your beliefs useful?

When you identify a belief of yours, ask: how did you come to this belief? There must have been some experience or something you read or heard that gave you the idea in the first place. What if something else happened?

Here are some widely held beliefs, each with plenty of people on both sides. I’m not aware of objective criteria that would show a clear winner in any case.

  • Testing is good vs. testing is not worth the time.
  • Static typing is good vs. static typing is bad (or not worth it).
  • Estimates are worth the time, or they aren’t.
  • Agile (or iterative approach) is good, or it is bad.
  • Performance matters, or it mostly doesn’t matter.
  • Working for big companies is better vs. working for small companies is better.
  • Pair programming is good vs. pair programming is bad.
  • Object-oriented is the best paradigm vs. functional is the best.

Could it be that none of these is correct in absolute terms, but each is more useful in some contexts than in others?

Or maybe there are objective winners, but for some reason we don’t have a strong consensus about them.

It might also be that there is consensus, but it’s objectively wrong.

Imagine sincerely holding the other side of the belief. What would change in your behavior? How would it affect your other beliefs? What traps might it create?

Beliefs are sticky. Once acquired, they change how we perceive experiences, continually strengthening whatever we already believe.

Beliefs limit what we can perceive. For example, if you believe that automated testing is good and a bug escapes the tests, you will perceive it as an exception and a reason to write more tests. On the other hand, if you believe that testing is mostly a waste of time, you will see the same bug as supporting your belief.

An exercise I propose is to make a strong case for both sides, then write it down. This forces you to question your belief, at least for a while.

Here is my attempt:

  • Testing is good vs. testing is not worth it
    • Automated tests catch regressions before users do and let you change code with confidence. A bug that escapes the tests is a reason to write more tests.
    • Tests take time that could be spent on features, and bugs escape them anyway. Careful reasoning and manual checking catch more per hour invested.
  • Static vs. dynamic typing
    • We should use everything we can to ensure the correctness of our code, and static types are one of the best tools in that. They render entire classes of bugs impossible or almost impossible. They make for better tooling, like refactoring support in IDEs. They help achieve great performance.
    • Static types muddy the clarity of code. Leave the type checking to the runtime; focus on what your code means; types are not adequate for that. They give a false sense of security. Some types of tasks (like metaprogramming) become unnecessarily hard.
  • Estimates
    • Estimating is very hard in the context of software, and it doesn’t bring that much value. It would be better to focus on estimating value instead.
    • Estimating is possible when done correctly. Refer to How to Measure Anything for an example model (in short: estimate a range that you are 90% confident in, and use mathematical tools to combine multiple such estimates).
  • Agile
    • Incremental methodologies like Agile, when done right, are the most reasonable ways of working in most cases.
    • Incrementalism prevents serious progress. We need to take a deep look and carefully plan what we do.
  • Focus on performance
    • We ought to be careful about the performance of software. We can’t rely on riding the wave of Moore’s law forever. Much software nowadays is actually slower than its counterparts were 30 years ago!
    • Make the system run correctly first, and only then consider tweaking performance if really needed. If something is automated, it’s usually orders of magnitude faster than a human anyway. “Premature optimization is the root of all evil.”
  • Working for a big company vs. startup
    • Big companies pay better. Many exciting problems require a scale that can only be found at big companies. See Dan Luu’s article on startup trade-offs.
    • Big companies are doomed to work in the old ways, and if you want something novel, you need to look to startups. At small companies, opportunities are not yet restricted by red tape. Only in a startup do you get a good shot at having a significant impact on the company. See Paul Graham’s essays.
  • Pair programming
    • Working in pairs helps get better code quality and prevent mistakes. It aids knowledge transfer.
    • Pair programming has low ROI because it halves productivity in some sense. It can prevent careful thought that is only possible in solo work. Pair programming is awkward and stressful for many people.
  • OO vs. FP
    • Object-oriented programming became dominant for a reason. There is an extensive body of practice and educational materials that teaches how to design OO systems well. OO is a big part of most popular programming languages, and we should take advantage of it.
    • Functional programming is superior because of its simplicity. It is more natural to people. It’s not the 1950s, and we no longer have a reason to avoid FP because of performance. FP is like mathematics, pure and beautiful.
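The “mathematical tools” mentioned under estimates can be sketched concretely. One common approach (in the spirit of How to Measure Anything) is to treat each 90% confidence interval as a probability distribution and combine them with Monte Carlo simulation. Below is a minimal sketch under stated assumptions: each estimate is modeled as a normal distribution, and the task numbers are made up for illustration.

```python
import random

def simulate_total(intervals, trials=100_000, seed=42):
    """Combine per-task 90% confidence intervals via Monte Carlo.

    Each (low, high) interval is treated as a normal distribution:
    the midpoint is the mean, and since 90% of a normal distribution
    lies within ±1.645 standard deviations, sigma = (high - low) / (2 * 1.645).
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        total = 0.0
        for low, high in intervals:
            mean = (low + high) / 2
            sigma = (high - low) / (2 * 1.645)
            total += rng.gauss(mean, sigma)
        totals.append(total)
    totals.sort()
    # The combined 90% interval spans the 5th to 95th percentile.
    return totals[int(0.05 * trials)], totals[int(0.95 * trials)]

# Hypothetical tasks, each estimated in days as a 90% confidence range.
tasks = [(2, 6), (1, 3), (4, 10)]
low, high = simulate_total(tasks)
print(f"Combined 90% interval: {low:.1f} to {high:.1f} days")
```

Note that the combined interval is narrower than naively summing the bounds (7 to 19 days), because independent uncertainties partially cancel out.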

The third way

It might be that neither side of an argument is truly right or wrong. Nuance and context matter. Or maybe not! It might be that one side of the argument is actually right.

I argue that you have a better chance of finding the most useful belief if you put serious effort into seeing all possibilities instead of limiting yourself to whatever convinced you the first time. It is also worth looking for alternatives beyond the two stated sides. Is there a false dichotomy in how the choice is stated? Can one side be more right in some contexts and the other in different contexts? Is there a third way of any sort?

Here I propose arguments for “the third way” for the same beliefs I mentioned before.

  • Testing:
    • Automated testing is one of the tools that aid in making good software. You still need informal reasoning and manual testing for overall excellence.
  • Static types
    • Static or dynamic typing isn’t the most significant difference between programming languages anyway. Some problems can be more natural to model with either dynamic or static typing.
  • Estimates
    • Estimating is valuable sometimes. Figure out the value of estimating in your context before engaging in it.
  • Agile
    • In practice, a mix of careful design and incrementalism is the best. Some projects might benefit from longer planning, while others are best done incrementally.
  • Performance
    • The value of performance can be estimated, and it can vary from system to system how much investment is reasonable.
  • Big company vs. startup
    • Reject being an employee at all. Be a consultant and choose clients, not employers.
  • Pair programming
    • Pair programming should not be a must or a must not. Programmers need to be free to work in pairs when they judge it will be more effective, but never forced to pair. It should be normal to say whether you like pair programming or not, without being seen as antisocial.
  • OO vs. functional
    • Different problems are better modeled with different paradigms. Often either OO or FP can be used to solve a problem well. You can also mix both – see imperative shell, functional core.
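The “imperative shell, functional core” mix mentioned above can be sketched briefly. The idea is that pure functions hold the decision logic, while a thin shell handles all side effects. This is a minimal sketch with a hypothetical discount example, not a prescription:

```python
# Functional core: pure decision logic, no I/O, trivial to test.
def apply_discount(subtotal: float, loyalty_years: int) -> float:
    """Return the order total after a loyalty discount."""
    rate = 0.10 if loyalty_years >= 2 else 0.0
    return round(subtotal * (1 - rate), 2)

# Imperative shell: all side effects (input, payment, storage) live here.
def checkout(read_order, charge_card):
    subtotal, loyalty_years = read_order()           # I/O at the edge
    total = apply_discount(subtotal, loyalty_years)  # pure decision
    charge_card(total)                               # I/O at the edge
    return total
```

The pure core can be tested with plain assertions and no mocks, while the shell stays thin enough that it rarely needs tests of its own.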

Did you find any other programming beliefs that you hold? Let me know!

The worst tool makes easy things easier and hard things harder

Most tools (libraries, runtime dependencies, developer tools) that become popular make things easier; that’s why we use them! Their popularity is built on the simple cases, because those are what is most apparent when deciding whether to adopt a tool. After all, it’s hard to consider complex cases and trade-offs without first investing time into learning the tool.

But some of these tools make the complex cases even more difficult or otherwise worse off.

It’s easy to tell that easy things get easier; it’s hard to know that hard things get harder until it’s too late. If you don’t know the tool yet, you can’t judge the long-term effects. By the time you have spent a lot of time with it, you are heavily invested in it; reverting would be costly, and you would have to admit being wrong.

An example from my experience: at one company I worked at, we migrated a chunk of a shopping system from a “traditional” ACID SQL database to a NoSQL one that shall remain unnamed. The benefits were instantly visible: common operations were fast, and the interface library we used was very pleasant to work with; it all seemed very nice. Only after some time did I find out that many writes were being lost, seemingly at random. After a lengthy investigation, I found out that we needed to change our approach to writing collection types because of the inherent properties of that distributed NoSQL database (if you are experienced with distributed systems, it’s probably obvious what we did wrong).
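One classic way that writes to collection types get lost in distributed stores (not necessarily the exact bug we hit, but the same general shape) is the read-modify-write race: two clients read the same collection, each writes back its own modified copy, and the later write silently overwrites the earlier one. A toy simulation of that failure mode:

```python
# Simulated last-write-wins store: each write replaces the whole value.
store = {"cart:1": {"apples"}}

# Two clients read the same collection at the same time...
client_a = set(store["cart:1"])
client_b = set(store["cart:1"])

# ...each adds its own item and writes the whole collection back.
client_a.add("bananas")
store["cart:1"] = client_a

client_b.add("cherries")
store["cart:1"] = client_b   # overwrites A's write entirely

print(store["cart:1"])  # "bananas" is gone, even though client A wrote it
```

The usual fixes (atomic collection operations, conditional writes, CRDT-style types) all constrain how you model data, which is exactly the kind of hidden cost that only shows up after adoption.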

I had problems getting anybody to acknowledge the source of the problem; that database was the heart of a shiny new system that the company was very proud of, and promotions were tied to its development. The solution I proposed would basically wipe out all the benefits of using that database in the first place.

Because I left shortly after, I don’t know if they ended up adopting my solution, finding a different one, or kept ignoring the problem.

Ultimately, a tool that brought convenience and other benefits to the most obvious cases (reads and writes of basic data types) also made the more complex cases harder.

Another example is a popular query system that is presented as an alternative to REST. The base cases look neat in examples. It comes with a library that is a breeze to use. But at the edges, things get worse – caching becomes more complicated, inspection tools built for HTTP often become useless, control flow in complicated cases gets harder to manage. The thing that benefitted the most from the tool – the fairly efficient read of nested data over HTTP – was not the major problem in the first place.

Both of these technologies have tons of devout users. Naturally, they are happy to argue about the benefits of these tools – and they are correct, there are benefits! – but the arguments revolve around the most common cases, never the problematic edge cases.

When thinking about performance, it makes sense to optimize for common cases. But with development time, I propose to adopt an inverse approach – pay special attention to how edge cases will be affected because most development, debugging, and bug-fixing time is going to be spent there.

How to do it? Explicitly ask: what trade-offs does this technology make? How easy is it to inspect things deeply when something goes wrong? If you need help, will there be anyone to provide it?

Do something different for a week

I get bored rather quickly in the programming context. Here’s a trick I sometimes do to keep things interesting – I pick one aspect of my development process and experiment with doing something different.

I compiled a list of things to try. Some of them I have already tried, others not yet.

In my experience, the results are often surprising; things that seem outlandish work out just fine, or something that I had high hopes for ends up being “meh.”

The meta lesson for me here is that I should not praise or bash a practice without giving it a solid try.

I recommend trying things out for at least a week, preferably two. In most cases, you’re going to experience enough nuance to make an informed decision about whether to incorporate it into your regular workflow.

Some of these should probably not be tried for the first time in a professional context where you work with other people in the same code repository, while others are safe anywhere. Use your judgment.

Some of these can be more or less relevant for your programming language or environment (web frontend, backend, anything).

Make components as small as possible. Try making the units of code (functions / classes / components / whatever is relevant for you) as small as you reasonably can.

Isolate examples of code in sandboxes. If you work with web frontend and encounter something you are confused about, recreate a minimal version of it in an online sandbox like JSFiddle. This makes it easier to understand what’s going on.

Do new kinds of tests. If you haven’t tried unit testing, property-based testing, performance testing, or any other type of testing you can find, try it now.
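As an illustration of one item above: property-based testing checks that an invariant holds over many generated inputs rather than a few hand-picked examples. Here is a minimal hand-rolled sketch (real tools such as Hypothesis add input shrinking and much smarter generation); the run-length encoder is a made-up function under test:

```python
import random

def run_length_encode(s: str) -> list:
    """Toy function under test: collapse runs of repeated characters."""
    encoded = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def run_length_decode(pairs) -> str:
    """Inverse of run_length_encode."""
    return "".join(ch * n for ch, n in pairs)

# Property: decoding an encoding always gives back the original string.
rng = random.Random(0)
for _ in range(1000):
    s = "".join(rng.choice("ab") for _ in range(rng.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
print("property held for 1000 random inputs")
```

A property like “round-trips are lossless” often catches edge cases (empty strings, long runs) that example-based tests miss.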

Use different ways of splitting files / functions / components. Question your way of organizing code.

Draw everything before typing. For every unit of code, visualize it somehow using pencil and paper.

Call out what I’m doing before typing anything. Inspired by Kent Beck’s Implementation Patterns.

Measure the performance of a feature, then improve it. Pretend you’re on a more performance-restricted platform than you are. E.g., in web frontend, use the browser’s devtools to limit CPU and network speed.

Implement twice; pick the better version. For every non-trivial feature, try implementing it twice, with different approaches. Only after doing both, evaluate which is better in your context. Inspired by John Ousterhout’s A Philosophy of Software Design.

Git reset after a day. Try keeping branches short-lived: if a change isn’t finished within a day, reset it and start fresh with what you learned.

Record screen and watch yourself. Record your screen for 30 minutes or more. Then watch it. Are there any obvious improvements you could make in your workflow? Are you surprised by anything?

Pretend to be streaming. Talk while you program as if you were recording a screencast for other people.

Use the debugger a lot. If you normally do not use a debugger, try using it for a week, every time you feel confused by what your code is doing.

Avoid the debugger. If you usually use the debugger a lot, try avoiding it for a week. Does your approach to writing code and debugging change?

Automate everything you can in your editor/shell scripts/your main programming language (pick one at a time). If you see any activity that is either awkward or repeated a lot, see if you can automate it, even if it doesn’t seem “worth” it. Are you surprised by how much time it took or how much easier the task became? Did you learn anything new about your editor/shell scripting/your main programming language?

Work without any editor plugins. If you use an editor with plugins like vim, emacs, or vscode, try removing them for a week. Did you learn anything new? Will you use the same selection of plugins after this challenge?

Make things extra consistent. Pay extra attention to things being consistent (this is somewhat subjective).

Use different windowing strategies. If you normally use multiple displays, try working with just one for a week. Or try making your terminal/editor/browser windows as small as you can manage.

Use a new way of visualizing information or program flow. Examples include Excel spreadsheets, OmniGraffle, or Graphviz’s dot.

Have you tried any of them? Do you have ideas for more? Let me know!

Good efforts preserve bad systems

For many systems, constant limping is the status quo. The extra efforts of well-meaning employees often prevent the system from failing completely; they also prevent it from improving.

By a system, I mean both a “software system” and the company as a whole.

A software system can be limping by being unreliable, slow, and hard to work with; a company can be limping by barely breaking even, being full of dysfunction, or being in regular crisis mode.

In every work environment, some things are valued more than others. By “valued,” I mean what is actually rewarded, not merely given lip service. These incentives shape the system, and they are never perfect, because management always has a limited understanding of the system it manages.

I think most workers are aware of that. They know there is a dissonance between the work that “should” be done (in a sense, the work most helpful to the system) and the work that is valued. Different people respond to this slightly differently, but for most, it is a mix of doing both. Few people are purely cynical and ignore everything but what’s valued, yet few ignore the incentives completely.

In many cases, doing only what management values would not allow the system to survive. For example, if management values new features more than bug fixing, and everybody responds to that incentive perfectly, the system will crumble. Thus, the status quo is only possible because people put extra effort into doing what’s needed for the system to survive.

That extra effort is frustrating because it is:

  1. Tiring, especially when done constantly
  2. Rarely rewarded

However, it is usually seen as something positive, even heroic; going against the bad system is praised by those who see through it.

I disagree with that notion.

By doing that, we are not only solving problems; we are also preventing the same problems from being acknowledged by management, and this leads to bad long-term effects.

The same problems will come again and again, without any additional resources or appreciation for solving them.

For example, if a company doesn’t really value reliability and underpaid sysadmins are working extra hours to patch the system with ad-hoc fixes, this will continue until the uptime drops enough for management to notice. If the uptime never reaches that critical level, nothing will change, because why would it? On the other hand, if the number of embarrassing outages reaches some threshold, management will recognize that systemic changes need to happen; that might mean hiring more sysadmins, adopting new practices, or rewarding prevention more than before.

We’re not ready yet

Of course, the big problem is that people don’t want to be seen failing; it is often punished in some way (it might be harder to get a promotion or a raise), and it just plain feels bad.

Unless there is a cultural change in how we perceive failures, I don’t expect anybody to act differently.

What would it take to change the culture? First, we need to learn to embrace failure.

Embrace failure

The solution is to make failure safe, or at least non-catastrophic, both individually and systemically.

On a personal level, it is hard to admit a failure even without incentives aligned against it; the ego gets hurt. You need to shift the framing from failure being bad to failure being a valuable lesson. You also need cold reasoning to recognize whether a failure should be attributed to your own mistake, to bad luck alone, or to the system you’re in.

On the company level, you must ensure that people will not be punished for bringing bad news. You need a shared attitude of being ready for improvement; too often, it is assumed that the company is already an optimized system and that failures should be attributed to the mistakes of individuals or to luck.

Get rid of the culture of internal competition; it invites pushing blame around.

Have you experienced a good (or bad!) systemic change after a visible failure at your company? I would love to hear about it.

Preferring employers with hard interviews is a bit silly

There is a thing I’ve heard from at least two of my programmer friends that I find a bit irrational. Both of them were looking for a new job, and each chose the employer with the hardest interviewing process among those that made an offer.

On the one hand, I can see why they would feel that those processes were more valuable – they put more energy into them, so it feels like by investing more, they should get a better job.

However, I don’t think there is a significant correlation between the difficulty of interviews and the “quality” of people working at that employer.

In my experience, relying on anything told to you in an interview is risky. Interviewers have an incentive to mislead you – they are basically promoting their company.

A better way

A better strategy is to interview with a larger number of prospective employers and choose the best based on salary and on information from less biased sources. Trusted friends who work or used to work there are the best source. Even writing to a random employee on LinkedIn is likely to be less biased than HR.

I think it’s a better bet to interview at three companies with a single-round interviewing process than at one company with a three-round process.

And if you’re a programmer, you are probably more limited by your time than by the number of available interviews, so it makes sense to optimize accordingly.

If you find out that a prospective employer uses a complicated interviewing process, say “no, thank you.”

Hard interviews have consequences, but maybe different than you expect

Hard interviews are certainly selecting for something. But I doubt it is skill, experience, or even the ability to work under stress (working under pressure in actual work is different from a hard interview).

What they usually select for is two things:

  1. First impression – how well you match the interviewers’ stereotype of a programmer in how you look and talk.
  2. How well you can access your knowledge in the artificial, stressful situation that an interview is.

Both points push a workplace toward monoculture and cause it to miss out on many valuable candidates, without selecting for actual merit. And the more involved the interview, the more bias it invites.

What to do?

As a candidate, ignore companies with hard interviews.

On the altruistic level, by not working with them, you avoid contributing to the problem of biased interviews.

And in purely pragmatic terms, they are just not worth your time if you have others to choose from.