First steps after joining a project

I’m a freelance contractor, and I change projects rather often. I’ve noticed that how my involvement begins has a big impact on the whole engagement.

Part of that is just being able to contribute early and use the grace period (nobody’s surprised if you ask “stupid” questions while you’re still onboarding), and part is the attitude you build and the impression you make. If I start well, it’s easy to continue well.

I learned not to rely on onboarding led entirely by the welcoming team members. They are always well-meaning, but too many things fall into the “elusive obvious” area for them: they won’t mention some things because they aren’t even aware that they know them. Here I’m mostly trying to address these aspects; I’m not mentioning things that are impossible to overlook, like “How do I run the project?” or “What tools do we use?”.

There are three main areas I need to learn about when joining a project: people, product, and code.

People

It’s important to understand the politics of the project to be effective. You need to know how decisions are made. Regardless of the official org-chart, some people are more influential than you would guess at first. Teams also differ in how “democratic” or “authoritarian” the decision process is.

Some questions to ask yourself are:

  • Is there somebody “dominating” the team regarding opinions or decision power?
  • If you want to introduce changes to how the team operates, who exactly do you need to convince?
  • Does the team even have a say in important decisions, or are they always coming from above?
  • Do decision-makers rely on intuition alone or also on data? If on data, how careful are they in their analysis? Managers are often very selective, looking only at data that supports what they already believe.
  • Is anybody a contrarian that responds with criticism to every proposal or idea? On the one hand, people like that might be a bit hard to work with, but on the other, they can help you make a convincing point that everybody else will agree with (at least that’s what I like to think; I’m probably such a person).

It’s immensely useful to understand people’s cultural backgrounds. People from different parts of the world tend to have different attitudes when giving and receiving feedback, agreeing and disagreeing, or being implicit or explicit about the context. Read The Culture Map for more about that.

Product

The challenge here is to get the big picture; product managers and other team members tend to be so deeply submerged in details that they might fail to convey the high-level information.

Questions to ask here are:

  • Who exactly is using the software? What are users’ expectations around performance, reliability, and accessibility?
  • How exactly is the product making money or otherwise contributing to the company’s goals?
  • What features are the most complex and problematic?

Code

The first thing I do after cloning a repository is to run git-churn. It shows which files have been touched by the biggest number of commits, and it’s a pretty good indicator of where the complexity lies within the project.
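
If git-churn isn’t available, a rough equivalent is easy to script. Here is a minimal sketch (assuming Python 3, run from the root of the repository; the cutoff of 20 files is arbitrary):

```python
# Rough stand-in for git-churn: count how many commits touched each file.
# Assumes it is run from the root of a git repository.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--all", "--name-only", "--format="],
    capture_output=True, text=True, check=True,
).stdout

churn = Counter(line for line in log.splitlines() if line)
for path, count in churn.most_common(20):
    print(f"{count:6d}  {path}")
```

Plain git log is doing the real work here; the script only counts the file names it prints.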

It might also be useful to run git shortlog -sn to see the commit count per author to have some idea about who is likely to be most knowledgeable about the repository.

In some teams, individual members specialize in particular parts of the codebase. You need to ask them about it and note it down so that you know who to ask for help when relevant.

Of course, some projects are more unusual than others, and from time to time, something will really surprise me; however, I’ve found that getting answers to the questions above takes me from clueless to quite effective pretty quickly.

Wrong but useful beliefs

Here are some beliefs that I suggest adopting:

  • Everything can be tested automatically
  • There is always another way of modeling a problem
  • There is always a simpler way
  • It can always be faster
  • It can always be more user-friendly
  • There is always a way to split a task into subtasks
  • There is always a better name for a variable, function, class, etc.

It would probably be wrong to say that these beliefs are true – there might be “the best” way of doing something. But we can’t know that we’ve found it, and if we believe something to be perfect or impossible, we stop trying.

There are two effects of taking this approach.

The first effect is that you will try harder to find a better design/name/whatever aspect you are thinking of. You will not be fooled into believing that you got it perfect the first or second time.

The second effect is that you will recognize that perfection is not attainable. It forces you to perceive the problem in terms of trade-offs – how much is it worth spending on this? How beneficial is it to make it better? How much time will be saved in the future? You need to think not only about the minimum but also about how much is enough.

I got this idea from a book about memory techniques that recommended believing that everything, no matter how abstract, can be imagined.

Programmer’s beliefs

Many of our day-to-day decisions when working with software are based on subjective opinions and beliefs rather than objective truth. Are you even aware of the software-related views you hold? What if you could change them on a whim? Are your beliefs useful?

When you identify a belief of yours, ask: how did you come to hold it? There must have been some experience, or something you read or heard, that gave you the idea in the first place. What if something else had happened instead?

Here are some of those beliefs. There are a lot of people on both sides of each, and I’m not aware of objective criteria that would show a clear winner in any of these cases.

  • Testing is good vs. testing is not worth the time.
  • Static typing is good vs. static typing is bad (or not worth it).
  • Estimates are worth the time, or they aren’t.
  • Agile (or iterative approach) is good, or it is bad.
  • Performance matters, or it mostly doesn’t matter.
  • Working for big companies is better vs. working for small companies is better.
  • Pair programming is good vs. pair programming is bad.
  • Object-oriented is the best paradigm vs. functional is the best.

Could it be that none of these is correct in absolute terms, but that each is more useful in some contexts?

Or maybe there are objective winners, but for some reason we don’t have a strong consensus about them.

It might also be that there is consensus, but it’s objectively wrong.

Sincerely imagine holding the other side of the belief. What would change in your behavior? How would it affect your other beliefs? What traps might it create?

Beliefs are sticky. Once acquired, they change how we perceive experiences, continually strengthening whatever we already believe.

Beliefs limit what we can perceive. For example, if you believe that automated testing is good and a bug escapes the tests, you will perceive it as an exception and a reason to write more tests. On the other hand, if you believe that testing is mostly a waste of time, you will see the same bug as supporting your belief.

An exercise I propose is to make a strong case for both sides and write it down. It forces you to question your belief, at least for a while.

Here is my attempt:

  • Testing is good vs. testing is not worth it
  • Static vs. dynamic typing
    • We should use everything we can to ensure the correctness of our code, and static types are one of the best tools for that. They render entire classes of bugs impossible or nearly impossible. They make for better tooling, like refactoring support in IDEs. They help achieve great performance.
    • Static types muddy the clarity of code. Leave the type checking to the runtime; focus on what your code means; types are not adequate for that. They give a false sense of security. Some types of tasks (like metaprogramming) become unnecessarily hard.
  • Estimates
    • Estimating is very hard in the context of software, and it doesn’t bring that much value. It would be better to focus on estimating value instead.
    • Estimating is possible when done correctly. Refer to How to Measure Anything for an example model (in short: estimate a range you are 90% confident in and use math tools to combine multiple estimates; see the sketch after this list).
  • Agile
    • Incremental methodologies like Agile, when done right, are the most reasonable ways of working in most cases.
    • Incrementalism prevents serious progress. We need to take a deep look and carefully plan what we do.
  • Focus on performance
    • We ought to be careful about the performance of software. We can’t rely on surfing the wave of Moore’s law forever. Much software nowadays is actually slower than it was 30 years ago!
    • Make the system run correctly first, and only then consider tweaking performance if really needed. If something is automated, it’s usually orders of magnitude faster than a human anyway. “Premature optimization is the root of all evil.”
  • Working for a big company vs. startup
    • Big companies pay better. Many exciting problems require a scale that can only be found at big companies. See Dan Luu’s article on startup trade-offs.
    • Big companies are doomed to work in the old ways, and if you want something novel, you need to look to startups. At small companies, opportunities are not restricted with red-tape yet. Only in a startup do you get a good shot at having a significant impact on the company. See Paul Graham’s essays.
  • Pair programming
    • Working in pairs helps get better code quality and prevent mistakes. It aids knowledge transfer.
    • Pair programming has low ROI because it halves productivity in some sense. It can prevent careful thought that is only possible in solo work. Pair programming is awkward and stressful for many people.
  • OO vs. FP
    • Object-oriented programming became dominant for a reason. There is an extensive body of practice and educational materials that teaches how to design OO systems well. OO is a big part of most popular programming languages, and we should take advantage of it.
    • Functional programming is superior because of its simplicity. It is more natural to people. It’s not the 1950s, and we no longer have a reason to avoid FP because of performance. FP is like mathematics, pure and beautiful.
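
To make the “use math tools to combine multiple estimates” point concrete, here is a minimal sketch of one common simplification (not the book’s exact method): treat each 90% confidence interval as coming from a normal distribution, so it spans roughly 3.29 standard deviations, and assume the estimates are independent. The numbers in the example are made up.

```python
# Hedged sketch: combine independent 90% confidence-interval estimates by
# modelling each as a normal distribution (90% CI ~ mean +/- 1.645 sigma).
import math

def combine(intervals):
    """intervals: list of (low, high) 90% CIs, e.g. task estimates in days."""
    mean = sum((lo + hi) / 2 for lo, hi in intervals)
    variance = sum(((hi - lo) / 3.29) ** 2 for lo, hi in intervals)
    half_width = 1.645 * math.sqrt(variance)
    return mean - half_width, mean + half_width

# Three tasks estimated at 2-6, 1-3, and 4-10 days combine to about 9.3 to 16.7
# days, noticeably narrower than the 7-19 you get by naively adding endpoints.
print(combine([(2, 6), (1, 3), (4, 10)]))
```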

The third way

It might be that neither side of an argument is truly right or wrong. Nuance and context matter. Or maybe not! It might be that one side of the argument is actually right.

I argue that you have a better chance of finding the most useful belief if you put serious effort into seeing all the possibilities instead of limiting yourself to whatever convinced you the first time. It is also worth spending time looking for alternatives. Is there a false dichotomy in how the choice is stated? Can one side be more right in some contexts and the other in different contexts? Is there a third way of any sort?

Here I propose arguments for “the third way” for the same beliefs I mentioned before.

  • Testing:
    • Automated testing is one of the tools that aid in making good software. You still need informal reasoning and manual testing for overall excellence.
  • Static types
    • Static or dynamic typing isn’t the most significant difference between programming languages anyway. Some problems can be more natural to model with either dynamic or static typing.
  • Estimates
    • Estimating is valuable sometimes. Figure out the value of estimating in your context before engaging in it.
  • Agile
    • In practice, a mix of careful design and incrementalism is the best. Some projects might benefit from longer planning, while others are best done incrementally.
  • Perf
    • The value of performance can be estimated, and it can vary from system to system how much investment is reasonable.
  • Big company vs. startup
    • Reject being an employee at all. Be a consultant and choose clients, not employers.
  • Pair programming
    • Pair programming should be neither a must nor a must-not. Programmers need to be free to work in pairs when they judge it will be more effective, but never forced to pair. It should be normal to say that you do or don’t like pair programming without being seen as antisocial.
  • OO vs. functional
    • Different problems are better modeled with different paradigms. Often either OO or FP can be used to solve a problem well. You can also mix both – see imperative shell, functional core.
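
To illustrate that last point, here is a toy sketch of “functional core, imperative shell” (the domain and all names are made up): the decision logic lives in a pure function that can be tested without any I/O, and a thin shell around it does the input and output.

```python
# Toy illustration of "functional core, imperative shell" (all names made up).

# Functional core: a pure decision, easy to unit-test without any I/O.
def apply_discount(total: float, is_returning_customer: bool) -> float:
    rate = 0.1 if is_returning_customer and total >= 100 else 0.0
    return round(total * (1 - rate), 2)

# Imperative shell: handles input/output and delegates decisions to the core.
def main() -> None:
    total = float(input("Order total: "))
    returning = input("Returning customer? [y/n] ").strip().lower() == "y"
    print(f"Amount due: {apply_discount(total, returning)}")

if __name__ == "__main__":
    main()
```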

Did you find any other programming beliefs that you hold? Let me know!

The worst tool makes easy things easier and hard things harder

Most tools (libraries, runtime dependencies, developer tools) that become popular make things easier; that’s why we use them! Their popularity is built on the simple cases, because those are the most apparent when deciding whether to adopt a tool. After all, it’s hard to consider the complex cases and trade-offs without investing time in learning the tool.

But some of these tools make the complex cases even more difficult or otherwise worse off.

It’s easy to tell that easy things get easier; it’s hard to know that hard things get harder until it’s too late. If you don’t know the tool yet, you can’t judge the long-term effects. By the time you have spent a lot of time with it, you are heavily invested in it; reverting would be costly, and you would have to admit being wrong.

An example from my experience: at one company I worked at, we migrated a chunk of a shopping system from a “traditional” ACID SQL database to a NoSQL one that shall remain unnamed. The benefits were instantly visible: common operations were fast, the interface library we used was very pleasant to work with, and it all seemed very nice. Only after some time did I find out that many writes were being lost, seemingly at random. After a lengthy investigation, I found out that we needed to change our approach to writing collection types because of the inherent properties of that distributed NoSQL database (if you are experienced with distributed systems, it’s probably obvious what we did wrong).

I had problems getting anybody to acknowledge the source of the problem; that database was at the heart of a shiny new system that the company was very proud of, and promotions were tied to its development. The solution I proposed would basically wipe out all the benefits of using that database in the first place.

Because I left shortly after, I don’t know whether they ended up adopting my solution, finding a different one, or ignoring the problem.

Ultimately, a tool that brought convenience and other benefits to the most obvious cases (reads and writes of basic data types) also made the less common, more complex cases worse.

Another example is a popular query system that is presented as an alternative to REST. The base cases look neat in examples. It comes with a library that is a breeze to use. But at the edges, things get worse – caching becomes more complicated, inspection tools built for HTTP often become useless, control flow in complicated cases gets harder to manage. The thing that benefitted the most from the tool – the fairly efficient read of nested data over HTTP – was not the major problem in the first place.

Both of these technologies have tons of devout users. Naturally, they are happy to argue about the benefits of these tools – and they are correct, there are benefits! – but the arguments revolve around the most common cases, never the problematic edge cases.

When thinking about performance, it makes sense to optimize for common cases. But with development time, I propose to adopt an inverse approach – pay special attention to how edge cases will be affected because most development, debugging, and bug-fixing time is going to be spent there.

How to do it? Explicitly ask: what trade-offs does this technology make? How easy is it to inspect things deeply when something goes wrong? If you need help, will there be anyone to provide it?

Do something different for a week

I get bored rather quickly in the programming context. Here’s a trick I sometimes use to keep things interesting – I pick one aspect of my development process and experiment with doing something different.

I compiled a list of things to try. Some of them I already tried, others not.

In my experience, the results are often surprising; things that seem outlandish work out just fine, or something that I had high hopes for ends up being “meh.”

The meta lesson for me here is that I should not praise or bash a practice without giving it a solid try.

I recommend trying things out for at least a week, preferably two. In most cases, you’re going to experience enough nuance to make an informed decision about whether to incorporate it into your regular workflow.

Some of these should probably not be tried for the first time in a professional context where you work with other people in the same code repository; others are fine to try anywhere. Use your judgment.

Some of these can be more or less relevant for your programming language or environment (web frontend, backend, anything).

Make components as small as possible. Make the units of code (functions / classes / components / whatever is relevant for you) as small as possible.

Isolate examples of code in sandboxes. If you work with web frontend and encounter something you are confused about, recreate a minimal version of it in codepen.io or jsfiddle. This makes it easier to understand what’s going on.

Do new kinds of tests. If you haven’t tried unit testing, property-based testing, performance testing, or any other type of testing you can find, try it now.
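
If property-based testing is new to you, here is a minimal sketch of what it can look like, assuming the Hypothesis library for Python (pip install hypothesis): instead of hand-picking inputs, you state a property and let the library search for counterexamples.

```python
# Minimal property-based test with Hypothesis: for any list of integers,
# sorting is idempotent and loses or adds no elements.
from collections import Counter
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_is_idempotent_and_keeps_elements(xs):
    once = sorted(xs)
    assert sorted(once) == once          # sorting again changes nothing
    assert Counter(once) == Counter(xs)  # no elements added or lost

if __name__ == "__main__":
    # Calling the decorated function runs the property over many generated lists;
    # it can also be picked up by pytest as a regular test.
    test_sort_is_idempotent_and_keeps_elements()
    print("property held for all generated examples")
```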

Use a different way of splitting files / functions / components. Question your way of organizing code.

Draw everything before typing. For every unit of code, visualize it somehow using pencil and paper.

Call what you’re doing before typing anything. Inspired by Kent Beck’s Implementation Patterns.

Measure the performance of a feature, improve it. Pretend you’re on a more performance-restricted platform than you are. E.g., in web frontend, use the browser’s devtools to limit CPU and internet speed.
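
Outside the browser, the same exercise works anywhere you can time things. A minimal sketch (the “feature” here is just a stand-in function): measure a baseline first, so later changes can be compared against it.

```python
# Minimal measurement before optimizing: time a stand-in "feature" many times
# and report the average, so later changes can be compared against a baseline.
import time

def feature(n: int = 100_000) -> int:
    return sum(i * i for i in range(n))

runs = 100
start = time.perf_counter()
for _ in range(runs):
    feature()
elapsed = time.perf_counter() - start
print(f"{runs} runs: {elapsed:.3f}s total, {elapsed / runs * 1000:.2f} ms per run")
```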

Implement twice; pick the better version. For every non-trivial feature, try implementing it twice, with different approaches. Only after doing both evaluate which is better in your context. Inspired by John Ousterhout’s A Philosophy of Software Design.

Git reset after a day. Try short-lived branches (https://articles.coreyhaines.com/posts/short-lived-branches).

Record screen and watch yourself. Record your screen for 30 minutes or more. Then watch it. Are there any obvious improvements you could make in your workflow? Are you surprised by anything?

Pretend to be streaming. Talk while you program as if you were recording a screencast for other people.

Use a debugger a lot. If you normally do not use a debugger, try using it for a week, every time you feel confused by what your code is doing.

Avoid the debugger. If you usually use a debugger a lot, try avoiding it for a week. Does your approach to writing code and debugging change?

Automate everything you can in your editor/shell scripts/your main programming language (pick one at a time). If you see any activity that is either awkward or repeated a lot, see if you can automate it, even if it doesn’t seem “worth” it. Are you surprised by how much time it took, or by how much easier the task became? Did you learn anything new about your editor/shell scripting/your main programming language?
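
As a tiny example of the kind of automation that doesn’t seem “worth it”, here is a sketch of a script that creates today’s scratch-note file so it can be bound to an editor or shell shortcut; the directory and file naming are made up.

```python
# Toy automation example (paths and naming are arbitrary): create today's
# scratch-note file if it doesn't exist yet and print its path, so the script
# can be bound to an editor or shell shortcut.
from datetime import date
from pathlib import Path

notes_dir = Path.home() / "notes"
notes_dir.mkdir(exist_ok=True)
note = notes_dir / f"{date.today().isoformat()}.md"
if not note.exists():
    note.write_text(f"# {date.today().isoformat()}\n\n")
print(note)
```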

Work without any editor plugins. If you use an editor with plugins like vim, emacs, or vscode, try removing them for a week. Did you learn anything new? Will you use the same selection of plugins after this challenge?

Make things extra consistent. Pay extra attention to things being consistent (this is somewhat subjective).

Use different windowing strategies. If you normally use multiple displays, try working with just one for a week. Or try using the smallest terminal/editor/browser windows you can get away with.

Use a new way of visualizing information or program flow. Examples include excel spreadsheets, OmniGraffle, or dot.

Have you tried any of them? Do you have ideas for more? Let me know!

Good efforts preserve bad systems

For many systems, constant limping is the status quo. The extra effort of well-meaning employees often prevents the system from failing completely; it also prevents it from improving.

By a system, I mean both a “software system” and the company as a whole.

A software system can be limping by being unreliable, slow, and hard to work with; a company can be limping by barely breaking even, being full of dysfunction, or being in regular crisis mode.

Incentives

In every work environment, some things are valued more than others. By “valued,” I mean here what is rewarded, not merely given lip service. These incentives shape the system, and the incentives are never perfect because management always has a limited understanding of the system it manages.

I think most workers are aware of that. They know that there is a dissonance between the work that “should” be done (in a sense, the work most helpful to the system) and the work that is valued. Different people respond to this slightly differently, but for most, it’s a mix of doing both. Most people aren’t so cynical as to ignore everything but what’s valued, but most don’t ignore the incentives completely either.

In many cases, doing only what is valued by management would not allow the system to survive. For example, if management values new features more than bug fixing, and everybody responds to that incentive perfectly, the system would crumble. Thus, the status quo is only possible because people put extra effort into doing what’s needed for the system to survive.

That extra effort is frustrating because it is:

  1. Tiring, especially when done constantly
  2. Rarely rewarded

However, it is usually seen as something positive, heroic even; going against the bad system is praised by those who see through it.

I disagree with that notion.

By doing that, we are not only solving problems; we are also preventing the same problems from being acknowledged by the management, and this leads to bad long-term effects.

The same problems will come again and again, without any additional resources or appreciation for solving them.

For example, if a company doesn’t really value reliability and underpaid sysadmins are working extra hours to patch the system with ad-hoc fixes, this will continue until the uptime goes down enough for management to notice. If the uptime never goes to that critical level, nothing will change, because why would it? On the other hand, if the number of embarrassing outages reaches some threshold, management will recognize that systemic changes need to happen; it might be hiring more sysadmins, adopting new practices, or rewarding prevention more than before.

We’re not ready yet

Of course, the big problem is that people don’t want to be seen failing; it is often punished in some way (it might be harder to get a promotion or a raise), and it just plain feels bad.

Unless there is a cultural change in how we perceive failures, I don’t expect anybody to act differently.

What would it take to change the culture? First, we need to learn to embrace failure.

Embrace failure

The solution is to make failure safe, or at least non-catastrophic, both individually and systemically.

On a personal level, it is hard to admit a failure even without incentives aligned against it. It hurts the ego. You need to shift the framing from failure being bad to failure being a valuable lesson. You also need cold reasoning to recognize whether a failure should be attributed to your own mistake, to bad luck alone, or to the system you’re in.

On the company level, you must ensure that people will not be punished for bringing bad news. You need a shared attitude of being ready for improvement; too often, it is assumed that the company is already an optimized system and that failures should be attributed to individuals’ mistakes or to luck.

Get rid of the culture of internal competition; it invites pushing blame around.

Have you experienced a good (or bad!) systemic change after a visible failure at your company? I would love to hear about it.

Preferring employers with hard interviews is a bit silly

There is a thing that I’ve heard from at least two of my programmer friends that I find a bit irrational. Both of them were looking for a new job, and they chose the employer with the hardest interviewing process among those that made them an offer.

On the one hand, I can see why they would feel that those processes were more valuable – they put more energy into them, so it just feels like, by investing more, they should get better jobs.

However, I don’t think there is a significant correlation between the difficulty of interviews and the “quality” of people working at that employer.

In my experience, relying on anything told to you in an interview is risky. Interviewers have an incentive to mislead you – they are basically promoting their company.

A better way

A better strategy is to interview with a larger number of prospective employers and choose the best based on salary and on information from less biased sources. Trusted friends who work or used to work there are the best source. Even a random employee contacted on LinkedIn is likely to be less biased than HR.

I think it’s a better bet to interview at three companies with a single-round interviewing process than at one company with a 3-round process.

And if you’re a programmer, you are probably more limited by your time than by the number of available interviews, so it makes sense to optimize accordingly.

If you find out that a prospective employer uses a complicated interviewing process, say “no, thank you.”

Hard interviews have consequences, but maybe different than you expect

Hard interviews are certainly selecting for something. But I doubt it is skills, experience, or even the ability to work under stress (pressure in actual work is different from the pressure of a hard interview).

What they usually select for is two things:

  1. First impression – how well you match their stereotype of a programmer in how you look and talk.
  2. To what extent access to your knowledge is hampered by the artificial, stressful situation that an interview is.

Both points contribute to a workplace becoming a monoculture and cause companies to miss out on many valuable candidates, with no real merit to the filtering. And the more involved the interview, the more bias it invites.

What to do?

As a candidate, ignore companies with hard interviews.

On the altruistic level, by not working with them, you avoid contributing to the problem of biased interviews.

And in purely pragmatic terms, they are just not worth your time if you have others to choose from.

Programmer’s emotions

From the outside, programmers seem to resemble the machines they program – cold, emotionless, unmoved.  But the reality is different – most moments in programming are full of emotions.

What are these emotions?

Confusion. Much of the time, I am confused by code. Something is working differently than I expect it to, or I can’t get my head around a line of code.

Anxiety. When the context of the project is stressful, deadlines are near, and I deal with a complicated piece of code, I tend to feel anxious.

Shame.  When I discover that something I wrote some time ago is faulty, I feel ashamed. Or when somebody finds an obvious mistake in my pull request.

Irritation.  When I am confused for too long, or somebody nit-picks my pull request. Or when something takes much longer than I expected – which happens often.

Hesitation.  Sometimes I feel like I’m in a deadlock, unable to decide between different choices with complicated or non-obvious trade-offs.

Inadequacy. When I need to step outside of my comfort zone, I am often baffled by how small that zone is. On a logical level, I know that I can’t know everything and nobody expects me to, but it still makes me feel bad.

Silliness.  Sometimes I find out that a solution to a problem is hiding in plain sight, and I feel silly.  It is often accompanied by relief.

Satisfaction. Every time something works the way I expected it to, a jolt of satisfaction kicks in.

Excitement.  When I discover a new way to do something.

Hope.  For a short moment in the middle of my feedback loop, when I’ve just coded something and I’m waiting for the screen to refresh or for the tests to pass.

Pride.  When I made something neat or something worked on the first attempt.

Relief.  When I solved a problem I had been struggling with for a long time.

Is experiencing emotions much different for different programmers? I don’t know; I have direct access only to my own emotions.

Are emotions bad? No. They are necessary to get going.

Some of them are harmful, and some of them are more useful than others.

Anxiety, inadequacy, and shame are just not helpful in programming.  It is good to reduce them if we can.

Confusion, irritation, and hesitation are rather negative emotions per se, but they can be useful or even necessary.

If you are missing something in your mental image, you should feel confused. That’s how you know you need to explore.

Irritation, in small amounts, can lead to positive action. But it only works if you are irritated by something within your control.

Hesitation can be harmful when you spend a disproportionate amount of time making decisions, but some hesitation is useful as a signal that you need to proceed carefully.

Positive emotions are obviously helpful.  They make the act of programming palatable for humans.

But they can also lead to non-optimal behaviors occasionally.

Programmers often choose things that are exciting rather than important. Chasing the newest frameworks and programming languages is a huge timesink.

Pride can cause us to ignore the shortcomings of an overall neat solution.

Relief may fool us into no longer being careful.  And it never pays to be careless when programming.

What other emotions do you experience when programming? Do you experience emotions differently than I do?  Let me know.

Life is too short to depend on unstable software

When committing to a piece of software, favor those created by people who value backward compatibility.

By “piece of software,” I mean any source code dependencies (libraries, frameworks), runtime dependencies (e.g., databases, web servers), or tools (e.g., editors).

Different projects have different cultures of preserving the stability of their programming interfaces, user interfaces, and default behavior.

Why should you care?

Stable software is less work down the road when it comes to upgrades.

Stable software is better understood.  Documentation and third-party guides will stay relevant longer; lessons learned will remain valuable. Trade-offs are acknowledged.

Third-party plugins will keep working.

It is probably tested better.

If you have to deal with an older version (e.g., because you are restricted by the repositories available on production servers, or you have to downgrade for security reasons), your stuff will still work fine. Also see: Would your code work after a trip back in time?

Many of these aspects have characteristics of compound interest; if the community around software can safely assume that their guides and plugins will stay relevant, its members are more likely to contribute.

How to tell if a piece of software is likely to remain stable?

Age.  If a piece of software is old but still in active use and maintenance, it is more likely that its creators got something right and kept it stable.

Marketing copy.  Look at the project’s website; sometimes projects are explicit about what they value, and it is not always stability.

Size and scope.  The larger the scope, the harder it will be to remain stable.

Version history.  For projects using SemVer, frequently changing major version numbers is an obvious red flag.  Note that many old, stable projects don’t use SemVer.

Upgrade guides.  If they are long and complicated, beware.  But, of course, a total lack of upgrade guides is not a good sign, either.

Of course, there is a flip side; you do miss out on the bleeding edge. But it’s OK – just because something is newer does not automatically make it good.

I’m surprised that more people don’t consider stability when picking dependencies. I could understand consciously choosing a project that doesn’t value stability; but many times, when I see somebody picking a new library, they don’t even try to estimate how stable it is.

Even when using a stable project, don’t rush to get dependent on the latest features because they are less likely to be stable, well understood, or available if you are forced to use an older version.

And one final piece of advice: don’t ever let your career depend on an unstable platform and tooling unless you directly profit from that instability.

How to code when you’re tired

In the past, I experienced sleep disorders that affected my focus during waking hours. Since I had to wait a few months before getting treatment, I thought it made sense to adjust my workflow temporarily.

It should be obvious that you want to fix the underlying problem, but if you don’t have a choice, I propose a few things you can try (at least temporarily).

I’m not making assumptions as to what your underlying issue is, so research that separately.

The first things that suffer when you are tired are short-term memory and the willpower to ignore distractions. It is also easier to experience anxiety.

Cut feedback loops as short as possible.  If there is some action that you need to wait for, see if there is a way to speed it up, e.g., by setting a temporary keyboard shortcut.

Do not multitask—close apps that are not necessary at the moment and full-screen your editor.

Turn off all non-essential notifications from your devices.

Split tasks into smaller parts.  Name them and write them down. Prepare your environment for a particular task (e.g., rearrange windows or set up temporary keyboard shortcuts). Then execute the parts serially. This way, it is easy to return to the task if you take a break or get distracted.

Take breaks regularly and ask yourself what you are doing. Take breaks more frequently than you would when not tired. E.g., work for 25 minutes and take a 3-minute break. During the break, ask: what was I doing for the last 25 minutes? Why was I doing it? It’s too easy to get carried away by yak-shaving when you can’t remember what the end goal is.

I also suggest experimenting with a slight variation: if you hear the timer beep but you feel “in the zone,” skip the break, but don’t do this more than once in a row.

Write. Things. Down.  Have a notepad or a tablet with a stylus.  I use an iPad with Apple Pencil and Concepts app with its endless canvas feature.  Draw diagrams.  Make something you can look at if you get lost.

Write more code comments than you usually would.  Unlike the usual comments, they are only meant as immediate, temporary help in remembering what’s left to be done in a particular code component. You can delete or edit them later if you believe that the code is expressive enough.
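
Here is a made-up illustration of what those throwaway comments can look like; the function itself is just a placeholder.

```python
# Made-up illustration of temporary "what's left" comments; they are notes to
# self while tired, meant to be deleted once the function is actually finished.
def normalize_emails(emails):
    cleaned = [e.strip().lower() for e in emails]   # DONE: trim + lowercase
    # TODO: drop entries without "@" instead of keeping them
    # TODO: deduplicate while preserving the original order
    return cleaned

print(normalize_emails(["  Ada@Example.com ", "bob@example.com"]))
```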

Plan at least 2 (sub-)tasks ahead. If I only plan one task, get it done, and take a break, I tend to feel anxious because it’s unclear what comes next. So I define one extra step ahead so that I always have clarity about what to do next.

There are also a few somewhat weirder things that work for me, but I’m not sure if they would work for everybody:

I have syntax highlighting disabled in my editor. The color soup of a regular IDE is too distracting for me. It takes some getting used to, though.

Learn to type fast so that you don’t have to hold things in your head for a long time.  It’s somewhat controversial, and a lot of people feel that learning to type fast is not worth the effort, but I do believe that being a competent typist helps save mental energy on what really matters.

Work in a well-lit room.  However, some prefer the opposite.

I prefer silence or gray noise to music when I’m tired; I’m fine listening to music and coding when I’m not tired, though.

For some reason, sometimes, I find it easier to work on smaller screens.  I guess the brain has less to process when the amount of information on the screen is limited.

Most of these tips are applicable even when you are not tired, since short-term memory and the willpower needed to ignore distractions are always limited resources.