
The Cult of Best Practice

Best practices are, despite the name, not universally good.

Many best practices in programming don't live up to the name. They spread not on merit or evidence but through authority bias and social utility. As they spread, they lose nuance. As they lose nuance, they become easier to evangelise. Combined with a lack of experience, they can lead to cult-like behaviour.

Think of an engineering team that became obsessed with a best practice, such as test-driven development or writing user stories, to the point where it did more harm than good. Many developers have fallen into that trap, myself included.

Why can best practices be harmful? Why do we like following them? When and how do they go wrong? To answer these questions, we need to understand where they come from and how they spread in the context of programming.

Impostor Best Practices

The main reason some programming best practices are harmful is that they are not real best practices.

Look at the official definition: “A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means […]”. 1 The key parts of the definition are “generally accepted” and “superior to any alternatives”.

The problem with many programming best practices is that they pretend to conform to that definition, but they do not.

Some best practices aren’t generally accepted. They come from different, less reliable sources of authority. It could be a prominent individual or a specific community who present something as widely accepted when it’s their own experience or opinion.

We might have a proponent of object-oriented programming saying that it is an accepted best practice, but not everyone agrees. If the proponent is respected and followed in the programming community, many people will put a lot of weight on their opinion, but that doesn't make it generally accepted. There are competing paradigms, each with its own pros and cons.

Some best practices are not superior in outcomes. They claim to be, but objectively there are equivalent alternatives. For example, is functional programming superior to object-oriented programming? We can't say one is better than the other, even though each is presented by some as a best practice.

The problem with superiority is that most programming best practices aren't evidence-based. The field is too young, too fast-changing, and too complex for research to have established that any practice consistently produces better outcomes. We work in the world of opinions, feelings, and anecdotal evidence.

Some best practices are also very volatile. Fast-moving languages and frameworks declare something best practice and supersede it a year later. That isn't inherently wrong, but it shows how quickly our understanding of what's best can evolve, while best practices are expected to be time-tested. 2

However, not all best practices in programming are impostors. There are time-tested, generally accepted, and superior practices. For example, the general idea of automated testing now meets that definition.

Nor are all impostor best practices bad. Not being generally accepted can mean they aren't generally accepted yet. Not being superior in general might be a problem of scope: the practice may well be superior in specific situations.

However, these cases need to be interpreted with nuance, which brings us to the next problem.

Lost In Translation

Good best practices are simple and universal. Many programming best practices tackle complex issues that require nuance and context — but that nuance and context get lost as the best practice spreads.

Consider this example: someone, through a lot of trial and error, found a good way to tackle a problem. Because of the learning process, they understand the nuances in how and when to apply it.

The solution works for them and they start sharing their lessons as best practice. This gets picked up by people who skipped the learning and went straight to applying it, missing out on some nuance. Those people share it again. A new cohort of people picks it up. They misunderstand it more and share it again.

Soon, all understanding of why the practice works is lost. People are parroting it as a simplified, absolute catchphrase: “Always write the tests before the implementation”. 3

Complexity can increase over time too. An idea that was originally simple but required nuanced interpretation is made increasingly complex by people who miss the point.

Take the example of “agile”. Originally a set of 12 principles, it has been turned by consultancies selling organisational transformation into monstrous frameworks that oppose those very principles.

Once all nuance is lost, the conditions are perfect for the idea to spread. It originated from someone with respect, experience, and authority. The simplicity makes it sound easy. People who don’t understand it sell it as a panacea. As a result, people can learn about it quickly and start evangelising. Despite its merit-based origin, it has become a social phenomenon.

Authority Bias and Social Utility

The social aspect of how best practices spread helps us answer the next question — why do we like following them?

When we lack the experience and confidence to form our own opinions, we defer to the next best thing: an authority. This is a well-known cognitive bias. 4

Thanks to the authority bias, best practices have a social utility. They give us something to lean on that other people are already predisposed to believe.

The Cult

Because of the social nature of best practices, it’s easy for herd mentality to kick in.

Imagine a team of inexperienced developers with no one seasoned to lean on. They can’t make all decisions in an informed way — following best practices is the next best option.

They struggle with something, and they search for a solution. They come across a simple-looking practice that addresses their problem, supported by someone prominent. Is your code buggy and unreliable? Write more tests. Is your code hard to test? Adopt test-driven development! 3

Once a solution like that is found, everyone is motivated by its authority and social utility. It gets adopted to the point of absurdity. All nuance is lost. Soon you have a team that insists that every ticket be written as a user story, or that every class must have tests, because it's best practice.

Way Out

It might seem obvious that adopting something obsessively is a bad idea, but many teams out there operate exactly like that.

The way out of the cult starts with recognising commonly presented best practices for what they are: a social phenomenon.

Once we realise that, the first step is to understand where a practice comes from and what problem it solves: its origins and the subtleties of applying it successfully. 5

The next step is to make our own mistakes and learn from them. Break the rules and understand what happens when we don’t follow a particular practice. Follow it to its logical conclusion and see what happens then. 6

The trial-and-error learning involved gives us a much deeper understanding than we would gain by simply following the rules.

Having made our own mistakes, the third and final step is to form our own opinions and speak up.

If we’ve understood where a best practice comes from, and we’ve tried what happens when we don’t follow it, we should have the confidence to make and defend our own opinions about it. We can help the rest of our team see the full picture and break the cult.

Going against the flow like that can be hard. Convincing the rest of a team that something they believe in isn't what it promised to be requires skill and patience. Telling them won't be enough. You need to take them on the same learning journey you went on. That's how you make progress.

To short-circuit that learning process and prevent best practice cults from forming in the first place, you need to have enough senior engineers on your teams. Each team needs to have someone who is experienced and confident enough to become a trusted authority for their colleagues. Someone who can make informed decisions and bring the necessary nuance.

We need to encourage open-mindedness and independent thinking. We need to scrutinise best practices and understand them in depth. That’s how we stop the cult.


  1. https://en.wikipedia.org/wiki/Best_practice 

  2. As an example, look at the JavaScript community and some of the popular frameworks like React. Because React evolves so quickly and the community keeps learning what works, something officially recommended by the framework authors can be deprecated and replaced with a different approach within a year or two. Think about how quickly hooks replaced the older class-based APIs.

    Another example of an impostor best practice from the React community was Enzyme, a testing library developed by Airbnb, an early adopter with a lot of respect in the community. For a while, it was seen as the best practice for testing React code. Within two years, most people had realised it had significant shortcomings, and better libraries were adopted. I think it got its momentum from the authority of its authors and from a lack of better alternatives. Hardly a true best practice.

  3. Not picking on TDD specifically here, but it is a great example of something useful that requires a lot of nuance to apply correctly, and whose nuance got lost in translation as it was reduced to a catchphrase.

    The original idea that Kent Beck developed had a whole book written about it. The evangelism it spawned vastly simplified some of those ideas. Back in my university years, I saw a talk given to second-year engineering students that equated TDD with the test-code-refactor loop and presented it as a universally better way to write code, the latest “industry best practice”.
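
    To make that loop concrete, here is a minimal sketch in TypeScript with Jest-style test and expect functions; the slugify helper and its requirement are hypothetical, purely for illustration.

    ```typescript
    // Step 1 (test): write a failing test for the behaviour we want.
    test("turns a title into a URL slug", () => {
      expect(slugify("The Cult of Best Practice")).toBe("the-cult-of-best-practice");
    });

    // Step 2 (code): write the simplest implementation that makes the test pass.
    function slugify(title: string): string {
      return title.toLowerCase().split(/\s+/).join("-");
    }

    // Step 3 (refactor): clean up with the passing test as a safety net
    // (e.g. strip punctuation), then loop back to step 1 for the next requirement.
    ```

    The nuance the catchphrase drops is everything around the loop: which behaviour deserves a test first, how small each step should be, and when the loop stops paying for itself.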

  4. https://en.wikipedia.org/wiki/Authority_bias 

  5. Take testing as an example. I wrote about the need to interpret the purpose of tests with some nuance in order to achieve the desired outcome: reliably shipping working software. By that argument, the test pyramid commonly seen as best practice is outdated.

  6. To continue with the testing example, we know what happens when we don't write tests. But observe what happens when we try to test every single thing: at some point, the value of each additional test diminishes sharply. It might even be impossible to truly test everything.
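
    As a hypothetical illustration of a test whose value has diminished to almost nothing, consider one that merely restates a trivial line of code (Jest-style assertions assumed):

    ```typescript
    class Invoice {
      constructor(public readonly amount: number) {}
    }

    // This test can only fail if the constructor assignment itself is broken;
    // it duplicates the implementation rather than checking meaningful behaviour.
    test("Invoice stores the amount it was given", () => {
      expect(new Invoice(100).amount).toBe(100);
    });
    ```

    A suite full of tests like this adds maintenance cost without adding much confidence.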

    Or take functional programming as another example. Arguably, about 80% of it is incredibly useful and makes for much better code. But taken to its ultimate conclusion in purely functional languages, things like I/O become a lot more difficult than in impure languages; the sketch at the end of this note illustrates the shift.

    I'm not trying to say one way is better in those cases, but both involve trade-offs that we should understand before adopting them.
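
    To make the I/O point concrete, here is a rough TypeScript analogy for the purely functional approach. It is a sketch of the general idea, not any particular language's API: effects become values that describe what to do, rather than statements that do it.

    ```typescript
    // Impure style: the function performs the side effect directly.
    function greetImpure(name: string): void {
      console.log(`Hello, ${name}!`);
    }

    // Pure style: the function returns a *description* of the effect.
    // Nothing happens until the program's edge finally runs it.
    type IO<T> = () => T;

    const greetPure = (name: string): IO<void> =>
      () => console.log(`Hello, ${name}!`);

    // Sequencing two effects now needs an explicit combinator instead of `;`.
    const andThen = <A, B>(io: IO<A>, next: (value: A) => IO<B>): IO<B> =>
      () => next(io())();

    const program: IO<void> = andThen(greetPure("Ada"), () => greetPure("Grace"));
    program(); // the single impure call, at the very edge of the program
    ```

    The extra ceremony is the trade-off: you gain referential transparency, and everyday I/O gets harder to write.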