Most tools (libraries, runtime dependencies, developer tools) that become popular make things easier; that’s why we use them! Their popularity rests on the simple cases, because those are the cases most apparent when deciding whether to adopt a tool. After all, it’s hard to weigh the complex cases and trade-offs without first investing time into learning that tool.
But some of these tools make the complex cases even more difficult, or otherwise worse.
It’s easy to tell that easy things get easier; it’s hard to know that hard things get harder until it’s too late. If you don’t know the tool yet, you can’t judge the long-term effects. By the time you have spent a lot of time with it, you are heavily invested in it; reverting would be costly, and you would have to admit being wrong.
An example from my experience: at one company I worked at, we migrated a chunk of a shopping system from a “traditional” ACID SQL database to a NoSQL one that shall remain unnamed. The benefits were instantly visible: common operations were fast, the interface library we used was very pleasant to work with, and it all seemed very nice. Only after some time did I discover that many writes were being lost, seemingly at random. After a lengthy investigation, I found that we needed to change our approach to writing collection types because of the inherent properties of that distributed NoSQL database (if you are experienced with distributed systems, it’s probably obvious what we did wrong).
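For readers who are not experienced with distributed systems, here is a minimal sketch of one common way collection writes get silently lost; a plain Python dict stands in for the database, and all the names are invented. It illustrates the general failure mode, not the exact code we had.

```python
store = {}  # stand-in for a distributed key-value store

def add_item_to_cart(cart_id: str, item: str) -> None:
    cart = store.get(cart_id, [])  # 1. read the whole collection
    cart = cart + [item]           # 2. modify it in application code
    store[cart_id] = cart          # 3. write the whole collection back

# With one process and a dict, this is fine. In a distributed store with
# last-write-wins semantics, two app servers can interleave steps 1-3:
#
#   server A reads ["socks"]           server B reads ["socks"]
#   server A writes ["socks", "hat"]   server B writes ["socks", "mug"]
#
# Whichever write carries the later timestamp silently replaces the other,
# so one item disappears without any error being raised: exactly the kind
# of "writes lost at random" that only shows up under concurrent traffic.
```

Avoiding that pattern means either coordinating the writers or leaning on whatever atomic collection operations the store provides, and that kind of coordination tends to cost the very performance the database was adopted for.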
I had trouble getting anybody to acknowledge the source of the problem; that database was at the heart of a shiny new system that the company was very proud of, and promotions were tied to its development. The solution I proposed would basically wipe out all the benefits of using that database in the first place.
Because I left shortly after, I don’t know whether they ended up adopting my solution, finding a different one, or simply ignoring the problem.
Ultimately, a tool that brought convenience and other benefits to the most obvious cases (reads and writes of basic data types) also made the more complex cases worse.
Another example is a popular query system that is presented as an alternative to REST. The basic cases look neat in examples. It comes with a library that is a breeze to use. But at the edges, things get worse: caching becomes more complicated, inspection tools built for HTTP often become useless, and control flow in non-trivial cases gets harder to manage. The thing that benefited the most from the tool (reasonably efficient reads of nested data over HTTP) was not the major problem in the first place.
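To illustrate one of those edges: the sketch below (with an invented endpoint and query, not tied to any particular system) shows why URL-keyed HTTP machinery stops helping once every request is a POST to a single endpoint.

```python
import requests

# REST-style: the resource is identified by the URL, so proxies, CDNs, browser
# caches, and HTTP inspection tools can all key on "what was asked for".
rest = requests.get("https://api.example.com/users/42?include=orders")

# Query-language-style: every request is a POST to one endpoint, with the real
# question hidden in the body. To a URL-keyed cache each call looks identical,
# and a generic HTTP log shows only "POST /query" returning 200.
query = requests.post(
    "https://api.example.com/query",
    json={"query": "{ user(id: 42) { name orders { id total } } }"},
)
```

Caching then has to be reinvented inside the query layer itself, which is exactly the kind of extra machinery that only becomes visible once you are past the neat examples.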
Both of these technologies have tons of devout users. Naturally, they are happy to argue for the benefits of these tools – and they are correct, there are benefits! – but the arguments revolve around the most common cases, never the problematic edge cases.
When thinking about performance, it makes sense to optimize for the common cases. But with development time, I propose the inverse approach: pay special attention to how the edge cases will be affected, because most development, debugging, and bug-fixing time is going to be spent there.
How to do it? Explicitly ask: what trade-offs does this technology make? How easy is it to inspect things deeply when something goes wrong? If you need help, will there be anyone to provide it?