Code for humans
Why it is important to write code in a way that other humans understand.
Ever since I wrote my first lines of code, I have been burdened with the nagging question: "is this the right way of doing this?" For those expecting a big revelation and a guide on how to write the perfect code, I will disappoint. Not simply because I frankly don't believe there is such a thing as perfect code, but because this article is about the importance of writing code that other humans can understand. Ironically, if you're looking for a way to write better code, you might just find something valuable here too.
Whether you start with a compiled or an interpreted language, when we first learn how to code we usually go through the same loop: write some code, execute it, and then something happens.
While it is an incredibly fun and interesting process, it inadvertently reinforces the belief that what we write is meant for the computer. We tell the computer what to do and it does it. Maybe it is an evolutionary leftover from an era when programming really was just a way of instructing the machine, back in the days of punch cards.
The initial feeling of creating a piece of software by trial and error and finally producing something that works is just incredible. Remember that one time when you spent half a day writing a script to automate a task that takes a few seconds of manual labour each month, all to save time? Yeah, it will never turn into a positive investment, but it sure felt good.
While it is true that the code we produce will eventually be processed by the computer and every line will become an instruction for the processor, modern software development is much more than simply writing executable code.
Computers process code, humans read it
Most of today's software is not created by single individuals; it is built by teams, even multiple groups of teams. Chances are, the code you write today will eventually be changed. Either because requirements change, because the initial understanding of what the code should do changes, or simply because there is a bug that needs to be fixed. Either way, someone is going to have to look at your code and alter it in some way. Who knows, maybe it is going to be you.
When you need to change code, two major factors decide how easy that is: how easy the code is to understand, and how well the implementation reacts to change. The latter deserves a post of its own, so let's leave it for now. One could make an argument for changing code without understanding it, but sooner or later you need to understand the code at some level to make a meaningful change.
Understand this: no matter how horrible a codebase is, how much tight coupling it contains, how many best practices it violates, how little or how much abstraction it has, what have you; if the software does everything it was meant to do without an issue, it is good software.
In practice, though, such software is a myth, because even a piece of software that is flawless in function will eventually need new functions, and that involves change.
Every time we are mad about a piece of software's code, it is because we need to change it and that change is painful: either the code is hard to understand, or it is hard to introduce a change.
Change is the reason we hate bad code
Throughout my career as a software engineer, I've seen many developers (myself included) construct code so complex, despite its simple function, that a few weeks later even its author could not understand it without diving deep again. Some even revelled in having created something extremely complex, because that is how they measured good work.
Sometimes this is called over-engineering. To me, over-engineering is the process of making a simple thing complex for the sake of the craft. The intention behind it is different from that of writing hard-to-understand code because of a faulty way of approaching the problem, or of only caring about the outcome after execution.
When people can't let go of their initial way of thinking about an implementation, they usually try to bend the environment to fit that thinking instead of adapting their approach.
Think about it: every time you introduce language- or framework-specific tools to your implementation, you add a layer of complexity to your solution. Now only those who have that specific knowledge can understand your code easily. Obviously, the goal is not to write code that any developer can understand, but to know when you are allowed to add layers of complexity to your implementation to make it better in some way (think performance, usability, flexibility, etc.).
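To make this concrete, here is a small, hypothetical Python sketch (the function names and data shape are invented for illustration): both functions compute the same per-SKU totals, but the first leans on `functools.reduce`, lambdas, and dict unpacking, knowledge a reader may or may not have, while the second reads top to bottom with no prerequisites.

```python
from functools import reduce

# A "clever" version: compact, but readable only if you know reduce
# and how the accumulator dict is rebuilt on every step.
def totals_clever(orders):
    return reduce(
        lambda acc, o: {**acc, o["sku"]: acc.get(o["sku"], 0) + o["qty"]},
        orders,
        {},
    )

# A plain version: any developer can follow it line by line.
def totals_plain(orders):
    totals = {}
    for order in orders:
        totals[order["sku"]] = totals.get(order["sku"], 0) + order["qty"]
    return totals

orders = [
    {"sku": "A", "qty": 2},
    {"sku": "B", "qty": 1},
    {"sku": "A", "qty": 3},
]
assert totals_clever(orders) == totals_plain(orders)
```

Neither version is wrong; the question is whether the extra layer of knowledge the first one demands buys you anything here.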
There are other times when developers become so invested in their solution that they think all the work they've done up to that point would go to waste if they introduced a new approach. Or perhaps they are not dissatisfied enough with their solution to try a new angle: the region-beta paradox in practice, where a mildly bad situation persists precisely because it never becomes painful enough to force an escape.
Sometimes developers tend to think in deltas: the code difference between executions. Our brain is good at incremental understanding, so it comes naturally to start with something and iterate on it a few lines at a time until the desired outcome emerges. Don't get me wrong: software development, in my opinion, is a highly iterative process. However, you need other kinds of iteration besides adding code on top of previous code. If you only ever add more code without reevaluating the existing code, you'll most likely end up with code that is hard to process. Not for the computer, it's all the same to the processor; it will be hard for your colleagues, and give it time and it will be hard for you too.
Code like that is easy to produce because you make it more complex gradually, so you only ever need the cognitive capacity for one layer at a time. However, you can turn this into an advantage too. If it's true that code is easier to understand in small doses, you can use this concept to structure your code in a way that unfolds layer by layer, making it easier to understand and work on.
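As a sketch of what "unfolding layer by layer" can look like in practice (a hypothetical Python example; the names and the one-record-per-line format are made up): the top-level function reads like a summary, and a reader only descends into a helper when they need that level of detail.

```python
# Top layer: the whole story in three well-named steps.
def monthly_report(lines):
    records = parse_lines(lines)
    active = keep_active(records)
    return format_report(active)

# Next layer down: each helper is small enough to hold in your head.
def parse_lines(lines):
    # Each non-empty line is "name,status".
    return [line.strip().split(",") for line in lines if line.strip()]

def keep_active(records):
    return [name for name, status in records if status == "active"]

def format_report(names):
    return f"{len(names)} active users: " + ", ".join(sorted(names))
```

A reader who only needs the gist stops at `monthly_report`; someone fixing a parsing bug opens exactly one helper.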
If it takes less time to completely rewrite a piece of code than to understand and change the functionality, someone did not bother to write the code for other humans to read.
Where to go from here
I wanted to avoid actionable advice in this article because there is no definitive ruleset you can follow to write human-readable code. It is not something you can jam into a linter and have fixed automatically on save. Coding for humans is a way of thinking and a philosophy, something you always keep in mind when writing code.
However, if you do insist on a few practices that are regularly a good marker of easy-to-follow code, there is a part II of this article that might help you out.