I feel I misrepresented my intentions in a previous article. In that article, I gave a code example:
int i; char c[3]; i = 0; c[i++] = 'a'; c[i++] = 'b'; c[i++] = 'c';
Christopher commented that the code gets optimised by most modern compilers (can you spot the inefficiencies in it?). I agree with that. There was a time when I studied and adhered to basic code optimisations, such as unrolling small for loops and simplifying boolean conditions. I still do the basics out of instinctual habit, but it's not that big a deal anymore.
Computers have gotten to the point where small inefficiencies don't really matter anymore. High computing speeds overshadow any minor stalls, and compilers are smart enough to optimise away inefficient code in the first place.
So why did I cite that example? Because you, the human reading the code, haven't had the benefit of Moore's Law.
Imagine taking over someone else's code and, after wading through reams of it, finding that an essay of a function could be reduced to a single line without loss of meaning or purpose. Imagine how much effort it took you to understand all that code before its purpose became clear. If you're lucky, there would be comments and documentation. If you're luckier, those comments and documentation would even be relevant and up to date.
In human-first programming, you're not just creating software for the end user; you're also writing code for another programmer to read. The compiler doesn't care how obfuscated the code is. It can read it just fine. You, on the other hand, might have a little trouble with it.
Just because the code gets compiled anyway doesn’t mean you can be sloppy.