Recently, I had the honour of delving into the depths of some legacy code. The program handled some financial data, and there was a section where a value had to change sign. The exact details aren’t important to the story, so let me come up with a suitable example. Uh… payment!
The monetary values stored in the database were considered positive when they represented profit. When people pay you $10, you have a record with $10 in the database. What happens when you pay other people? You store a negative value. This keeps the storage logic for the column consistent.
So the variables in the program stored the amounts you needed to pay other people as positive values, and calculations proceeded as normal. When it came time to store the value in the database, there was one last line of code before the update:
decPayment = decPayment * (-1);
Was -1 a magic number (as in, it was hardcoded, and could be some other number)? Was the multiplication in error (maybe it was supposed to be addition)? Was this line of code even supposed to exist?
It turned out that the programmer wanted to change the sign of the variable value. This method of changing sign has some downsides:
- Multiplication is comparatively expensive. Processors are fast now, and an optimizing compiler will usually reduce a multiplication by -1 to a plain negation anyway, but still…
- The intent is not clear (what is it trying to achieve?)
- It has more characters than the more optimal way
So what’s the “more optimal way”?
decPayment = -decPayment;
You don’t have to multiply by -1 to change the sign of a variable. Just negate it directly with the unary minus operator.