A reader writes:
I don't know if this really qualifies as a science question, but: How do bills for zero dollars and zero cents get sent to people?
I'm guessing that it's something like the process you explained that leads to crazy negative numbers in installer disk-space calculations - maybe the system's screwed up and thinks you owe negative money (or you really do, because you overpaid), but there's a sanity check that rounds negative numbers to zero, and then fails to stop a bill being sent anyway.
Am I right?
Jiri
Exactly this sort of thing has been referred to since time immemorial, or at least since the early days of office automation...
...as a "computer error".
Which it isn't, of course. Once in a very long while a cosmic-ray strike or failing power supply or actual hardware defect really does cause a computer to make an error, but the overwhelming majority of "computer errors" are actually programmer errors. The computer is doing exactly what it was told to do, whether that's sending a bill for $0.00, or charging a startled pensioner for a hundred million kilowatt-hours of electricity, or falsely saying an e-mail came from FinkyPieheimer@zoobatz.com.
One programming error that can lead to a zero-dollar bill - and, in due course, to second requests and final demands and then the attention of lawyers, if only because the lawyers are happy to round the ten seconds needed to recognise the mistake up to about ten billable hours - is using the wrong kind of variable.
In the real world, money comes in dollars and cents, pounds and pennies, rupees and extremely un-valuable paise, and so on. There's no such thing as a fraction-of-a-cent coin.
In computers, everything can be chopped up into arbitrarily small pieces, if necessary. There may be some good reason to do this for certain calculations applied to currency amounts (though I can't think of one), but if you do, then before any of the results get turned into actual money being paid into accounts or demanded from clients, the numbers should be rounded to a whole number of cents (or whatever other currency unit's being dealt with).
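If your language has a decimal type, the safe approach is to do that rounding explicitly, once, at the boundary where numbers turn into money. Here's a minimal sketch in Python; the function name and the round-halves-up policy are my own choices for illustration, not anything from a real billing system:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_billable_cents(dollars):
    """Convert an intermediate dollar amount to a whole number of cents,
    rounding halves up, before it's allowed to become real money."""
    cents = Decimal(str(dollars)) * 100
    return int(cents.quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print(to_billable_cents(0.0001))  # 0   -- and zero cents owing should mean no bill
print(to_billable_cents(0.9999))  # 100 -- one dollar, as intended
```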
(This process can be exploited, in the classic "salami slicing" scam where a great many fractions of a cent are sneakily diverted to the scammer's account by, for instance, always rounding down, even when the fraction being rounded is larger than 0.5.)
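For the curious, the crooked arithmetic looks something like this; the account balances here are invented, but the point stands that an average half-cent shaving, multiplied over enough accounts, is real money:

```python
import math
import random

random.seed(42)

# A million hypothetical account balances, in dollars.
balances = [random.uniform(10.0, 500.0) for _ in range(1_000_000)]

# The scam: always round each balance DOWN to whole cents, and quietly
# divert the shaved-off fraction of a cent to the scammer's account.
skimmed_cents = sum(b * 100 - math.floor(b * 100) for b in balances)
print(f"Diverted in one billing run: ${skimmed_cents / 100:,.2f}")  # roughly $5,000
```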
If a programmer uses, say, a single-precision floating-point variable to hold a monetary value, the variable's limited precision can easily leave it, after a few mathematical operations, at 0.9999 (probably with a few more digits) instead of 1, or 0.0001 instead of zero. In the latter case, a system that doesn't round the imprecise value off to what it should be, and then starts the billing process on any account where the amount owing is greater than zero, will send idiotic zero-bills to customers.
(Or the only slightly less stupid version, a bill for an amount less than what the company paid to post the bill to you.)
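You can watch this happen in a few lines of Python. (Python's ordinary floats are double precision rather than single, but the failure is identical in kind, just pushed further to the right of the decimal point.)

```python
owed = 1.00          # the customer owes a dollar...
for _ in range(10):
    owed -= 0.10     # ...and pays it off in ten ten-cent instalments

print(owed)             # 1.3877787807814457e-16 -- not quite zero
print(owed > 0)         # True: the billing run decides a bill is due
print(f"${owed:.2f}")   # $0.00: ...and that's what gets printed on it
```

The leftover can land on either side of zero depending on the order of operations, so Jiri's negative-balance guess and the greater-than-zero case above are both live possibilities.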
There are many other ways for this to happen, though, thanks to the many ways in which programmers can make a mistake. A billing system could, for instance, fail to notice a previous overpayment, decide a bill is due, then apply the overpayment to the account balance and print a bill for the net amount, which could be zero or negative. Or it could decide to bill a customer who legitimately owes money, and then accidentally print the amount owed by some other, fully paid-up, customer.
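The first of those bugs is just an ordering mistake, and it can be a very small one. A hypothetical sketch, with all the names invented for illustration:

```python
def billing_run(account):
    # The bug: decide whether a bill is due from the raw charges...
    if account["charges"] > 0:
        # ...and only afterwards apply the customer's credit.
        net = account["charges"] - account["credits"]
        send_bill(account["customer"], net)  # can be $0.00, or negative

def send_bill(customer, amount):
    print(f"Dear {customer}, please pay ${amount:.2f} immediately.")

billing_run({"customer": "Jiri", "charges": 30.00, "credits": 30.00})
# Dear Jiri, please pay $0.00 immediately.
```

Swap the two steps, so the credit is applied before the send-or-don't-send decision, and the zero-bill never goes out.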
Just because a company's got thousands of employees and an annual turnover greater than the whole economy of some nations doesn't mean this can't happen to it. As Daily WTF readers know, staggeringly expensive "enterprise" software can be very, very badly written.
Psycho Science is a regular feature here. Ask me your science questions, and I'll answer them. Probably.
And then commenters will, I hope, correct at least the most obvious flaws in my answer.