12.5 is just as close to 12 as it is to 13, so why should it be rounded up and not down? In fact, my gut tells me that rounding down is even more natural. Why? Because the distance between 12 and 13 is two times 0.5, and the value 0.5(0) falls within the first half of that distance, so I would round it down rather than up.
I wonder when and by whom it was chosen that 0.5 should round to 1 and not to 0. There certainly doesn't seem to be a scientific basis for it. Or am I wrong?
By the way, there are several other popular rounding methods besides the traditional one, including:
- round-to-even
- stochastic rounding
The round-to-even approach introduces a new rule: if the trailing 5 is preceded by an even digit, the number is rounded down; if by an odd digit, it is rounded up. This is better than the common rounding procedure because, over large amounts of data, it reduces the total error accumulated by rounding.
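As a quick sketch of this, Python 3's built-in `round()` already uses round-to-even (often called "banker's rounding"), and the bias reduction is easy to see on a small sample of exact halves:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Python 3's built-in round() uses round-to-even:
print(round(12.5))  # -> 12 (the 5 is preceded by even 2: round down)
print(round(13.5))  # -> 14 (the 5 is preceded by odd 3: round up)

# The decimal module makes the rule explicit and avoids binary-float surprises:
print(Decimal("12.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # -> 12

# Bias demonstration on the halves 0.5, 1.5, ..., 9.5 (true sum is 50.0):
halves = [k + 0.5 for k in range(10)]
always_up = sum(int(x) + 1 for x in halves)      # traditional round-half-up
to_even = sum(round(x) for x in halves)          # round-to-even
print(always_up)  # -> 55 (systematically too high)
print(to_even)    # -> 50 (ups and downs cancel out)
```

Over this sample, rounding every .5 up overshoots the true sum by 5, while round-to-even lands on it exactly, since the up- and down-roundings alternate and cancel.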
Stochastic rounding, meanwhile, introduces randomness into the procedure: sometimes you round .5 up and sometimes down. This may seem "unfair" to an observer, since randomness implies that applying the procedure to the same number twice can give different results, but it does a good job of making rounding less biased.
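A minimal sketch of stochastic rounding (the helper name `stochastic_round` is my own, not a standard function): round down with probability equal to how close the value is to the lower integer, and up otherwise, so the expected result equals the original value exactly.

```python
import random

def stochastic_round(x: float) -> int:
    """Round x down with probability (1 - frac) and up with probability frac,
    where frac is the fractional part, so E[result] == x."""
    lower = int(x // 1)           # floor of x
    frac = x - lower              # fractional part in [0, 1)
    return lower + (random.random() < frac)

# A single call may give 12 or 13, but the average over many calls hovers
# around the true value 12.5, illustrating the lack of bias:
random.seed(0)
samples = [stochastic_round(12.5) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 12.5
```

For 12.5 the fractional part is exactly 0.5, so up and down are equally likely; the randomness averages out rather than accumulating in one direction, which is why the method shows up in numerical work such as low-precision arithmetic.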