Why do we have 0.0 and -0.0 in Ruby?

Tags: Ruby, Floating Point, Negative Zero

Ruby Problem Overview


In Ruby, why can I assign a negative sign to the float 0.0, and is this feature useful in any way? Could someone explain this to me?

-0.0
#=> -0.0

-0.0 * -1
#=> 0.0

Ruby Solutions


Solution 1 - Ruby

You can assign a negative sign to a 0.0 float in Ruby because all IEEE 754 floating point numbers have a sign bit to indicate whether the number is positive or negative.

Here are the binary representations of 2.5 and -2.5:

[2.5].pack('f').unpack1('b*')
#=> "00000000000000000000010000000010"

[-2.5].pack('f').unpack1('b*')
#=> "00000000000000000000010000000011"

The last bit is the sign bit: the `'b*'` directive lists bits least-significant first, so on a little-endian machine the sign bit, the most significant bit of the float, ends up last. Note that all the other bits are identical.
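As a quick sanity check of the sign-bit claim (this assumes a little-endian machine, matching the bit strings above), flipping only that last bit of 2.5's representation yields -2.5:

```ruby
# LSB-first bit string of 2.5, as in the examples above
bits = [2.5].pack('f').unpack1('b*')

# Flip only the final (sign) bit, leaving every other bit untouched
flipped = bits[0..-2] + (bits[-1] == '0' ? '1' : '0')

negated = [flipped].pack('b*').unpack1('f')
#=> -2.5
```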

On the other hand, there is zero with a sign bit of 0:

['00000000000000000000000000000000'].pack('b*').unpack1('f')
#=> 0.0

and zero with a sign bit of 1:

['00000000000000000000000000000001'].pack('b*').unpack1('f')
#=> -0.0

Although 0.0 and -0.0 are equal in value, they are distinct objects:

(0.0).eql?(-0.0)   #=> true
(0.0).equal?(-0.0) #=> false

Negative zeros have some special properties. For instance:

1 / 0.0    #=> Infinity
1 / -0.0   #=> -Infinity
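That division behavior gives you a practical way to detect a negative zero at runtime, since `(-0.0).negative?` and ordinary comparisons can't tell the two zeros apart. Here is a small helper (the name `negative_zero?` is mine, not a built-in) relying on the IEEE 754 rule that dividing by -0.0 yields -Infinity:

```ruby
# Hypothetical helper: returns true only for -0.0.
# Uses 1.0 / -0.0 == -Infinity; no bit-twiddling required.
def negative_zero?(x)
  x.zero? && 1.0 / x == -Float::INFINITY
end

negative_zero?(-0.0)  #=> true
negative_zero?(0.0)   #=> false
negative_zero?(-1.0)  #=> false
```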

Writing the - sign explicitly is not the only way to get -0.0. You may also get -0.0 as the result of a basic arithmetic operation:

-1.0 * 0 #=> -0.0
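A few other operations produce a negative zero as well; these examples just apply the ordinary IEEE 754 sign rules:

```ruby
a = 0.0 * -1          #=> -0.0  (positive zero times a negative)
b = -(0.0)            #=> -0.0  (unary minus flips the sign bit)
c = -1e-200 * 1e-200  #=> -0.0  (underflow: magnitude too small, sign kept)
```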

Solution 2 - Ruby

Mathematical operations have real-number results, but we map those real results onto the nearest floating-point number, which is called "rounding". For every floating-point number, there is a range of real numbers that will round to that float, and sometimes it's useful to think of the float as being identified with that range of real numbers.

Since there is a finite supply of floating-point numbers, there must be a smallest positive float, and its opposite, the smallest (magnitude) negative float. But what happens to real number results even smaller than those? Well, they must "round to zero". But "a really small number greater than zero" and "a really small number less than zero" are pretty different things with pretty different mathematical behavior, so why should we lose the distinction between them, just because we're rounding? We don't have to.

So, the float 0 doesn't just include the real number 0, it also includes too-small-to-represent positive quantities. And the float -0 includes too-small-to-represent negative quantities. When you use them in arithmetic, they follow rules like "negative times positive equals negative; negative times negative equals positive". Even though we've forgotten almost everything about these numbers in the rounding process, we still haven't forgotten their sign.
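This rounding story can be seen directly in Ruby: squaring `Float::MIN` (the smallest positive normal float) underflows to zero, yet the sign rules of multiplication still apply to the rounded result:

```ruby
# Products too small to represent round to a signed zero,
# and "negative times negative equals positive" survives the rounding.
pos  = Float::MIN * Float::MIN    #=> 0.0   (positive * positive)
neg  = -Float::MIN * Float::MIN   #=> -0.0  (negative * positive)
back = -Float::MIN * -Float::MIN  #=> 0.0   (negative * negative)
```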

Solution 3 - Ruby

It's not a feature of Ruby but part of the IEEE 754 floating-point specification. Negative zero compares equal to positive zero:

-0.0 == 0.0
# => true
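Because the two zeros are `eql?` as well as `==`, they also collide as Hash keys (MRI normalizes -0.0 to 0.0 when hashing):

```ruby
h = { 0.0 => :a }
h[-0.0]  #=> :a
```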

Solution 4 - Ruby

An example of when you might need -0.0 is when working with a function, such as tangent, secant or cosecant, that has vertical poles which need to go in the right direction. You might end up dividing to get negative infinity, and you would not want to graph that as a vertical line shooting up to positive infinity. Or you might need the correct sign of a function asymptotically approaching 0 from below, like if you’ve got exponential decay of a negative number and check that it remains negative.
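One concrete standard-library case where the sign of zero matters is `Math.atan2`, which uses it to decide which side of the branch cut a point lies on:

```ruby
# A point "just above" the negative x-axis vs. "just below" it:
Math.atan2(0.0, -1.0)   #=>  3.141592653589793  (pi)
Math.atan2(-0.0, -1.0)  #=> -3.141592653589793  (-pi)
```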

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Bruno Alexandre Moreira Pincho (View Question on Stackoverflow)
Solution 1 - Ruby: Stefan (View Answer on Stackoverflow)
Solution 2 - Ruby: hobbs (View Answer on Stackoverflow)
Solution 3 - Ruby: mrzasa (View Answer on Stackoverflow)
Solution 4 - Ruby: Davislor (View Answer on Stackoverflow)