If a floating-point value can also be a whole number, why bother using integers in your programs at all? The reason is that floating-point values and integers are handled differently inside the computer.

An integer exists inside the computer as a true binary value. For example, the value 123 is stored in modern computers as a 32-bit value:

A true binary value.

The sign bit determines whether the value is positive or negative (0 is positive, and 1 is negative). The remaining 31 bits represent the value itself.

A floating-point number, however, cannot exist in a computer that uses binary (1s and 0s). Don't be silly! So, the floating-point number is cleverly faked. Using the same 32 bits, a floating-point value of 13.5 might look like this:

A floating-point number stored as a binary value.

First comes the sign bit: 1 for negative or 0 for positive. The exponent is used with the mantissa in a complex and mystical manner to fake floating-point values in binary. (If you're curious, you can search for floating-point binary on the Internet and find some excellent tutorials that may or may not clear it up.)

The bottom line is that it takes more work for the computer to handle floating-point values than to work with integers. So, wherever possible, use integer values; use floating-point numbers only when necessary.

In the early days of C programming, you often had to link in a special floating-point library if your program used floating-point values. Most compilers can now handle floating-point numbers without this extra step.