Fractions in binary?

As you mentioned, $$6 = {\color{red}1}\cdot 2^2+ {\color{red}1}\cdot 2^1+{\color{red}0}\cdot 2^0 = {\color{red}{110}}_B.$$ Analogously $$\frac{1}{4} = \frac{1}{2^2} = {\color{red}0}\cdot2^0 + {\color{red}0}\cdot 2^{-1} + {\color{red}1}\cdot 2^{-2} = {\color{red}{0.01}}_B.$$

Edit:

These pictures might give you some more intuition ;-) Here $\frac{5}{16} = 0.0101_B$: since the denominator is a power of $2$, the representation is finite (the process ends when you hit zero). On the other hand, $\frac{1}{6} = 0.00\overline{10}_B$: the denominator is not a power of $2$, but the number is rational, so the representation is infinite yet periodic.

[Images: binary fraction 5/16; binary fraction 1/6]
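If you want to reproduce the expansions without the pictures, here is a minimal sketch of the doubling process (the class and method names are my own, not part of the answer): at each step the remainder is doubled, the integer part of the result is the next binary digit, and the process stops once the remainder reaches zero.

```java
public class BinaryFraction {
    // Expand p/q (with 0 <= p < q) into binary digits by repeated doubling.
    static String expand(int p, int q, int maxDigits) {
        StringBuilder digits = new StringBuilder("0.");
        int remainder = p;
        for (int i = 0; i < maxDigits && remainder != 0; i++) {
            remainder *= 2;                // shift one binary place
            digits.append(remainder / q);  // next digit is 0 or 1
            remainder %= q;                // keep the fractional remainder
        }
        return remainder == 0 ? digits.toString() : digits + "...";
    }

    public static void main(String[] args) {
        System.out.println(expand(5, 16, 12)); // 0.0101           (16 is a power of 2: terminates)
        System.out.println(expand(1, 6, 12));  // 0.001010101010... (periodic)
    }
}
```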

I hope this helps ;-)


Note: $$\left(\tfrac14\right)_{\text{ten}} = .25 = {\color{blue}{\bf 2}} \times 10^{-1} + {\color{blue}{\bf 5}} \times 10^{-2}$$ $$\tfrac14 = \left(\tfrac{1}{2^2}\right)_{\text{ten}} = 2^{-2} = {\color{blue}{\bf 0}} \times 2^{-1} + {\color{blue}{\bf 1}} \times 2^{-2} = .01_{\text{two}}$$


Three basic ways, all seen in binary number systems:

Fixed-Point: One integer holds the "integer part"; another holds the "fractional part". This is simple to store and display, and offers a large range and high precision with virtually no error, but doing real math with the numbers involved can get hairy. Decimal numbers aren't often stored this way, but it is a possibility.
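As a rough illustration of this scheme (the `Fixed` class and its 16-bit fractional field are my own assumptions, not a standard type), note how even simple addition has to carry from the fractional field into the integer field, which is where the bookkeeping starts to get hairy:

```java
public class Fixed {
    static final int SCALE = 1 << 16;  // fractional part is in units of 1/65536

    int intPart;   // integer part
    int fracPart;  // fractional part, 0 <= fracPart < SCALE

    Fixed(int intPart, int fracPart) {
        this.intPart = intPart;
        this.fracPart = fracPart;
    }

    // Addition must carry overflow from the fractional field into the integer field.
    Fixed add(Fixed other) {
        int frac = this.fracPart + other.fracPart;
        int carry = frac / SCALE;
        return new Fixed(this.intPart + other.intPart + carry, frac % SCALE);
    }

    double toDouble() {
        return intPart + (double) fracPart / SCALE;
    }

    public static void main(String[] args) {
        Fixed a = new Fixed(1, SCALE / 4);      // 1.25
        Fixed b = new Fixed(2, 3 * SCALE / 4);  // 2.75
        System.out.println(a.add(b).toDouble()); // 4.0
    }
}
```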

Maintained Floating Point: a large integer holds the entire value (all the digits), and a second, smaller number records where the decimal point falls, counted from the right (or left) end of the number. This is much easier to manipulate in mathematical operations, keeps the same precision with zero error, and is used in many implementations of "BigDecimal" object types where the "built-in" floating-point mechanisms aren't available. It can be more difficult to render in base-10 form on-screen. If implemented with normal integer types, this method can be more limited in magnitude than the previous one; many implementations therefore use a byte array to store the number, allowing values as big as system memory allows.
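Java's `java.math.BigDecimal` is one concrete example of this "big integer plus decimal-point position" layout; the snippet below simply inspects those two components and shows the exact arithmetic this scheme buys you:

```java
import java.math.BigDecimal;

public class MaintainedFloat {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("19.95");
        System.out.println(price.unscaledValue()); // 1995 (the large integer holding all digits)
        System.out.println(price.scale());         // 2    (decimal point is 2 places from the right)

        // Exact arithmetic: no binary rounding error creeps in.
        BigDecimal total = price.multiply(new BigDecimal("3"));
        System.out.println(total);                 // 59.85, exactly
    }
}
```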

Implicit Floating Point: The number is expressed in what amounts to "binary scientific notation". A "mantissa" is stored as an integer, with the decimal point implied to be on its far right, and the exponent of a power of two is stored alongside it. The actual value is the mantissa multiplied by two raised to that exponent. This approach allows the storage and calculation of truly massive numbers, and modern CPUs include a Floating-Point Unit or FPU (called a "math co-processor" in the days before it was integrated into 486-class CPUs) that accelerates calculations on numbers in this form. However, there are two problems. First, there is a trade-off between extreme precision and extreme magnitude: the mantissa, and thus the number of digits that can be stored exactly, is fixed, so as magnitude increases the number of available decimal places decreases (at the extremes of magnitude you often can't get more granular than the millions place). Second, there is a "rounding error" inherent in converting to binary and back; this can corrupt calculations that require exact results (such as when dealing with money), so unless the extreme magnitude of a floating-point type is required, it's generally recommended to use a representation that does not introduce error.
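A short sketch of both points, assuming the standard IEEE 754 bit layout exposed by Java's `Double.doubleToLongBits`: it pulls out the stored mantissa and exponent fields of a `double`, then demonstrates the rounding error that makes this representation a poor fit for money:

```java
public class ImplicitFloat {
    public static void main(String[] args) {
        double x = 0.1;
        long bits = Double.doubleToLongBits(x);
        long mantissa = bits & 0x000FFFFFFFFFFFFFL;     // 52-bit fraction field
        long exponent = ((bits >> 52) & 0x7FF) - 1023;  // unbiased exponent
        System.out.printf("0.1 is stored as (1 + %d/2^52) * 2^%d%n", mantissa, exponent);

        // 0.1 has no finite binary expansion, so error creeps into exact-looking sums:
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
    }
}
```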