How to calculate the number of decimal digits for a binary number?

Solution 1:

I would post this as a comment if I could, because it seems too simple for an answer. You already said that you can find the highest bit with a single instruction (if the number fits into a register). For every result $r$ you get (that is, your number lies between $2^r$ and $2^{r+1}-1$), there are only two possible decimal digit counts, $k$ and $k+1$. So you can use $r$ as an index into a lookup table that stores both $k$ and $10^k$, and output $k$ if your number is less than $10^k$, or $k+1$ otherwise.
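A minimal sketch of this lookup-table idea in Python (names and the 64-bit limit are illustrative; `int.bit_length()` stands in for the single highest-bit instruction):

```python
# Table-driven digit count for integers that fit in a fixed width.
MAX_BITS = 64

# TABLE[r] = (k, 10**k), where k is the digit count of 2**r.
# Any n with highest bit r then has either k or k + 1 digits.
TABLE = [(len(str(2**r)), 10 ** len(str(2**r))) for r in range(MAX_BITS)]

def digits_lookup(n: int) -> int:
    """Decimal digit count of a positive integer n < 2**MAX_BITS."""
    r = n.bit_length() - 1          # position of the highest set bit
    k, pow10 = TABLE[r]
    return k if n < pow10 else k + 1
```

One lookup plus one comparison per query; the table itself is tiny (64 entries for 64-bit values).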

Solution 2:

With no better answer, I'll just add a little to what I said in the question to get this closed.

There are series expansions for logarithms (http://en.wikipedia.org/wiki/Logarithm), so in principle I could calculate the base-10 log directly. This would be awkward and slow, though: for arbitrary-precision integers a machine float/double isn't precise enough, so I'd probably need a rational-arithmetic library, and the rationals involved wouldn't be simple.

It may be possible to use an imprecise base-conversion factor. With an underestimating factor you'd get an underestimate of the log value; divide through by $10^x$ for that estimate, then repeat as often as necessary to top up the underestimate.
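A sketch of this "underestimate, divide, repeat" idea, assuming Python's built-in bignums (whose `//` hides the division cost discussed below). The factor $1233/4096 \approx 0.30102$ is a slight underestimate of $\log_{10} 2 \approx 0.30103$, so the exponent computed from the bit length never overshoots:

```python
def digits_estimate(n: int) -> int:
    """Decimal digit count via an underestimated base-10 log."""
    digits = 0
    while n >= 10:
        # Underestimate log10(n) from the bit length: since
        # 1233/4096 < log10(2), x <= floor(log10(n)) always holds.
        x = ((n.bit_length() - 1) * 1233) // 4096
        if x == 0:
            break                   # n too small for a useful estimate
        n //= 10 ** x               # strip x digits in one division
        digits += x
    return digits + len(str(n))     # small remainder: count directly
```

Because the underestimate is within a fraction of a percent, the loop converges in very few iterations even for huge inputs.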

The problem is that dividing through by $10^x$ isn't trivial unless you happen to be working in base 10.

Probably best to stick with repeated division by 10, or at least by powers of 10. Constructing a list of powers ($10^1, 10^2, 10^4, 10^8$ and so on) by repeated squaring wouldn't be hard: stop once a power exceeds the target number, then do the divisions from largest to smallest. Each power needs to be considered at most once.
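The squared-powers scheme above might look like this (a sketch, with Python's arbitrary-precision ints standing in for a bignum library):

```python
def digits_squaring(n: int) -> int:
    """Decimal digit count via powers of 10 built by squaring."""
    # Build 10^1, 10^2, 10^4, 10^8, ... until a power exceeds n.
    powers = []                     # entries: (exponent, 10**exponent)
    e, p = 1, 10
    while p <= n:
        powers.append((e, p))
        e, p = 2 * e, p * p
    # Divide through, largest power first; after dividing by the
    # largest, the quotient is small enough that each remaining
    # power is needed at most once.
    digits = 1
    for e, p in reversed(powers):
        if n >= p:
            n //= p
            digits += e
    return digits
```

This does $O(\log \log n)$ big divisions instead of one per digit, at the cost of building (and storing) the squared powers first.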

Even then, it may be better not to build powers of 10 that won't fit in a machine word, and instead do repeated division by (small powers of) 10.
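A sketch of that word-sized variant: strip nine digits at a time with one division by $10^9$ (which fits comfortably in a 32-bit word, so a bignum library can use its cheap bignum-by-word division), then finish the small remainder digit by digit:

```python
CHUNK = 10 ** 9   # largest power of 10 below 2**32

def digits_chunked(n: int) -> int:
    """Decimal digit count via word-sized chunks of 9 digits."""
    digits = 0
    while n >= CHUNK:
        n //= CHUNK                 # one division removes 9 digits
        digits += 9
    while n >= 10:                  # remainder now fits in a word
        n //= 10
        digits += 1
    return digits + 1
```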

Also, this now sounds more like programming than math, so sorry about that.