Intuitive use of logarithms

Logarithms come in handy when searching for power laws. Suppose you have some data points given as pairs of numbers $(x,y)$. You could plot a graph directly of the two quantities, but you could also try taking logarithms of both variables. If there is a power law relationship between $y$ and $x$ like

$$y=a x^n$$

then taking the log turns it into a linear relationship:

$$\log(y) = n \log(x) + \log(a)$$

Finding the exponent $n$ of the power law is now a piece of cake, since it corresponds to the slope of the graph.
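To make this concrete, here is a minimal sketch of recovering the exponent from noise-free synthetic data (assuming NumPy; the constants $a=2$ and $n=1.5$ are made up for illustration):

```python
import numpy as np

# Synthetic data following y = a * x^n with a = 2.0 and n = 1.5 (illustrative values)
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 * x**1.5

# Fit a straight line to (log x, log y): the slope is the exponent n,
# the intercept is log(a).
n_est, log_a_est = np.polyfit(np.log(x), np.log(y), 1)
print(n_est)              # ~1.5, the exponent of the power law
print(np.exp(log_a_est))  # ~2.0, the prefactor a
```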

If the data do not follow a power law, but an exponential law or a logarithmic law, taking the log of only one of the variables will also reveal this. Say for an exponential law

$$y=a e^{b x}$$

taking the log of both sides gives

$$\log(y) = b x + \log(a)$$

This means there will be a linear relationship between $x$ and $\log(y)$.
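The same trick, with only $y$ log-transformed, recovers the rate of an exponential law; a small sketch under the same assumptions (NumPy, made-up constants):

```python
import numpy as np

# Synthetic data following y = a * exp(b*x) with a = 0.5 and b = 0.3 (illustrative values)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 0.5 * np.exp(0.3 * x)

# A straight line in (x, log y) reveals the exponential law:
# the slope is b and the intercept is log(a).
b_est, log_a_est = np.polyfit(x, np.log(y), 1)
print(b_est)              # ~0.3, the rate b
print(np.exp(log_a_est))  # ~0.5, the prefactor a
```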


Since logarithms convert multiplication into addition, they can be used to simplify basic arithmetic in the absence of computers.
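For example, with a table of base-10 logarithms (values rounded), a multiplication reduces to an addition plus one look-up in reverse:

$$\log_{10}(37 \times 74) = \log_{10}(37) + \log_{10}(74) \approx 1.5682 + 1.8692 = 3.4374, \quad \text{so } 37 \times 74 \approx 10^{3.4374} \approx 2738.$$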


I can visualize a logarithm if I think of it as the answer to questions such as these:

  1. "How many places does this number have?"

    • log ($\log_{10}$): for a number written in decimal
    • lb ($\log_2$): for a number written in binary (a practical application: detecting whether integer operations might overflow)
    • and ln ($\log_e$): well, something in between
  2. "How many levels does a (balanced) tree have which fits this number of leaf nodes?"

    • log: each node has 10 children
    • lb: each node has 2 children (a binary tree)
    • this is mostly helpful if you know something about graph theory; if you're good at visualizing things, you can use it to get a feel for the approximate value of the logarithm of a number.
    • it also helps to understand why binary search, tree-map lookup and quicksort are so fast: the log function plays an important role in algorithm complexity, and the tree picture can help in finding fast algorithms for a problem (see the sketch after this list).
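Here is a minimal sketch of both readings, using Python's standard math module (the number $n$ is just an example):

```python
import math

n = 1_000_000

# 1. "How many places does this number have?"
decimal_places = math.floor(math.log10(n)) + 1   # 7 digits in base 10
binary_places  = n.bit_length()                  # 20 bits, i.e. floor(log2(n)) + 1

# 2. "How many levels does a balanced binary tree need for n leaf nodes?"
# This is also the worst-case number of steps in a binary search over n items.
levels = math.ceil(math.log2(n))                 # 20 levels

print(decimal_places, binary_places, levels)
```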

Logarithms are also really helpful in the way that J.M. suggested in the comment to your question: getting minute quantities to a more usable scale.

A good example: probabilities.

In tasks such as speech-to-text and many other language-related computational problems, you deal with strings of elements (such as sentences composed of words), each with an associated probability (numbers between 0 and 1, often near zero, such as 0.00763, 0.034 and 0.000069). To get the total probability over all such elements, i.e. the whole sentence, the individual probabilities are multiplied: for example 0.00763 * 0.034 * 0.000069, which yields 0.00000001789998. Such numbers soon get too small for computers to handle easily if you use ordinary 32-bit precision (and even double precision has its limits, and you never know in advance how small the probabilities might get). When that happens, the results become inaccurate and might even be rounded down to zero, which means the whole calculation is lost.

However, if you take the negative log ($-\log$) of those numbers, you get two important advantages:

  1. the numbers stay in a range which is easily expressed in 32-bit floating point numbers;

  2. you can simply add the logarithmic values, which is the same as multiplying the original values, and addition is much faster in terms of processing time than multiplication. Example:

    • -log(0.00763) = 2.11747... (more decimal places are irrelevant)
    • -log(0.034) = 1.46852...
    • -log(0.000069) = 4.16115...
    • 2.11747 + 1.46852 + 4.16115 = 7.74714
    • $10^{-7.74714}$ = 0.00000001790028...

    that's really, really close to the original number and we only had to keep track of six places per intermediate number!
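A minimal sketch of the probability example above (plain Python; the probabilities are the ones from the text):

```python
import math

probs = [0.00763, 0.034, 0.000069]

# Direct product: fine for three factors, but it underflows toward 0.0
# once enough tiny probabilities are multiplied together.
direct = math.prod(probs)                         # ~1.79e-08

# Log space: add the negative logs instead of multiplying the raw values,
# and only exponentiate at the very end (if at all).
neg_log_sum = sum(-math.log10(p) for p in probs)  # ~7.74714
recovered = 10 ** -neg_log_sum                    # ~1.79e-08 again

print(direct, neg_log_sum, recovered)
```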


The log function is the inverse of the exponential function. So, in short, use logarithms when you are given the answer to an exponent problem and wish to know the question.

Exponent problem

"At 2% compound interest, how much will a $1000 investment be worth in 5 years?" (you are given the length of time and want to find the final value of the investment)

Logarithm problem

"At 2% compound interest, how long will it take for $1000 to grow to 1500?" (you are given find the final value of the investment, and want to know the lengh of time)

In short, functions are used to determine the y-coordinate of a graph when given the x-coordinate. Inverse functions (such as the log function) work the other way around: you use them to find the x-coordinate of a graph given the y-coordinate. In the examples above, the x-coordinate is the elapsed time and the y-coordinate is the value of the investment.
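A small sketch of both directions (plain Python; the numbers are the ones from the examples above):

```python
import math

principal, rate = 1000, 0.02

# Exponent problem: value of the investment after 5 years.
value_after_5_years = principal * (1 + rate) ** 5                 # ~1104.08

# Logarithm problem: years needed for $1000 to grow to 1500.
years_to_1500 = math.log(1500 / principal) / math.log(1 + rate)   # ~20.5 years

print(value_after_5_years, years_to_1500)
```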


This is related to Joe's answer, but I want to emphasize something else. The main point I want to make is that if $n^\text{th}$ roots are intuitive, then so are logarithms.

If you want to solve the equation $x^5=11$, you take a fifth root: $x=\sqrt[5]{11}$. But what does $\sqrt[5]{11}$ mean? Well, it means the unique real number $x$ such that $x^5=11$, so this alone doesn't tell you much. To get somewhat of an intuitive feel for it, you can look at the curve $y=x^5$, notice that it is always increasing, and that it crosses $y=11$ somewhere between $1$ and $2$ (because $1^5=1$ and $2^5=32$). But ultimately the definition of $\sqrt[5]{\ }$ relies on the notion of inverting the more familiar operation of multiplying 5 copies of a number, $x\mapsto x^5$.

What if you want to solve the equation $5^x=11$? You take a logarithm with base $5$, $x=\log_5(11)$. Again this is a solution by definition: $\log_5(11)$ is the unique real number $x$ such that $5^x=11$. To get a more intuitive feel for it, you can look at the curve $y=5^x$, notice that it is always increasing, and that it crosses $y=11$ somewhere between $1$ and $2$ (because $5^1=5$ and $5^2=25$). Ultimately the definition of $\log_5$ relies on the notion of inverting the more familiar operation of raising $5$ to a power, $x\mapsto 5^x$.
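In practice $\log_5(11)$ is evaluated with the change-of-base formula (values rounded):

$$\log_5(11) = \frac{\ln 11}{\ln 5} \approx \frac{2.3979}{1.6094} \approx 1.49,$$

which indeed lies between $1$ and $2$, as the curve suggests.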

One way to get an intuitive feeling for the properties of logarithms is to see how the properties are derived from the more intuitive exponential function. For example, there is the familiar rule of exponents, $5^{a+b}=5^a\cdot 5^b$. This equation implies by the definition of the logarithm that $a+b=\log_5(5^a\cdot 5^b)$. On the other hand, $a=\log_5(5^a)$ and $b=\log_5(5^b)$ also by the definition of the logarithm, so the exponential identity becomes the logarithmic identity $\log_5(5^a)+\log_5(5^b)=\log_5(5^a\cdot 5^b)$. When $a$ and $b$ range over the real numbers, this implies the general product-to-sum identity, $\log_5(uv)=\log_5(u)+\log_5(v)$.

Everything comes from "switching $x$ and $y$," as Joe said. For example, every time $1$ is added to $x$, $y=5^x$ increases by a factor of five: $5^{x+1}=5\cdot 5^x$. Therefore, for the inverse, every time $x$ increases by a factor of $5$, $1$ is added to $y=\log_5(x)$.

Part of the answer to your question to when we would say "Let's take the $\log$" is whenever we are trying to solve for a quantity in an exponent, echoing Joe's answer somewhat (but devoid of the practical context given there). If you want to know when $(2t+1)^3 = 10$, a good first step is to say, "Let's take the cube root!" Analogously, if you want to know when $3^{2t+1}=10$, a good first step is to say, "Let's take the $\log$!"
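Carrying that last example through (values rounded):

$$3^{2t+1} = 10 \;\Rightarrow\; 2t+1 = \log_3(10) = \frac{\ln 10}{\ln 3} \approx 2.096 \;\Rightarrow\; t \approx 0.548.$$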