Number of zeros in difference of exponential sums: $\sum\limits_{i=1}^n a_i^x - \sum\limits_{i=1}^n b_i^x$

Here is an argument (revised, with nontrivial input from the OP) that the number of roots cannot exceed $n$. It has a calculus formula piece, a variation diminishing piece, and a combinatorics piece.

Change the notation somewhat, so $$f(x)=\sum_{i=1}^n e^{a_ix} - \sum_{i=1}^n e^{b_ix},$$ for real $a_i, b_i$. To simplify things later assume all the $a_i$ and $b_i$ are distinct.
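As a quick numerical sanity check of the bound to be proved (a sketch only; the sample exponents below are arbitrary), one can count sign changes of $f$ on a fine grid. Each grid sign change certifies a root by the intermediate value theorem, so the count is a lower bound on the number of real roots, and it should never exceed $n$:

```python
import math
import random

random.seed(1)
n = 4
a = [random.uniform(-2, 2) for _ in range(n)]  # arbitrary sample exponents
b = [random.uniform(-2, 2) for _ in range(n)]

def f(x):
    return sum(math.exp(ai * x) for ai in a) - sum(math.exp(bi * x) for bi in b)

# Sign changes of f on a fine grid: a lower bound on the number of real roots.
xs = [i / 100 for i in range(-1000, 1001)]
vals = [f(x) for x in xs]
changes = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
assert changes <= n  # consistent with the claimed bound of n roots
```

Note that $f(0)=0$ always (both sums equal $n$ at $x=0$), so $x=0$ accounts for one of the allowed $n$ roots.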

Define $G(t)=\#\{i:a_i\le t\}-\#\{i:b_i\le t\}$, where "$\#$" denotes "cardinality of". Then $$f(x)= -x \int_{\mathbb R} e^{tx} G(t)\,dt,$$ as may be seen by writing $$f(x)=\sum_i (e^{a_ix}-e^{b_ix})=\sum_i x\int_{b_i}^{a_i}e^{tx}dt$$ and noting that $\int_{b_i}^{a_i}e^{tx}\,dt=\int_{\mathbb R}e^{tx}\bigl(\mathbb 1_{\{t\le a_i\}}-\mathbb 1_{\{t\le b_i\}}\bigr)\,dt$, and that $\sum_i\bigl(\mathbb 1_{\{t\le a_i\}}-\mathbb 1_{\{t\le b_i\}}\bigr)=-G(t)$ for all but finitely many $t$. (The overall sign is immaterial for counting roots.) This is the calculus formula piece of the argument.

The combinatorial argument below shows the function $G$ can have at most $n-1$ changes of sign. Hence (and this is the variation diminishing part of the argument) its bilateral Laplace transform $g(x)=\int_{\mathbb R}e^{tx}G(t)\,dt$ has at most $n-1$ roots, and so $f(x)=-xg(x)$ has at most $n$ roots. (The number of sign changes $S(G)$ of $G$ is defined as the supremum, over all increasing sequences $t_1<\cdots<t_k$ of all lengths $k$, of the number of strict sign changes in $G(t_1),\ldots, G(t_k)$, ignoring zero values.)
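Here is a small numerical sketch of the calculus formula (the sample exponents are arbitrary assumptions): since $G$ is constant between consecutive points of $M$, the integral $g(x)=\int_{\mathbb R} e^{tx}G(t)\,dt$ can be computed in closed form interval by interval, and $|f(x)|$ should match $|x\,g(x)|$, the overall sign being irrelevant for root counting:

```python
import math

# arbitrary sample data: any distinct reals work
a = [0.3, 1.1, -0.7]
b = [0.5, -1.2, 2.0]

def f(x):
    return sum(math.exp(ai * x) for ai in a) - sum(math.exp(bi * x) for bi in b)

def G(t):
    # G(t) = #{i: a_i <= t} - #{i: b_i <= t}
    return sum(1 for ai in a if ai <= t) - sum(1 for bi in b if bi <= t)

def g(x):
    # exact integral of e^{tx} G(t) over each interval on which G is constant
    pts = sorted(a + b)
    total = 0.0
    for u, v in zip(pts, pts[1:]):
        c = G(u)  # G is right-continuous, hence constant on [u, v)
        total += c * (math.exp(v * x) - math.exp(u * x)) / x
    return total

for x in (-2.0, -0.5, 0.7, 1.5):
    assert abs(abs(f(x)) - abs(x * g(x))) < 1e-9
```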

The function $G$ is piecewise constant, integer valued, and continuous on the right with limits on the left; all its discontinuities are $\pm1$ jumps, which occur exactly at points of $M$, the set of $a_i$ and $b_i$ values. Let $m_1<m_2<\cdots<m_{2n}$ be the elements of $M$ in increasing order (they are distinct, by our standing assumption). Since $G$ is constant on each interval $[m_j,m_{j+1})$ and vanishes outside $[m_1,m_{2n})$, $S(G)$ is equal to the number of sign changes in the particular sequence $G(m_1),\ldots,G(m_{2n})$.

Since $G(m_{i+1})-G(m_i)=\pm1$ for all $i<2n$, the number of sign changes in $G$ is thus the number of subscripts $j$ with $1<j<2n$ for which $(G(m_{j-1}),G(m_j),G(m_{j+1}))=(1,0,-1)$ or $=(-1,0,1)$. For this to happen $j$ must be even: each jump changes the parity of $G$, so $G(m_j)\equiv j \pmod 2$, and $G(m_j)=0$ forces $j$ to be even. Since there are $n-1$ even $j$ with $1<j<2n$, we see that $S(G)$ is at most $n-1$.
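The bound $S(G)\le n-1$ can be spot-checked by brute force; here is a randomized sketch (with arbitrarily chosen distinct integer values standing in for the $a_i$ and $b_i$):

```python
import random

def sign_changes(vals):
    # strict sign changes, ignoring zero values
    signs = [1 if v > 0 else -1 for v in vals if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    pts = random.sample(range(100), 2 * n)  # 2n distinct values
    a, b = pts[:n], pts[n:]
    m = sorted(pts)
    # G evaluated at the jump points m_1 < ... < m_{2n}
    G_vals = [sum(1 for ai in a if ai <= t) - sum(1 for bi in b if bi <= t)
              for t in m]
    assert sign_changes(G_vals) <= n - 1
```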

We can relax the restriction that all the elements of $M$ be distinct by observing that a perturbation of the elements of $M$ that preserves strict inequalities and breaks ties cannot decrease the number of sign changes. I hope the following example makes clear how this happens. Suppose we have $$a_1<b_1=b_2<a_2<a_3<b_3=b_4<a_4$$ and $$a_1^*<b_1^*<b_2^*<a_2^*<a_3^*<b_3^*<b_4^*<a_4^*,$$ with corresponding functions $G$ and $G^*$; we know $S(G^*)\le n-1$. The calculation of $S(G)$ is summarized in the chart $$ \begin{matrix} m_i:&a_1 & b_1 &b_2 & a_2 & a_3 & b_3&b_4 & a_4\\ G(m_i):& 1 & -1 & -1 & 0 & 1 & -1 & -1 & 0 \end{matrix} $$ (where we see jumps of size $-2$ at $m_2=m_3$, etc.), and that of $S(G^*)$ in the chart $$ \begin{matrix} m_i:&a_1^* & b_1^* &b_2^* & a_2^* & a_3^* & b_3^*&b_4^* & a_4^*\\ G^*(m_i):& 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 \end{matrix} $$ which, in this case, show the same number of sign changes in the bottom rows. More generally, to each increasing sequence $t_1<\cdots< t_k$ there corresponds a sequence $t_1^*<\cdots <t_k^*$ such that the sequence of values $G(t_1),\ldots,G(t_k)$ equals the sequence of values $G^*(t_1^*),\ldots,G^*(t_k^*)$. So the supremum defining $S(G)$ extends over a subset of the sign-change counts considered in defining $S(G^*)$. Hence $S(G)\le S(G^*)\le n-1$.
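The two charts above can be reproduced mechanically; in this sketch the tied configuration uses the (assumed, purely illustrative) values $0,2,3,5$ for the $a_i$ and $1,1,4,4$ for the $b_i$:

```python
def sign_changes(vals):
    # strict sign changes, ignoring zero values
    signs = [1 if v > 0 else -1 for v in vals if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def G_values(a, b):
    # G evaluated at the (possibly tied) points of M, in sorted order
    m = sorted(a + b)
    return [sum(1 for x in a if x <= t) - sum(1 for x in b if x <= t) for t in m]

# tied configuration: a1 < b1 = b2 < a2 < a3 < b3 = b4 < a4
a, b = [0, 2, 3, 5], [1, 1, 4, 4]
# tie-breaking perturbation preserving all strict inequalities
a_s, b_s = [0.0, 2.0, 3.0, 5.0], [0.9, 1.1, 3.9, 4.1]

print(G_values(a, b))      # [1, -1, -1, 0, 1, -1, -1, 0], as in the first chart
print(G_values(a_s, b_s))  # [1, 0, -1, 0, 1, 0, -1, 0], as in the second chart
assert sign_changes(G_values(a, b)) <= sign_changes(G_values(a_s, b_s))
```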

The basic variation diminishing (or total positivity) fact used here is due, I suppose, to Schoenberg: a bilateral Laplace transform $f(x)=\int_{\mathbb R} e^{xy}\,\nu(dy)$ of a signed measure $\nu$ cannot have more sign changes than $\nu$ has. This is more or less equivalent to convolution with the Gaussian kernel having the variation diminishing property, and it generalizes Descartes' rule of signs. It is contained in S. Karlin's magisterial but diffusely organized 1968 book Total Positivity (see pp. 233, 237). See Schoenberg, I. J., "On Pólya frequency functions. I. The totally positive functions and their Laplace transforms," J. Analyse Math. 1 (1951), 331–374 (MR0047732); if I come across a more recent and accessible source, I'll add it.