Let $\{a_n\}$ and $\{b_n\}$ be two integer sequences such that $a_1=b_1=1,$

\begin{align*} a_n=a_{n-1}+b_{n-1},\qquad\qquad b_n =r\,a_{n-1}+b_{n-1}. \end{align*}

where $r$ is a fixed natural number greater than $1$.

Write $c_n={b_n}/{a_n}$. How can one prove that the following limit exists, without finding it?

\begin{align*} \lim_{n\to\infty} c_n. \end{align*}


Attempt

I used mathematical induction to prove $\{c_{2n}\}$ is monotonically decreasing and $\{c_{2n+1}\}$ is monotonically increasing.

Then, note that

\begin{align*} c_{n+1}=\frac{b_{n+1}}{a_{n+1}}&=\frac{r\,a_n+b_n}{a_n+b_n}=\frac{r+c_n}{1+c_n}\\ &=r-\frac{(r-1)\,c_n}{1+c_n}<r \end{align*}

Therefore, $0<c_n<r$.
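As a sanity check (not part of the proof), the bound $0<c_n<r$ and the monotonicity of the two subsequences can be verified in exact rational arithmetic; here is a minimal Python sketch (the helper name `c_sequence` is my own):

```python
from fractions import Fraction

def c_sequence(r, n_terms):
    """Exact c_n = b_n/a_n for a_n = a_{n-1} + b_{n-1},
    b_n = r*a_{n-1} + b_{n-1}, with a_1 = b_1 = 1."""
    a, b = 1, 1
    cs = [Fraction(b, a)]
    for _ in range(n_terms - 1):
        a, b = a + b, r * a + b
        cs.append(Fraction(b, a))
    return cs

r = 3
cs = c_sequence(r, 40)
assert all(0 < c < r for c in cs)                    # bound 0 < c_n < r
evens = cs[1::2]                                     # c_2, c_4, c_6, ...
odds = cs[0::2]                                      # c_1, c_3, c_5, ...
assert all(x > y for x, y in zip(evens, evens[1:]))  # even terms decrease
assert all(x < y for x, y in zip(odds, odds[1:]))    # odd terms increase
```

Using `Fraction` avoids any doubt about floating-point rounding masking the monotonicity.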

$\vdots$

Instead of continuing with this method (it gets too complicated), is there a faster way to prove existence?

I tried two other approaches, the definition of a Cauchy sequence and the contraction mapping principle, but I don't think they work here.

Any hints or tips are appreciated:)


Solution 1:

Write your formulas in matrix form:

$$\begin{bmatrix}a_n\\b_n\end{bmatrix} = \begin{bmatrix}1&1\\r&1\end{bmatrix}\begin{bmatrix}a_{n-1}\\b_{n-1}\end{bmatrix}$$ i.e. $$w_n=Aw_{n-1}$$

Notice that if you normalize every vector after applying $A$, that is, if you consider $$v_n=\frac{Av_{n-1}}{||Av_{n-1}||}$$ instead of $w_n$, then it will generate the same sequence $\{c_n\}$. This is because

  • normalization changes the magnitude of the given vector, not its direction
  • $A$ is a linear transformation; changing the magnitude of its input doesn't change the direction of its output
  • and finally $c_n$ depends only on the direction of the $n$-th vector.
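The equivalence these bullets describe is easy to observe numerically: the raw iteration $w_n = Aw_{n-1}$ and the normalized iteration produce identical ratios. A minimal sketch (Python with NumPy; the function names are mine):

```python
import numpy as np

def ratios_unnormalized(r, n):
    """Ratios b_n/a_n from the raw iteration w_n = A w_{n-1}."""
    A = np.array([[1.0, 1.0], [r, 1.0]])
    w = np.array([1.0, 1.0])            # w_1 = (a_1, b_1)
    out = []
    for _ in range(n):
        out.append(w[1] / w[0])
        w = A @ w
    return out

def ratios_normalized(r, n):
    """Same ratios from v_n = A v_{n-1} / ||A v_{n-1}||."""
    A = np.array([[1.0, 1.0], [r, 1.0]])
    v = np.array([1.0, 1.0])
    v = v / np.linalg.norm(v)
    out = []
    for _ in range(n):
        out.append(v[1] / v[0])
        v = A @ v
        v = v / np.linalg.norm(v)
    return out

r, n = 3, 30
# normalization changes magnitudes only, so the ratios coincide
assert np.allclose(ratios_unnormalized(r, n), ratios_normalized(r, n))
```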

This formula for $v_n$ is exactly what power iteration computes. The following statement of its convergence is useful (translated to match our chosen names and starting index):

If $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $v_{1}$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $\{v_n\}$ converges to an eigenvector associated with the dominant eigenvalue.

Think about the eigenvalues (say: $\lambda$) and how they depend on $r$. You will have to consider a quadratic equation in $\lambda$ whose discriminant is of first degree in $r$. By analyzing the two you will find that for $r>1$ there are always two distinct real eigenvalues, neither of which is $0$ (this will be important in a moment). Vieta's formulas then show they cannot sum to $0$. Since they are neither equal nor opposite, one eigenvalue is strictly greater in magnitude for sure.
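These eigenvalue facts are mechanical to verify: the characteristic polynomial of $A$ is $\lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A)$, so it suffices to inspect the trace, determinant, and discriminant. A quick check for a range of $r$ (Python; the helper name is mine):

```python
def char_poly_data(r):
    """Trace, determinant, and discriminant of the characteristic
    polynomial lambda^2 - tr(A)*lambda + det(A) of A = [[1,1],[r,1]]."""
    tr = 1 + 1                     # trace of A
    det = 1 * 1 - 1 * r            # determinant of A
    disc = tr * tr - 4 * det       # discriminant, first degree in r
    return tr, det, disc

for r in range(2, 50):
    tr, det, disc = char_poly_data(r)
    assert disc > 0   # two distinct real eigenvalues
    assert det != 0   # no zero eigenvalue (their product is det)
    assert tr != 0    # eigenvalues cannot sum to 0 (their sum is trace)
```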

If the starting non-zero vector had a zero component in the direction of one of the two eigenvectors, then it would be parallel to the other eigenvector. Applying $A$ to it would then just multiply it by the (nonzero) eigenvalue, so $w_1, w_2, w_3, \dots$ would all share the same direction, $c_1, c_2, c_3, \dots$ would all be equal, and $\{c_n\}$ would trivially converge. In our case, however, $c_1=1$ and $c_2\neq1$. This means $v_1$ has a non-zero component in the direction of each of the two eigenvectors; in particular, in the direction associated with the dominant eigenvalue.

So the conditions are met, $\{v_n\}$ converges to some eigenvector. It has some direction which corresponds to the limit of $\{c_n\}$.
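Power iteration is also easy to run directly: the ratios stabilize, and the limiting direction is a fixed direction of $A$, i.e. its ratio $c$ satisfies $(r+c)/(1+c)=c$. A small demonstration, not a proof (Python/NumPy; function name is mine):

```python
import numpy as np

def power_iteration_ratios(r, n):
    """Run normalized power iteration on A = [[1,1],[r,1]] and
    return the ratio (second / first component) of each iterate."""
    A = np.array([[1.0, 1.0], [r, 1.0]])
    v = np.array([1.0, 1.0]) / np.sqrt(2.0)
    out = []
    for _ in range(n):
        out.append(v[1] / v[0])
        v = A @ v
        v = v / np.linalg.norm(v)
    return out

for r in (2, 3, 10):
    cs = power_iteration_ratios(r, 60)
    # successive ratios settle: the direction of v_n converges
    assert abs(cs[-1] - cs[-2]) < 1e-12
    # the limiting ratio c is a fixed direction: (r + c)/(1 + c) == c
    c = cs[-1]
    assert abs((r + c) / (1 + c) - c) < 1e-9
```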

Q: But wait! If $\begin{bmatrix}0\\1\end{bmatrix}$ was this eigenvector associated with the dominant eigenvalue, wouldn't the limit be $\infty$ or $-\infty$?

A: I think it would, but it doesn't matter: calculate $A\begin{bmatrix}0\\1\end{bmatrix}$ and you will see that it is not an eigenvector.

Solution 2:

Not sure if the following fits the requirement to avoid finding the limit, but anyway... Consider the change of variable
$$x_n=c_n-\frac{r}{c_n},$$
then
$$x_{n+1}=-f(c_n)\,x_n \qquad\text{with}\qquad f(c)=\frac{(r-1)c}{(c+1)(c+r)}.$$
Now, for every $c>0$,
$$(c+1)(c+r)\geqslant(r+1)c,$$
hence $f(c)\leqslant s$ with
$$s=\frac{r-1}{r+1}<1.$$
Thus
$$|x_n|\leqslant s^{n-1}\cdot|x_1|,$$
which implies that $x_n\to0$, hence $c_n\to\text{____}$
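The geometric bound $|x_n|\leqslant s^{n-1}|x_1|$ can be checked exactly in rational arithmetic; a sketch (Python, helper name mine):

```python
from fractions import Fraction

def check_geometric_bound(r, n_terms):
    """Verify |x_n| <= s^(n-1) * |x_1| exactly, where
    x_n = c_n - r/c_n and s = (r-1)/(r+1)."""
    s = Fraction(r - 1, r + 1)
    c = Fraction(1)                 # c_1 = b_1/a_1 = 1
    bound = abs(c - r / c)          # |x_1|
    for _ in range(n_terms):
        x = c - r / c               # x_n = c_n - r/c_n
        assert abs(x) <= bound      # |x_n| <= s^(n-1) |x_1|
        bound *= s
        c = (r + c) / (1 + c)       # c_{n+1} = (r + c_n)/(1 + c_n)
    return True

for r in (2, 3, 5):
    assert check_geometric_bound(r, 25)
```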

Solution 3:

Again, not sure if this satisfies your criteria, but if we define $f : [0,\infty) \to \mathbb{R}$ by

$$ f(z) = \frac{r+z}{1+z} $$

then $f$ is continuous and decreasing, $1 < f \leq r$ on $[0,\infty)$, and $c_{n+1} = f(c_n)$. So if we let

$$ \beta = \limsup_{n\to\infty} c_n, \qquad \alpha = \liminf_{n\to\infty} c_n $$

then we have $\beta = f(\alpha)$ and $\alpha = f(\beta)$ (using that $f$ is continuous and decreasing), i.e., both $\alpha$ and $\beta$ are non-negative fixed points of $g = f\circ f$. Since $g$ has a unique non-negative fixed point, this proves that $\alpha = \beta$, so the limit exists.
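The two facts this last step leans on, that $g = f\circ f$ is increasing on $[0,r]$ and has a single non-negative fixed point, can be spot-checked numerically without naming the limit (Python sketch; `f` and `g` as defined above):

```python
def f(z, r):
    return (r + z) / (1 + z)

def g(z, r):
    """g = f o f."""
    return f(f(z, r), r)

r = 3
zs = [i * r / 10000 for i in range(10001)]   # grid on [0, r]
# g is increasing on [0, r] ...
assert all(g(z1, r) < g(z2, r) for z1, z2 in zip(zs, zs[1:]))
# ... and g(z) - z changes sign exactly once on [0, r]: a unique
# fixed point, so alpha = g(alpha), beta = g(beta) force alpha == beta
signs = [g(z, r) - z > 0 for z in zs]
assert sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2) == 1
```

Since $0 < c_n < r$, restricting the scan to $[0, r]$ covers all candidate values of $\alpha$ and $\beta$.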