Assume the tridiagonal matrix $T$ is in this form:

$$ T = \begin{bmatrix} a & c & & & &\\ b & a & c & & &\\ & b & a & c & &\\ & & &\ddots & &\\ & & & b & a & c\\ & & & & b & a\\ \end{bmatrix} $$

We must show that its eigenvalues are of the form

$$a + 2 \sqrt{bc} \, \cos \left( \frac{k \pi}{n+1} \right), \qquad k=1,\dots,n,$$

where $$a=qh^2-1, ~~ b=1- \frac{ph}{2}, ~~ c =1+\frac{ph}{2}, ~~ q \leq 0.$$
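
The claim is easy to sanity-check numerically before proving it. A minimal sketch (the values of $p$, $q$, $h$ and $n$ below are arbitrary illustrative choices):

```python
# Build the n x n tridiagonal matrix T and compare its spectrum with the claimed formula.
import numpy as np

p, q, h, n = 1.3, -0.7, 0.1, 8          # arbitrary sample parameters, q <= 0
a = q * h**2 - 1
b = 1 - p * h / 2
c = 1 + p * h / 2

# a on the diagonal, b on the subdiagonal, c on the superdiagonal
T = np.diag(np.full(n, a)) + np.diag(np.full(n - 1, b), -1) + np.diag(np.full(n - 1, c), 1)

k = np.arange(1, n + 1)
claimed = a + 2 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))

print(np.sort(np.linalg.eigvals(T).real))
print(np.sort(claimed))                  # the two lists agree to round-off
```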


I am not sure that this is entirely rigorous (!!!), but here's an idea.

Let $T_n$ be your tridiagonal matrix of order $n$, and let $S_n=T_n-\mathbb{I}\sigma$. Let $d_n$ be the determinant of $S_n$. Solving $d_n=0$ gives the desired eigenvalues $\sigma_1,\dots,\sigma_n$. Expanding $d_n$ by Laplace's rule and letting $a'=a-\sigma$, you get the recurrence relation $d_{n+1}=a'\cdot d_{n}-bc\cdot d_{n-1}$, with $d_0=1$ and $d_1=a'$. You can now set up the following system: \begin{equation} \left(\begin{array}{c} d_{n+1}\\ d_{n} \end{array}\right) = \left( \begin{array}{cc} a' & -bc \\ 1 & 0 \end{array} \right) \left(\begin{array}{c} d_n\\ d_{n-1} \end{array}\right) \end{equation}

Let $v_{n}=(d_{n+1},d_n)$ and let $A$ be the above matrix. Then \begin{equation} v_n=Av_{n-1}=A^2v_{n-2}=\cdots=A^nv_0 \end{equation} where $v_0=(d_1,d_0)=(a',1)$. The problem now is computing $A^n$. Since $A$ is $2\times 2$, the Cayley–Hamilton theorem lets you write \begin{equation} A^n=\alpha A+\beta\mathbb{I} \end{equation}

To find $\alpha$ and $\beta$ you need the eigenvalues of $A$; call them $\lambda_1$ and $\lambda_2$. Since the relation above comes from the characteristic polynomial, it must also be satisfied by the eigenvalues, so the following system holds: \begin{equation} \lambda_1^n=\alpha \lambda_1+\beta\\ \lambda_2^n=\alpha \lambda_2+\beta \end{equation} If the two eigenvalues are different, you get $\alpha=\frac{\lambda_1^n-\lambda_2^n}{\lambda_1-\lambda_2}$ and $\beta=-\lambda_1\lambda_2\frac{\lambda_1^{n-1}-\lambda_2^{n-1}}{\lambda_1-\lambda_2}$, so \begin{equation} v_n=\alpha Av_0+\beta v_0\quad\Longrightarrow\quad d_n=\alpha a'+\beta = \frac{\lambda_1^n-\lambda_2^n}{\lambda_1-\lambda_2}a'-\lambda_1\lambda_2\frac{\lambda_1^{n-1}-\lambda_2^{n-1}}{\lambda_1-\lambda_2} \end{equation} Simplifying, \begin{equation} d_n=\frac{a'\lambda_1^n-a'\lambda_2^n-\lambda_2\lambda_1^n+\lambda_1\lambda_2^n}{\lambda_1-\lambda_2}=\frac{\lambda_1^n(a'-\lambda_2)-\lambda_2^n(a'-\lambda_1)}{\lambda_1-\lambda_2} \end{equation}

Computing the eigenvalues you get $\lambda_1 = \frac{1}{2}(a'+\sqrt{a'^2-4bc})$ and $\lambda_2=\frac{1}{2}(a'-\sqrt{a'^2-4bc})$, so $\lambda_1+\lambda_2=a'$ and we can write \begin{equation} d_n=\frac{\lambda_1^{n+1}-\lambda_2^{n+1}}{\lambda_1-\lambda_2} \end{equation} Since the eigenvalues are different, at least one of them is nonzero; without loss of generality $\lambda_2\neq 0$, so \begin{equation} d_n=\lambda_2^n\frac{(\lambda_1/\lambda_2)^{n+1}-1}{(\lambda_1/\lambda_2)-1} \end{equation}

Now, the original problem was finding the eigenvalues of the big matrix. That means we must look for the solutions of $d_n=0$, i.e. \begin{equation} (\lambda_1/\lambda_2)^{n+1}=1 \end{equation} Luckily we have invented complex numbers, so \begin{equation} \lambda_1/\lambda_2=e^{i\frac{2k\pi}{n+1}}\qquad\Longrightarrow\qquad\lambda_1=\lambda_2e^{i\frac{2k\pi}{n+1}}\qquad \text{for }k\in\{1,\dots,n\} \end{equation} Note that we cannot take $k=0$ because we assumed $\lambda_1\neq\lambda_2$.

Calling $z=e^{i\frac{2k\pi}{n+1}}$ and substituting the expressions for $\lambda_1$ and $\lambda_2$ we get \begin{equation} a'+\sqrt{a'^2-4bc}=(a'-\sqrt{a'^2-4bc})z \end{equation} or \begin{equation} a'\cdot(z-1)=\sqrt{a'^2-4bc}\cdot(z+1) \end{equation} Squaring, \begin{equation} a'^2(z-1)^2=a'^2(z+1)^2-4bc(z+1)^2\Longrightarrow a'^2=\frac{(z+1)^2}{z}bc \end{equation} So (the sign ambiguity introduced by squaring is absorbed in the last step below) \begin{equation} a'=\frac{z+1}{\sqrt{z}}\sqrt{bc}=\frac{e^{i\frac{2k\pi}{n+1}}+1}{e^{i\frac{k\pi}{n+1}}}\sqrt{bc} \end{equation} Multiplying by $1=e^{-i\frac{k\pi}{n+1}}/e^{-i\frac{k\pi}{n+1}}$ we get \begin{equation} a'=(e^{i\frac{k\pi}{n+1}}+e^{-i\frac{k\pi}{n+1}})\sqrt{bc} \end{equation} This expression is actually real, since $e^{i\theta}+e^{-i\theta}=2\cos\theta$. Recalling that $a'=a-\sigma$, the result follows: \begin{equation} \sigma=a-2\cos\left(\frac{k\pi}{n+1}\right)\sqrt{bc}=a+2\cos\left(\pi-\frac{k\pi}{n+1}\right)\sqrt{bc}=a+2\cos\left(\frac{h\pi}{n+1}\right)\sqrt{bc} \end{equation} with $h=n+1-k$, so $h\in \{1,\dots,n\}$.
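
The recurrence and the closed form for $d_n$ can be cross-checked numerically; a small sketch (here $a$, $b$, $c$, $\sigma$, $n$ are arbitrary test values, and the square root is taken in the complex plane):

```python
# Check d_n = det(T_n - sigma*I) against the recurrence and the closed form.
import numpy as np

a, b, c, sigma, n = 2.0, 0.9, 1.1, 0.3, 10
ap = a - sigma                                 # a' = a - sigma

Tn = np.diag(np.full(n, a)) + np.diag(np.full(n - 1, b), -1) + np.diag(np.full(n - 1, c), 1)
det_direct = np.linalg.det(Tn - sigma * np.eye(n))

# Recurrence d_{k+1} = a' d_k - bc d_{k-1}, with d_0 = 1, d_1 = a'.
d_prev, d_cur = 1.0, ap
for _ in range(n - 1):
    d_prev, d_cur = d_cur, ap * d_cur - b * c * d_prev

# Closed form via the eigenvalues of the 2x2 matrix A.
disc = np.sqrt(complex(ap**2 - 4 * b * c))
lam1, lam2 = (ap + disc) / 2, (ap - disc) / 2
closed = (lam1**(n + 1) - lam2**(n + 1)) / (lam1 - lam2)

print(det_direct, d_cur, closed.real)          # the three values agree to round-off
```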

EDIT: what happens if $\lambda_1=\lambda_2=\lambda$? In this case we have a single eigenvalue, so the characteristic polynomial of $A$ has the form $p(x)=(x-\lambda)^2$. We see that $p'(x)=2(x-\lambda)$, so both $p(\lambda)$ and $p'(\lambda)$ are zero. Then we can consider the system (the second equation comes from differentiating $x^n-\alpha x-\beta$, which is divisible by $p(x)=(x-\lambda)^2$, and evaluating at $x=\lambda$) \begin{equation} \lambda^n=\alpha \lambda+\beta\\ n\lambda^{n-1}=\alpha \end{equation} so $\alpha=n\lambda^{n-1}$ and $\beta=(1-n)\lambda^n$, with $\lambda=a'/2$.
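
The coefficients of the repeated-eigenvalue case can also be checked directly; a small sketch (the values of $a'$ and $n$ are arbitrary, and $bc$ is chosen so that the discriminant vanishes):

```python
# If a'^2 = 4bc then A has the single eigenvalue lambda = a'/2 and
# A^n = alpha*A + beta*I with alpha = n*lambda^(n-1), beta = (1-n)*lambda^n.
import numpy as np

ap, n = 3.0, 7
bc = ap**2 / 4                       # forces lambda_1 = lambda_2 = a'/2
lam = ap / 2

A = np.array([[ap, -bc], [1.0, 0.0]])
alpha = n * lam**(n - 1)
beta = (1 - n) * lam**n

print(np.allclose(np.linalg.matrix_power(A, n), alpha * A + beta * np.eye(2)))  # True
```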


Expanding the determinant along the first column, and then the resulting minor along its first column, you obtain the key recurrence

$$\Delta_n=a\Delta_{n-1}-bc\Delta_{n-2},$$

with $\Delta_0=1$ and $\Delta_1=a$.

The solution of the recurrence is

$$\Delta_n=\frac{\left(\dfrac{a+\sqrt{a^2-4bc}}2\right)^{n+1}-\left(\dfrac{a-\sqrt{a^2-4bc}}2\right)^{n+1}}{\sqrt{a^2-4bc}}.$$

This expression vanishes when

$$\frac{a-\sqrt{a^2-4bc}}{a+\sqrt{a^2-4bc}}=\frac{1-\sqrt{1-\dfrac{4bc}{a^2}}}{1+\sqrt{1-\dfrac{4bc}{a^2}}}$$ is an $(n+1)^{\text{th}}$ root of unity, say $\omega_k$.

Then

$$\frac{4bc}{a^2}=1-\left(\frac{\omega_k-1}{\omega_k+1}\right)^2=\frac{4\omega_k}{(\omega_k+1)^2}$$ and

$$a=\pm\frac{\omega_k+1}{2\sqrt{\omega_k}}2\sqrt{bc}.$$

With $\omega_k=e^{2ik\pi/(n+1)}$, the coefficient of $2\sqrt{bc}$ is real, since

$$\frac{\omega_k+1}{2\sqrt{\omega_k}}=\frac{e^{ik\pi/(n+1)}+e^{-ik\pi/(n+1)}}{2}=\cos\frac{k\pi}{n+1}.$$

Then, with $a-\lambda$ in place of $a$, we have established

$$\lambda= a\pm 2\sqrt{bc}\,\cos\frac{k\pi}{n+1}.$$
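
A quick numerical check that these $\lambda$ indeed annihilate the closed form for $\Delta_n$ (a sketch with arbitrary sample values of $a$, $b$, $c$, $n$; the square root is taken in the complex plane, and the helper name `delta` is mine):

```python
# Substituting a -> a - lambda_k in the closed form for Delta_n should give zero.
import numpy as np

a, b, c, n = 2.0, 0.8, 1.2, 9

def delta(x, m):
    """Closed form for Delta_m with diagonal entry x."""
    s = np.sqrt(complex(x**2 - 4 * b * c))
    return (((x + s) / 2)**(m + 1) - ((x - s) / 2)**(m + 1)) / s

k = np.arange(1, n + 1)
lam = a + 2 * np.sqrt(b * c) * np.cos(k * np.pi / (n + 1))
print(max(abs(delta(a - l, n)) for l in lam))   # numerically zero (round-off only)
```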


Consider the matrix $S$ which is equal to $T$ except that the diagonal is zero. That is, $T=a I + S$. If we knew the eigenvalues $\lambda_1,\ldots,\lambda_n$ of $S$, the eigenvalues of $T$ are simply $a+\lambda_1,\ldots,a+\lambda_n$.

Now let $$ D=\mathrm{diag}(\delta_1,\ldots,\delta_n), \quad \delta_i=\left(\frac{b}{c}\right)^{\frac{i-1}{2}}, $$ and notice that the matrix $P=D^{-1}SD$ is symmetric (assuming $b$ and $c$ have the same sign, so that $b/c>0$). Indeed, $$ p_{i+1,i}=\delta_{i+1}^{-1}b\,\delta_i=b\left(\frac{b}{c}\right)^{\frac{i-1}{2}-\frac{i}{2}}=\sqrt{bc}, \quad p_{i,i+1}=\delta_i^{-1}c\,\delta_{i+1}=\cdots=\sqrt{bc}. $$ Hence $$ T=a I + S \sim a I + P = a I + \sqrt{bc}\, Q, $$ where $Q$ is zero everywhere except for the ones just above and below the main diagonal, and hence the eigenvalues of $T$ are $a+\sqrt{bc}\,\mu$, where $\mu$ is an eigenvalue of $Q$.
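
This similarity is easy to verify in a few lines; a sketch assuming $b$ and $c$ have the same sign (sample values only):

```python
# Check that D^{-1} S D equals sqrt(bc) * Q.
import numpy as np

b, c, n = 0.7, 1.4, 6
S = np.diag(np.full(n - 1, b), -1) + np.diag(np.full(n - 1, c), 1)
D = np.diag((b / c) ** (np.arange(n) / 2))      # delta_i = (b/c)^((i-1)/2), i = 1..n
P = np.linalg.inv(D) @ S @ D

Q = np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 1), 1)
print(np.allclose(P, np.sqrt(b * c) * Q))        # True
```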

Now assume that a vector $v=(v_i)$ is an eigenvector of $Q$ associated with the eigenvalue $\mu$. We have $$\tag{1} v_{i-1}+v_{i+1}=\mu v_i, \quad i=2,\ldots,n-1, \quad v_2 = \mu v_1, \quad v_{n-1} = \mu v_n. $$

Look for $v_i$ in the form $v_i=\alpha\sin i\theta+\beta\cos i\theta$. We have $$ \begin{split} v_{i-1}+v_{i+1} &= \alpha[\sin(i-1)\theta+\sin(i+1)\theta]+\beta[\cos(i-1)\theta+\cos(i+1)\theta] \\&= \alpha[\sin i\theta\cos\theta-\cos i\theta\sin\theta+\sin i\theta\cos\theta+\cos i\theta\sin\theta] \\&+ \beta[\cos i\theta\cos\theta+\sin i\theta\sin\theta+\cos i\theta\cos\theta-\sin i\theta\sin\theta] \\&= 2\alpha\sin i\theta\cos\theta + 2\beta\cos i\theta\cos\theta =2\cos\theta v_i. \end{split} $$ This yields $\mu=2\cos\theta$.

It remains to find $\theta$; for this we use the boundary conditions in (1). With $v_1=\alpha\sin\theta+\beta\cos\theta$, $$ v_2=\mu v_1\quad\Leftrightarrow\quad \alpha\sin 2\theta+\beta\cos 2\theta=2\cos\theta(\alpha\sin\theta+\beta\cos\theta), $$ $$ v_{n-1}=\mu v_n\quad\Leftrightarrow\quad\alpha\sin(n-1)\theta+\beta\cos(n-1)\theta=2\cos\theta(\alpha\sin n\theta+\beta\cos n\theta). $$

This gives a system for $\alpha$ and $\beta$: $$ A\begin{bmatrix}\alpha\\\beta\end{bmatrix}=0, $$ where $$ \begin{split} A&=\begin{bmatrix} \sin 2\theta-2\sin\theta\cos\theta&\cos 2\theta-2\cos^2\theta\\ \sin(n-1)\theta-2\sin n\theta\cos\theta&\cos(n-1)\theta-2\cos n\theta\cos\theta \end{bmatrix} \\&= \begin{bmatrix} 0 & -1 \\ -\sin(n+1)\theta & * \end{bmatrix}. \end{split} $$

The first equation gives $\beta=0$. We look for a nonzero $\alpha$; for this we must have $\sin(n+1)\theta=0$. This gives $$ \theta:=\theta_k=\frac{k\pi}{n+1}, \quad \mu_k=2\cos\frac{k\pi}{n+1}. $$ Hence the eigenvalues of $T$ are $$ a+\sqrt{bc}\,\mu_k=a+2\sqrt{bc}\cos\frac{k\pi}{n+1}, \quad k = 1,\ldots,n. $$
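
One can also confirm numerically that $v_i=\sin\frac{ik\pi}{n+1}$ is an eigenvector of $Q$ for the eigenvalue $2\cos\frac{k\pi}{n+1}$ (a sketch; $n$ and $k$ are arbitrary sample values):

```python
# v_i = sin(i*k*pi/(n+1)) should satisfy Q v = 2*cos(k*pi/(n+1)) v.
import numpy as np

n, k = 7, 3
Q = np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 1), 1)

theta = k * np.pi / (n + 1)
v = np.sin(np.arange(1, n + 1) * theta)
print(np.allclose(Q @ v, 2 * np.cos(theta) * v))   # True
```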


Once you get a recursion formula, there are at least three different ways to solve the problem: using the Z transform, using Chebyshev polynomials, or using the traditional method for solving difference equations (which I include here). This document illustrates them; from it, I will reproduce one of the methods here.

The eigenvalues of the tridiagonal matrix, call it $U$, are the roots of its characteristic polynomial. That is, \begin{eqnarray*} p(\lambda) = \det( U - \lambda I) = 0. \end{eqnarray*}

That is, \begin{eqnarray*} p_n(\lambda) = \det \begin{pmatrix} a-\lambda & c & \\ b & a-\lambda & c \\ & b & a-\lambda & c \\ & & \ddots &\ddots &\ddots \\ & & & \ddots &\ddots &\ddots \\ & & & & b & a-\lambda & c \\ & & & & & b & a-\lambda \end{pmatrix} = 0 \end{eqnarray*} where $n$ is the number of rows (or columns) of $U$. At the moment we assume $n \ge 2$. We expand the determinant through the first row to find

\begin{eqnarray*} p_n(\lambda) = (a- \lambda) p_{n-1} - c \det B \end{eqnarray*} where $B$ is the matrix

\begin{eqnarray*} \begin{pmatrix} b & c & \\ & a-\lambda & c \\ & b & a-\lambda & c \\ & & \ddots &\ddots &\ddots \\ & & & \ddots &\ddots &\ddots \\ & & & & b & a-\lambda & c \\ & & & & & b & a-\lambda \end{pmatrix} \end{eqnarray*} Then we expand the determinant of $B$ through the first column to find \begin{eqnarray*} p_{n}(\lambda) = (a- \lambda) p_{n-1} - bc\, p_{n-2} , \end{eqnarray*} that is, \begin{eqnarray} p_{n}(\lambda) - (a- \lambda) p_{n-1} + bc\, p_{n-2} = 0. (1) \end{eqnarray} This is a difference equation. We want to find an analytic solution for the function $p_n(\lambda)$ which is valid for all $n$. The equation needs two initial conditions, because $p_n$ depends on the two previous instances $p_{n-1}$ and $p_{n-2}$: for $n=1$, $p_1(\lambda)=a-\lambda$, and for $n=2$ \begin{eqnarray} p_2(\lambda)= \det \begin{pmatrix} a - \lambda & c \\ b & a-\lambda \end{pmatrix} = (a-\lambda)^2 - bc. (2) \end{eqnarray} These could serve as the initial conditions; all other values of $p_n(\lambda)$ could then be found from the recursive equation (1). However, to simplify the computations we can state simpler initial conditions. Although there is no matrix with $n=0$ rows, we can define $p_0(\lambda)=1$ and keep the initial condition $p_1(\lambda)=a - \lambda$. Using the recursion (1)

we find that \begin{eqnarray*} p_2(\lambda) = (a - \lambda)^2- bc, \end{eqnarray*} which is exactly equation (2); the values of $p_n(\lambda)$ for higher $n$ do not change either. We then have the initial conditions \begin{eqnarray} p_0(\lambda)=1 \quad , \quad p_1(\lambda)=a -\lambda. (3) \end{eqnarray}

We apply the theory of difference equations to obtain the solution of our problem. Returning to equation (1) and replacing $p_i(\lambda)$ by $t^i$ we find the characteristic equation \begin{eqnarray*} t^n - (a- \lambda) t^{n-1} + bc\, t^{n-2} = t^{n-2}[t^2 - (a - \lambda) t + bc] = 0. \end{eqnarray*} That is, we need to solve the quadratic equation \begin{eqnarray} t^2 - (a - \lambda) t + bc = 0. (4) \end{eqnarray} The two solutions are \begin{eqnarray} t_{\pm} = \frac{(a - \lambda) \pm \sqrt{(a-\lambda)^2 - 4 bc}}{2}. (5) \end{eqnarray} The roots can be equal or distinct, so we consider two cases.

1. No repeated roots: The general solution $p_n(\lambda)$ can be written as \begin{eqnarray} p_n(\lambda)= c_1 t^n_+ + c_2 t^n_-. \label{pnl} \end{eqnarray} From the initial conditions (3) we have \begin{eqnarray} 1 &=& c_1 + c_2 \nonumber \\ \quad \quad \quad (5a) \\ a - \lambda &=& c_1 t_+ + c_2 t_- . \nonumber \end{eqnarray} This system has a unique solution if its determinant does not vanish, that is, if \begin{eqnarray*} \Delta = \det \begin{pmatrix} 1 & 1 \\ t_+ & t_- \\ \end{pmatrix} = t_- - t_+ = -\sqrt{(a-\lambda)^2 - 4 bc} \ne 0. \end{eqnarray*} Observe that this is consistent with the assumption $t_+ \ne t_-$; we then have $(a-\lambda)^2 - 4 bc \ne 0$. To solve system (5a) we note that $c_2=1-c_1$ and \begin{eqnarray*} a-\lambda = c_1 t_+ + (1-c_1) t_- = c_1(t_+ - t_-) + t_-. \end{eqnarray*} That is, \begin{eqnarray} c_1 = \frac{(a-\lambda) - t_-}{t_+ - t_-} \quad , \quad c_2 = 1-c_1 = \frac{t_+ - (a - \lambda)}{t_+-t_-}, \label{ec1} \end{eqnarray} so that the solution for $p_n(\lambda)$ is given by \begin{eqnarray} p_n(\lambda) &=& t^n_+ \frac{(a-\lambda) - t_-}{t_+ - t_-} + t^n_- \frac{t_+ - (a - \lambda)}{t_+-t_-}. \end{eqnarray} Now, since the sum of the two roots in (5) is $t_+ + t_-=a-\lambda$, we can write \begin{eqnarray*} p_n(\lambda) = t_+^n \frac{t_+ + t_- - t_-}{t_+ - t_-} + t_-^n \frac{t_+ - t_+ - t_-}{t_+ - t_-} &=& \frac{t_+^{n+1} - t_-^{n+1}}{t_+ - t_-}. \end{eqnarray*} This is the analytic solution of (1) with initial conditions (3). Since we need the values of $\lambda$ such that $p_n(\lambda)=0$, the eigenvalues $\lambda$ satisfy \begin{eqnarray} t_+^{n+1} = t_-^{n+1}. (6) \end{eqnarray}

Note that passing from $p_n(\lambda)=0$ to the equation $t_+^{n+1} - t_-^{n+1}=0$ introduces extra roots, since
$t_+^{n+1} - t_-^{n+1} = (t_+ - t_-)\, p_n(\lambda)$. The extra roots satisfy the equation $t_+ - t_-=0$.
Since we are assuming $t_+ - t_- \ne 0$, $\lambda$ cannot be a root
of $\Delta = 0$. Observe that $\Delta=0$ implies
$(a-\lambda)^2 - 4 bc =0$. That is, the roots


\begin{eqnarray}
  \lambda = a \pm 2 \sqrt{bc}
  \label{tworaices}
\end{eqnarray}
cannot be included in the set of solutions to the problem. Now,
\begin{eqnarray}
  \frac{t_+}{t_-}  
  = \frac{t_+^2}{t_+ t_-}  
  = \frac{t_+^2}{bc} 
  = \left ( \frac{t_+}{\sqrt{bc}} \right)^2
  (7)
\end{eqnarray}
where we used the product of the two roots of the quadratic equation (4)
$t_+ t_- =bc$.  We find, using (6)
\begin{eqnarray*}
  \left ( \frac{t_+}{\sqrt{bc}} \right)^{2n+2} = 1.
\end{eqnarray*}
The solutions of this equation are the $2n+2$ roots of unity, located uniformly along the unit circle:

\begin{eqnarray*}
  \frac{(t_+)_k}{\sqrt{bc}} = \mathrm{e}^{\frac{2 k \pi \mathrm{i}}{2n+2}} = \mathrm{e}^{\frac{k \pi \mathrm{i}}{n+1}}
  \quad , \quad k=0, 1, \cdots , 2n+1,
\end{eqnarray*}
so that $(t_+)_k=\sqrt{bc} \, \mathrm{e}^{k \pi \mathrm{i}/(n+1)}$.
From $t_+ t_- = bc$ we find $(t_-)_k=\sqrt{bc} \, \mathrm{e}^{-k \pi \mathrm{i}/(n+1)}$, and from $\mathrm{e}^{\mathrm{i}\theta} + \mathrm{e}^{-\mathrm{i}\theta}=2 \cos \theta$,


  \begin{eqnarray*}
    (t_+)_k + (t_-)_k = 2 \sqrt{bc} \cos \frac{k \pi}{n +1} .
  \end{eqnarray*}
  Now, since $(t_+)_k + (t_-)_k=a-\lambda$ we have that


  \begin{eqnarray*}
    a - \lambda_k = 2 \sqrt{bc} \cos \frac{k \pi }{n +1}
  \end{eqnarray*}
  and


  \begin{eqnarray*}
    \lambda_k = a - 2 \sqrt{bc} \cos \frac{k \pi }{n+1}
    \quad , \quad k=0,1, \cdots ,  2n+1.
  \end{eqnarray*}
  However, $p_n(\lambda)=0$ has only $n$ roots and we found $2n+2$ values of $k$.
  Two of them, $k=0$ and $k=n+1$, give $\cos\frac{k\pi}{n+1}=\pm 1$, i.e. exactly the
  excluded roots $\lambda = a \mp 2\sqrt{bc}$ of $\Delta=0$; these were introduced when we
  replaced $p_n(\lambda)=0$ by equation (6). Moreover, $k$ and $2n+2-k$ give the same
  eigenvalue, since $\cos\frac{(2n+2-k)\pi}{n+1}=\cos\left(2\pi-\frac{k\pi}{n+1}\right)=\cos\frac{k\pi}{n+1}$
  (they simply exchange the roles of $t_+$ and $t_-$). The remaining values of $k$
  therefore give the $n$ distinct solutions

  \begin{eqnarray*}
    \lambda_k = a - 2 \sqrt{bc} \cos \frac{k \pi }{n+1}
    \quad , \quad k=1,2, \cdots ,  n.
  \end{eqnarray*}

2. Repeated roots: We have $t_+=t_-=(a-\lambda)/2$; that is, the quadratic equation (4) is a perfect square. The recursive equation (1) then has a solution of the form

  \begin{eqnarray*}
    p_n(\lambda) = c_1 t_+^n + c_2 n t_+^n.
  \end{eqnarray*}
  Again, to find $c_1$ and $c_2$ we apply the initial conditions, which gives a
  two-by-two linear system of equations:
  

  \begin{eqnarray*}
    p_0(\lambda) &=& 1 = c_1 \\
    p_1(\lambda) &=&  a-\lambda = (c_1 + c_2 )t_+ 
  \end{eqnarray*}
   and so


  \begin{eqnarray*}
    c_2 = \frac{a-\lambda}{t_+} -1 = 1.
  \end{eqnarray*}
  Then


  \begin{eqnarray*}
    p_n(\lambda) =  t_+^n +  n t_+^n
    = \left ( \frac{a-\lambda}{2}  \right )^n(1 + n).
  \end{eqnarray*}
  From here, $p_n(\lambda)=0$ forces $\lambda=a$, so all the eigenvalues are equal.

  Recall that $\Delta=0$ means $\lambda = a \pm 2 \sqrt{bc}$; together with $\lambda=a$
  this forces $bc=0$, i.e. $b=0$ or $c=0$. That is, the original matrix
  is upper triangular ($b=0$) or lower triangular ($c=0$), with all its eigenvalues
  sitting on the diagonal.
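
  And a tiny check of this degenerate case (sample values; here $b=0$, so the matrix is upper triangular):

```python
# With b = 0 the matrix is upper triangular and every eigenvalue equals a.
import numpy as np

a, b, c, n = 2.0, 0.0, 1.1, 6
T = np.diag(np.full(n, a)) + np.diag(np.full(n - 1, b), -1) + np.diag(np.full(n - 1, c), 1)
print(np.allclose(np.linalg.eigvals(T), a))   # True: all eigenvalues are a
```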

One solution of this problem was given by Yueh [1], where all parameters are complex numbers. It does not use polynomials, but instead the ring of formal sequences. Actually, Yueh's problem is more general, because he finds the eigenvalues and eigenvectors of the matrix

\begin{equation} A_n = \begin{pmatrix} -\alpha+b & c & 0 & 0 & \dots & 0 & 0 \\ a & b & c & 0 & \dots & 0 & 0\\ 0 & a & b & c & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \dots & a & -\beta+b \end{pmatrix}_{n\times n} \end{equation}

where $n$ is the matrix dimension and all the parameters in the matrix are complex numbers. You can even generalize to arbitrary positions of $\alpha$ and $\beta$; in [2] there is an example where these two parameters are placed in symmetric positions.

The equivalent system of equations to solve is

\begin{equation} \begin{split} u_0 &= 0,\\ au_0 + bu_1 + cu_2 &= (\lambda + \alpha) u_1,\\ \vdots \\ au_{k-1} + bu_k + cu_{k+1} &= \lambda u_k,\\ \vdots \\ au_{n-1} + bu_n + cu_{n+1} &= (\lambda + \beta) u_n, \\ u_{n+1} &= 0, \end{split} \end{equation}

where I have introduced the boundary conditions $u_0=u_{n+1}=0$. If I further define $u_j=0$ for $j>n$, then the system of equations is equivalent to the sequence problem

\begin{equation} c\lbrace u_{k+2} \rbrace_{k=0}^\infty + b\lbrace u_{k+1} \rbrace_{k=0}^\infty + a\lbrace u_{k} \rbrace_{k=0}^\infty = \lambda\lbrace u_{k+1} \rbrace_{k=0}^\infty + \lbrace f_{k+1} \rbrace_{k=0}^\infty, \;(1) \end{equation}

where the sequence $f$ is defined by $f_1 = \alpha u_1$, $f_n=\beta u_n$, and $f_i=0$ otherwise. Once everything is written in terms of sequences, one can use the sum of sequences (component-wise) and the multiplication of sequences (see below) to find the solution of the problem.

The multiplication of sequences is the so-called discrete convolution: if $x$ and $y$ are two sequences, their product $x\star y$ is the sequence $z$ with

\begin{equation} z_k = \sum_{i=0}^k x_{k-i}y_i. \end{equation}
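
For concreteness, a minimal sketch of this product on truncated (finite) sequences; the helper name `conv` is mine, purely for illustration:

```python
# Discrete convolution of two truncated sequences: z_k = sum_i x_{k-i} y_i.
# These are just the Cauchy-product coefficients of the corresponding polynomials.
def conv(x, y):
    """Convolution product of two sequences given as lists of their first terms."""
    m = min(len(x), len(y))
    return [sum(x[k - i] * y[i] for i in range(k + 1)) for k in range(m)]

# (1 + 2t)(3 + t) = 3 + 7t + 2t^2, so the product sequence starts 3, 7, 2, 0.
print(conv([1, 2, 0, 0], [3, 1, 0, 0]))   # [3, 7, 2, 0]
```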

(You can also see this product as the Cauchy product rule for the coefficients of two polynomials.) Together with these two operations one further needs the shift sequence $S = \lbrace0,1,0,\dots \rbrace$. The process is then straightforward: multiply (1) by $S^2$ and you get the sequence identity

\begin{equation} u = \frac{(f + c \overline{u_1})S}{aS^2 + (b-\lambda)S + \overline{c}}, \end{equation}

where a scalar $c$ is represented as the sequence $\overline{c} = \lbrace c,0,\dots \rbrace$. The explicit solution is then obtained by straightforward algebra; see [1].

As a final note, one has to be careful that the sequence $(aS^2 + (b-\lambda)S + \overline{c})^{-1}$ exists. This is assured by Theorem 24 p. 49 in [3].
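
To see concretely what such an inverse looks like, here is a sketch that computes a truncated inverse term by term; the helper names and sample values are mine, and the computation only ever divides by the leading term of the sequence, which is why that term being nonzero matters:

```python
# Truncated inverse of a sequence z with z_0 != 0: the sequence w with z * w = {1, 0, 0, ...}.
def conv(x, y):
    m = min(len(x), len(y))
    return [sum(x[k - i] * y[i] for i in range(k + 1)) for k in range(m)]

def inverse(z, terms):
    w = [1.0 / z[0]]                          # w_0 = 1/z_0, so z_0 must be nonzero
    for k in range(1, terms):
        s = sum(z[i] * w[k - i] for i in range(1, min(k, len(z) - 1) + 1))
        w.append(-s / z[0])                   # solves (z * w)_k = 0 for w_k
    return w

# Example: the denominator sequence a*S^2 + (b - lambda)*S + c-bar = {c, b - lambda, a, 0, ...}.
a, b, c, lam = 0.9, 2.0, 1.1, 0.3
z = [c, b - lam, a, 0.0, 0.0, 0.0]
w = inverse(z, len(z))
print(conv(z, w))                             # ~ [1, 0, 0, 0, 0, 0]
```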

In case the links go down, here is the bibliographic data of the references:

[1] Wen-Chyuan Yueh, Eigenvalues of Several Tridiagonal Matrices, Applied Mathematics E-Notes, 5 (2005), 66–74.

[2] PT-symmetric tight-binding chain with gain and loss: A completely solvable model, https://arxiv.org/abs/1906.10116.

[3] Sui Sun Cheng, Partial Difference Equations, 1st ed., Taylor and Francis (2003).