If $a_{1}=1$ and $a_{n+1}=1+\frac{n}{a_{n}}$, then $a_{n}=\sqrt{n}+\frac{1}{2}-\frac{1}{8\sqrt{n}}+o\left(\frac{1}{\sqrt{n}}\right)$
Let the sequence $\{a_{n}\}$ be such that $$a_{1}=1,\qquad a_{n+1}=1+\dfrac{n}{a_{n}}.$$
Show that $$a_{n}=\sqrt{n}+\dfrac{1}{2}-\dfrac{1}{8\sqrt{n}}+o\left(\dfrac{1}{\sqrt{n}}\right).$$
I will present two approaches to this question. The first is more elementary and easier, but it also yields a weaker result.
First Method
Consider the auxiliary sequence $b_n =\sqrt{n+\frac{1}{4}}+\frac{1}{2}$. This is the unique positive solution of the equation $P_n(X)=0$, where $P_n(X)=X^2-X-n$.
${\bf Lemma~1.}$ For every $n\geq1$, we have $b_{n+1}\geq 1+\dfrac{n}{b_{n-1}}$.
Proof. Indeed, let $u=1+\frac{n}{b_{n-1}}$. Then, using $b_{n-1}^2=b_{n-1}+n-1$, we get $$u^2-u=\frac{(n+b_{n-1})n}{b_{n-1}^2}=\frac{(n+b_{n-1})n}{b_{n-1}+n-1}= n+\frac{n}{b_{n-1}+n-1}\leq n+1,$$ since $b_{n-1}\geq 1$. This shows that $P_{n+1}(u)\leq0$, and consequently $u\leq b_{n+1}$, as desired.$\qquad \square$
${\bf Lemma~2.}$ For every $n\geq1$, we have $b_{n-1}\leq a_n \leq b_n$.
Proof. This is now an easy induction. The claim clearly holds for $n=1$, and if it holds for some $n$, then $$ b_n=1+\frac{n}{b_n}\leq 1+\frac{n}{a_n}\leq 1+\frac{n}{b_{n-1}}\leq b_{n+1}, $$ where the last inequality is Lemma 1. Since $1+\frac{n}{a_n}=a_{n+1}$, the result follows.$\qquad \square$
It follows that $$ \sqrt{n-\frac{3}{4}}-\sqrt{n}\leq a_n-\sqrt{n}-\frac{1}{2}\leq \sqrt{n+\frac{1}{4}}-\sqrt{n}. $$ Thus, since the left-hand gap is the larger of the two in absolute value, $$\sqrt{n}\left\vert a_n-\sqrt{n}-\frac{1}{2}\right\vert\leq \frac{3}{4\left(1+\sqrt{1-3/(4n)}\right)}\leq \frac{1}{2}.$$ So we have proved that, for every $n\geq 1$, $$ a_n= \sqrt{n}+\frac{1}{2}+{\cal O} \left(\frac{1}{\sqrt{n}}\right). $$ This is not the desired expansion, but it has the merit of being easy to prove.
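For readers who like to see such bounds in action, here is a small numerical sketch of mine (not part of the proof) that checks Lemma 2 and the final bound for the first ten thousand indices; the helper `b` is just the auxiliary sequence $b_n$ defined above.

```python
# Small numerical sketch (not part of the proof): check Lemma 2 and the final
# bound sqrt(n) * |a_n - sqrt(n) - 1/2| <= 1/2 for the first few thousand indices.
import math

def b(n):
    # b_n = sqrt(n + 1/4) + 1/2, the positive root of X^2 - X - n
    return math.sqrt(n + 0.25) + 0.5

a = 1.0                                   # a_1
for n in range(1, 10001):
    assert b(n - 1) - 1e-9 <= a <= b(n) + 1e-9                       # Lemma 2
    assert math.sqrt(n) * abs(a - math.sqrt(n) - 0.5) <= 0.5 + 1e-9  # final bound
    a = 1 + n / a                         # a_{n+1} = 1 + n/a_n
print("bounds verified for n = 1, ..., 10000")
```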
Second Method
The general reference for this part is D. E. Knuth, "The Art of Computer Programming", Vol. 3, second edition, pp. 63--65.
Let $I(n)$ be the number of involutions in the symmetric group $S_n$ (i.e. of $\sigma\in S_n$ such that $\sigma^2=\mathrm{id}$). It is well known that $I(n)$ can be calculated inductively by $$ I(0)=I(1)=1,\qquad I(n+1)=I(n)+nI(n-1). $$ This shows that our sequence $\{a_n\}$ is related to these numbers by the formula $$a_n=\frac{I(n)}{I(n-1)}.$$
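As a quick sanity check of this identity (not needed for the argument), here is a small sketch of mine that compares the ratios $I(n)/I(n-1)$ with the recurrence for $a_n$ in exact rational arithmetic:

```python
# Sanity check (sketch): the ratios I(n)/I(n-1) satisfy a_1 = 1 and a_{n+1} = 1 + n/a_n,
# using exact rational arithmetic (Python's standard fractions module).
from fractions import Fraction

N = 50
I = [1, 1]                          # I(0) = I(1) = 1
for n in range(1, N + 1):
    I.append(I[n] + n * I[n - 1])   # I(n+1) = I(n) + n I(n-1)

a = Fraction(1)                     # a_1 = 1
for n in range(1, N + 1):
    assert a == Fraction(I[n], I[n - 1])
    a = 1 + n / a                   # a_{n+1} = 1 + n/a_n (stays an exact Fraction)
print("a_n = I(n)/I(n-1) checked for n = 1, ...,", N)
```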
So we can use what is known about these numbers, in particular the following asymptotic expansion from Knuth's book: $$ I(n)=\frac{1}{\sqrt{2}} n^{n/2}e^{-n/2+\sqrt{n}-1/4}\left(1+\frac{7}{24\sqrt{n}}+{\cal O}\left(\frac{1}{n^{3/4}}\right)\right). $$ Now it is a "simple" matter to conclude from this that $a_n$ has the desired asymptotic expansion.
Edit: As mercio noted, the error term $\mathcal{O}(n^{-3/4})$ effectively destroys the asymptotic expansion. But in fact we have $$ I(n)=\frac{1}{\sqrt{2}} n^{n/2}e^{-n/2+\sqrt{n}-1/4}\left(1+\frac{7}{24\sqrt{n}}+{\cal O}\left(\frac{1}{n}\right)\right). $$ This is shown by Wimp and Zeilberger in their paper "Resurrecting the Asymptotics of Linear Recurrences", which can be found here.
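Purely as an illustration (my own numerical sketch, not part of the argument), one can compare the exact values of $I(n)$ with this expansion; the helper `knuth_approx` below is just the right-hand side above, truncated after the $7/(24\sqrt n)$ correction.

```python
# Numerical sketch: compare the exact involution numbers I(n) with the
# asymptotic formula above, keeping the 7/(24 sqrt(n)) correction.
import math

N = 100
I = [1, 1]
for n in range(1, N + 1):
    I.append(I[n] + n * I[n - 1])           # exact values of I(n)

def knuth_approx(n):
    # (1/sqrt(2)) n^(n/2) e^(-n/2 + sqrt(n) - 1/4) (1 + 7/(24 sqrt(n)))
    return (n ** (n / 2) / math.sqrt(2)) * math.exp(-n / 2 + math.sqrt(n) - 0.25) \
           * (1 + 7 / (24 * math.sqrt(n)))

for n in (10, 25, 50, 100):
    print(n, I[n] / knuth_approx(n))        # these ratios should approach 1
```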
It will be simpler in what follows to use two steps at once, so we can start by computing the relationship $a_{n+2} = \frac{a_n(n+2) + n}{a_n + n}$, and letting $b_n = a_n / \sqrt n$ we get $b_{n+2} = \frac {b_n (1 + 2/n) + 1/\sqrt n}{b_n\sqrt{1/n+2/n^2} + \sqrt{1+2/n}}$.
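Before going on, here is a quick numerical spot check of mine (a sketch, not part of the argument) that the two-step relation and the rewritten recurrence for $b_n$ agree; it simply plugs random values into both sides.

```python
# Quick numerical spot check (sketch) of the two-step relation and of the
# rewritten recurrence for b_n = a_n / sqrt(n).
import math
import random

for _ in range(5):
    n = random.randint(2, 10**6)
    a = random.uniform(0.5, 2.0) * math.sqrt(n)      # an arbitrary positive value for a_n

    # a_{n+2} obtained by applying the recurrence twice vs. the closed two-step form
    a1 = 1 + n / a
    a2 = 1 + (n + 1) / a1
    assert math.isclose(a2, (a * (n + 2) + n) / (a + n), rel_tol=1e-12)

    # the same relation rewritten for b_n = a_n / sqrt(n)
    b = a / math.sqrt(n)
    lhs = a2 / math.sqrt(n + 2)                      # b_{n+2}
    rhs = (b * (1 + 2 / n) + 1 / math.sqrt(n)) / (
        b * math.sqrt(1 / n + 2 / n**2) + math.sqrt(1 + 2 / n))
    assert math.isclose(lhs, rhs, rel_tol=1e-9)

print("two-step relation and b-recurrence agree on random inputs")
```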
Let's denote $\begin{bmatrix} a & b \\ c & d \end{bmatrix} x = \frac {ax+b}{cx+d}$.
If we have a relation $b_{n+2} = \begin{bmatrix} 1+A & B \\ C & 1+D \end{bmatrix} b_n$ where each of $A,B,C,D$ is $O(1/\sqrt n)$, and we then look at some $c_n = (b_n-k)\sqrt n$, we obtain for the sequence $c_n$ the same kind of relation (so with the dominant term still being $I_2$), with
$$A' = (1+A-kC)\sqrt{1+2/n}-1, \\ B' = (B-kD+kA-k^2C)\sqrt{n+2}, \\ C' = C/\sqrt n, \\ D' = D+kC.$$
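If you do not want to redo this computation by hand, the update rules can be checked symbolically. The sketch below assumes `sympy`, and it introduces symbols `s` and `t` standing for $\sqrt n$ and $\sqrt{n+2}$, so the identity $\sqrt{1+2/n}=\sqrt{n+2}/\sqrt n$ (valid for $n>0$) is applied once by hand and everything else is a plain rational identity.

```python
# Symbolic check (sketch, assuming sympy) of the update rules for A', B', C', D'.
# s and t stand for sqrt(n) and sqrt(n+2); sqrt(1+2/n) is written as t/s by hand.
import sympy as sp

A, B, C, D, k, c, s, t = sp.symbols("A B C D k c s t")

b = k + c / s                                   # b_n = k + c_n / sqrt(n)
b2 = ((1 + A) * b + B) / (C * b + 1 + D)        # b_{n+2} via the Mobius action
c2 = (b2 - k) * t                               # c_{n+2} = (b_{n+2} - k) sqrt(n+2)

Ap = (1 + A - k * C) * (t / s) - 1              # claimed A'
Bp = (B - k * D + k * A - k**2 * C) * t         # claimed B'
Cp = C / s                                      # claimed C'
Dp = D + k * C                                  # claimed D'
claimed = ((1 + Ap) * c + Bp) / (Cp * c + 1 + Dp)

assert sp.simplify(c2 - claimed) == 0
print("update rules verified")
```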
Right now, $C$ is still as big as everyone else, so to make $B'$ not an order of magnitude bigger than the rest, we have to solve a degree $2$ equation in $k$, which gives $k= \pm 1$.
If we define $c_n = (b_n-1)\sqrt n$, we get $A' = -n^{-1/2} + O(n^{-1}), D' = +n^{-1/2} + O(n^{-1}), C' = O(n^{-1}), B' = O(n^{-1/2})$.
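These orders can be recovered mechanically by expanding everything in $x=n^{-1/2}$. In the sketch below (again assuming `sympy`), I normalize the relation coming from the $b_n$ recurrence by dividing numerator and denominator by $\sqrt{1+2/n}$ so that $D=0$; this normalization is my own choice, since the matrix of a Möbius transformation is only defined up to scaling.

```python
# Series check (sketch, assuming sympy): with k = 1 the new coefficients have
# the stated orders. I normalize the b_n relation so that D = 0 (my choice),
# and expand in x = 1/sqrt(n), so sqrt(n) = 1/x and sqrt(n+2) = sqrt(1+2x^2)/x.
import sympy as sp

x = sp.symbols("x", positive=True)              # x = 1/sqrt(n)
root = sp.sqrt(1 + 2 * x**2)                    # sqrt(1 + 2/n)

A = root - 1                                    # coefficients of the b_n relation
B = x / root                                    # after the D = 0 normalization
C = x
D = sp.Integer(0)
k = 1                                           # the choice c_n = (b_n - 1) sqrt(n)

Ap = (1 + A - k * C) * root - 1
Bp = (B - k * D + k * A - k**2 * C) * (root / x)   # sqrt(n+2) = root / x
Cp = C * x                                      # C / sqrt(n)
Dp = D + k * C

for name, e in [("A'", Ap), ("B'", Bp), ("C'", Cp), ("D'", Dp)]:
    print(name, sp.series(e, x, 0, 3))          # expansions in powers of n^{-1/2}
```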
From now on, $C'$ will continue to plummet, the dominant terms of $A'$ and $D'$ will not be able to change, and so to make the next $B$ into an $O(n^{-1/2})$ term we will have to pick $k =$ the constant term of $B/(D-A)$.
What this tells us is that there are exactly two asymptotic expansions for $a_n$ whose error terms at each level obey a recurrence relation of the form $r_{n+2} = [I_2 + O(n^{-1/2})] r_n$.
Since all of this can be done algebraically, those two power series are $\sqrt n$ times the two solutions in $\Bbb Q[[n^{-1/2}]]$ to the original equation on $a_n/\sqrt n$ (and you go from one to the other by switching the sign of $\sqrt n$).
Now we can look at $u_n = ((((a_n/\sqrt n - 1)\sqrt n - \frac 12)\sqrt n + \frac 18)\sqrt n + \frac 18)\sqrt n$. It satisfies $u_{n+2} = \begin{bmatrix}1 - n^{-1/2} + O(n^{-1}) & \frac 7 {64}n^{-1} + O(n^{-3/2}) \\ O(n^{-5/2}) & 1 + n^{-1/2} + O(n^{-1}) \end{bmatrix} u_n$ (and each constant in those $O$'s can be effectively computed).
This means that if $u_n$ is close enough to $0$, then $u_{n+2} = (1-2n^{-1/2})u_n + O(n^{-1})$. So, for each bound $\delta$, there should be an $n_0$ such that $\forall n > n_0,\ |u_n| < \delta \implies |u_{n+2}| < \delta$.
If you effectively compute such an $n_0$ for, say, $\delta = 1$, this allows you to provably check that the sequence $u_n$ stays bounded, by finding a single term $u_n$ with $n$ large enough that lies below $\delta$. If the asymptotic expansion is valid, a finite computation will be able to prove it.
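As an illustration of this strategy (a numerical sketch of mine, not a proof), one can iterate the recurrence in floating point and watch $u_n$; if the expansion is valid, the printed values and the running maximum should stay bounded.

```python
# Numerical sketch: iterate a_{n+1} = 1 + n/a_n in floating point and track
# u_n = ((((a_n/sqrt(n) - 1)sqrt(n) - 1/2)sqrt(n) + 1/8)sqrt(n) + 1/8)sqrt(n).
# Double precision should be adequate in this range of n.
import math

def u(n, a):
    r = math.sqrt(n)
    return ((((a / r - 1) * r - 0.5) * r + 0.125) * r + 0.125) * r

a = 1.0                      # a_1
worst = 0.0                  # max |u_n| seen for n >= 100
for n in range(1, 100001):
    if n >= 100:
        worst = max(worst, abs(u(n, a)))
    if n in (100, 1000, 10000, 100000):
        print(n, u(n, a))
    a = 1 + n / a            # a_{n+1}

print("max |u_n| for 100 <= n <= 100000:", worst)
```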
I would love to see a proof that the asymptotic expansion is valid for every starting value of the sequence (except for the one value whose orbit stays away from the asymptotic formula forever and instead obeys the other asymptotic expansion).