Does there exist a sequence $\{a_n\}_{n\ge1}$ of positive reals with $a_n < a_{n+1}+a_{n^2}$ for all $n$ such that $\sum_{n=1}^{\infty}a_n$ converges?

Okay, it is indeed impossible. First, fix an $n>1$ such that $a_n$ is strictly positive. The basic idea of the proof is that if $a_n<a_{n+1}+a_{n^2}$, then we can repeat the step to get $a_n<a_{n+2}+a_{(n+1)^2}+a_{n^2+1}+a_{n^4}$, and combining the two gives $2a_n<a_{n+1}+a_{n^2}+a_{n+2}+a_{(n+1)^2}+a_{n^2+1}+a_{n^4}$. As long as we can ensure that the indices remain distinct, we can bound the infinite sum from below by the right-hand side, and thus by an arbitrarily large multiple of $a_n$. Since $a_n$ is strictly positive, this immediately implies divergence.
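For concreteness, take $n=3$ (a value for which no indices happen to collide yet): the two expansions combine to $2a_3<a_4+a_9+a_5+a_{16}+a_{10}+a_{81}$, with all six indices distinct. The construction below makes this bookkeeping precise in general.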

Define two functions on the positive integers, $S: m\mapsto m^2$ and $A: m\mapsto m+1$ ($S$ stands for squaring and $A$ for adding). Then recursively define the following sets: $I_0=\{n\}$ and $I_{i+1}=S(I_i)\cup A(I_i)$, i.e. $I_k$ is the set of all values obtained by applying $k$-fold compositions of $A$s and $S$s to $n$. It follows immediately from $n>1$ that the smallest element of $I_k$ is $n+k$ (squaring any element, which is at least $n\ge2$, exceeds adding $1$ to it).

I now claim that $I_k$ has $2^k$ elements. To prove it, we use induction. It's certainly true for $I_0$. If it's true for $I_{k}$, then by injectivity of $S$ and $A$, both $S(I_{k})$ and $A(I_{k})$ have $2^{k}$ elements. Suppose they have an element in common. Then we can write that element as the square of a number that is at least $n+k$, and also either as $n+k+1$ or as $u^2+i$ for some integers $u$ and $i$ with $1\le i<k+1$ and $u\in I_{k-i}$ (every element of $A(I_{k})$ can be written as either $A^{k+1}(n)$ or $A^iS(x)$ for some $1\le i<k+1$ and $x\in I_{k-i}$). The former case is impossible, since $(n+k)^2>n+k+1$ when $n+k\geq2$. In the latter case, $i$ is the difference of two squares, the larger of which is at least $(n+k)^2$, so $i$ is at least $(n+k)^2-(n+k-1)^2=2(n+k)-1>k+1$, which is a contradiction. Thus $S(I_{k})$ and $A(I_{k})$ are disjoint, $I_{k+1}$ has $2^{k+1}$ elements, and the induction is complete.
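For readers who want to see the construction concretely, here is a small Python sketch (purely illustrative, not part of the proof) that builds the sets $I_k$ for the sample value $n=3$ and checks the two facts used above, namely $|I_k|=2^k$ and $\min I_k=n+k$:

```python
# Illustrative check, not part of the proof: build I_1, ..., I_7 for n = 3
# (an arbitrary sample value) and verify |I_k| = 2^k and min(I_k) = n + k.
def next_level(indices):
    """Apply A: m -> m + 1 and S: m -> m^2 to every index and take the union."""
    return {m + 1 for m in indices} | {m * m for m in indices}

n = 3
level = {n}                        # I_0
for k in range(1, 8):
    level = next_level(level)      # I_k
    assert len(level) == 2 ** k    # the 2^k-element claim
    assert min(level) == n + k     # the smallest-element claim
print("verified |I_k| = 2^k and min I_k = n + k for k = 1, ..., 7")
```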

We now obtain the inequalities $a_n=\sum_{i\in I_0}a_i<\sum_{i\in I_1}a_i<\sum_{i\in I_2}a_i<\ldots$ (apply the hypothesis to every term of $\sum_{i\in I_k}a_i$; since $|I_{k+1}|=2|I_k|$, no index on the right-hand side is counted twice). The only problem is that $I_i$ and $I_j$ may fail to be disjoint. So define a new sequence of finite index sets $J_k$ by $J_0=I_{0}$, and if $J_k$ has largest element $r$, set $J_{k+1}=I_r$. The smallest element of $I_r$ is $n+r>r$, so $s<t$ whenever $s\in J_u$ and $t\in J_v$ with $u<v$; in particular the $J_k$ are pairwise disjoint, and they inherit the inequalities $a_n=\sum_{i\in J_0}a_i<\sum_{i\in J_1}a_i<\sum_{i\in J_2}a_i<\ldots$ from the chain above. Since the terms are positive and the $J_k$ are pairwise disjoint, $\sum_{i=1}^{\infty}a_i\geq\sum_{k=0}^{K}\sum_{i\in J_k}a_i\geq(K+1)a_n$ for every $K$, so the sum diverges.
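To watch the chain of sums numerically, one can plug in a concrete positive test sequence; $a_i=1/i$ below is only an illustration (it satisfies the hypothesis, since $1/i-1/(i+1)=1/(i(i+1))<1/i^2$, and its series of course diverges). The sets $J_k$ themselves grow far too quickly to enumerate, already $J_2=I_{6561}$ when $n=3$, so this sketch only tracks the sums over the $I_k$:

```python
# Illustrative only: with the test sequence a_i = 1/i (which satisfies
# a_i < a_{i+1} + a_{i^2}), the sums over I_0, I_1, ... increase strictly
# and each is at least a_n, exactly as the argument above predicts.
from fractions import Fraction

def next_level(indices):
    return {m + 1 for m in indices} | {m * m for m in indices}

n = 3
level, sums = {n}, []
for k in range(7):
    sums.append(sum(Fraction(1, i) for i in level))   # sum of a_i over I_k
    level = next_level(level)

assert all(s < t for s, t in zip(sums, sums[1:]))     # strictly increasing chain
assert all(s >= Fraction(1, n) for s in sums)         # each sum is at least a_n
print([float(s) for s in sums])
```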


Suppose such a positive sequence $\{a_n\}_{n \in \mathbb{N}}$ exists.

Then repeatedly applying the property $a_n < a_{n+1}+a_{n^2}$ to each term, starting from $a_2$, gives $a_2 < a_3 + a_4 < a_4 + a_5 + a_9 +a_{16} < \ldots $ and so on.

Let $\{S_k\}_{k\ge 1}$ be the resulting sequence of sets of subscripts formed by these repeated applications, so $S_1=\{2\}$, $S_2=\{3,4\}$, $S_3=\{4,5,9,16\}$, and so on.

The idea is to show that no $S_k$ contains a repeated subscript. Since the smallest subscript in $S_k$ is $k+1$ (at each step the smallest subscript increases by exactly $1$), this would mean every tail of the series is greater than $a_2 > 0$, so $\sum\limits_{n=1}^{\infty} a_n$ must diverge.
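A quick empirical check of this claim (not a substitute for the argument below), in Python: starting from $S_1=\{2\}$ and expanding every subscript $m$ into $m+1$ and $m^2$, no subscript repeats inside any of the first few $S_k$, and the smallest subscript in $S_k$ is $k+1$. The depth bound $12$ is arbitrary.

```python
# Empirical check only: expand S_1 = [2] repeatedly, keeping the subscripts
# as a multiset, and confirm no subscript repeats inside S_k and that
# min(S_k) = k + 1 for k = 1, ..., 12.
from collections import Counter

def expand(indices):
    """Replace every subscript m by the pair m + 1, m^2."""
    out = []
    for m in indices:
        out.extend((m + 1, m * m))
    return out

S = [2]                                    # S_1
for k in range(1, 13):
    assert max(Counter(S).values()) == 1   # no repeated subscript in S_k
    assert min(S) == k + 1                 # smallest subscript in S_k is k + 1
    S = expand(S)                          # S_{k+1}
print("no repeated subscripts in S_1, ..., S_12")
```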

Suppose, for contradiction, that some $S_l$ contains a repetition, and choose the smallest such $l$. The repeated element must be of the form $m^2$ for some $m \in \mathbb{N}$, arising both from $m \in S_{l-1}$ (by squaring) and from $m^2-1 \in S_{l-1}$ (by adding $1$): the two maps are each injective, and $S_{l-1}$ itself has no repetitions by minimality of $l$.

But then, since none of $m^2-1,m^2-2,\ldots,m^2-2m+2$ is a perfect square (each lies strictly between $(m-1)^2$ and $m^2$), each of these subscripts can only have arisen by adding $1$ to the previous one, so $m^2-1 \in S_{l-1} \implies m^2-2 \in S_{l-2} \implies \ldots \implies m^2-2m+2 \in S_{l-2m +2}$. In particular $S_{l-2m+2}$ must be defined, so $l-2m+2 \ge 1$ and thus $l > 2m-2$.

Also, $m \in S_{l-1} \implies m \ge l$, because the smallest subscript in $S_{l-1}$ is $l$.

Combining the two, $m \ge l > 2m-2$, so $m < 2$, which is absurd since every subscript is at least $2$.

(Q.E.D.)
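As a final empirical sanity check on the mechanism above (again not a proof), one can confirm for small $l$ that a subscript $m$ and the subscript $m^2-1$ never sit in the same $S_{l-1}$, which is exactly the collision the contradiction rules out:

```python
# Empirical check only: for l = 2, ..., 12, no m and m^2 - 1 lie together in
# S_{l-1}, i.e. the collision ruled out by the contradiction never occurs.
def expand(indices):
    return [x for m in indices for x in (m + 1, m * m)]

S_prev = [2]                                  # S_1
for l in range(2, 13):
    prev = set(S_prev)
    assert all(m * m - 1 not in prev for m in prev)
    S_prev = expand(S_prev)                   # S_l, ready for the next round
print("no forbidden collision feeding S_2, ..., S_12")
```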